Docker Compose for Local Development
While a Dockerfile defines a single image, most real-world applications are composed of multiple interconnected services (e.g., a web application, a database, a caching server, a message queue). Managing the lifecycle of each service with separate `docker run` commands, volumes, and networks is complex, error-prone, and hard to reproduce.
Docker Compose is the tool designed to solve this: it defines and runs multi-container Docker applications.
With Compose, you use a single YAML file (by default, `docker-compose.yml`) to configure your entire application stack. Then, with a single command, you create and start all the services from your configuration. Its primary use case is local development and testing, allowing any developer to spin up a complete, production-like environment with one command.
The docker-compose.yml File
This file is the heart of Docker Compose. It declaratively defines the "desired state" of your application stack.
Here is the basic structure of a docker-compose.yml file:
```yaml
version: "3.8" # Specifies the Compose file format version

services:
  # This is where each container (service) is defined
  web:
    # ... configuration for the web service
  db:
    # ... configuration for the database service

volumes:
  # This is where you pre-define named volumes
  db-data:

networks:
  # This is where you pre-define custom networks
  app-network:
```

Top-Level Keys Explained
- `services` (Required): This block contains the definition for every container you want to run. Each key under `services` (e.g., `web`, `db`) is a new service.
- `volumes` (Optional): This top-level key allows you to create named volumes. These are the preferred mechanism for persisting data (as explained in File 05). By defining a volume here, its lifecycle is managed by Compose.
- `networks` (Optional): This top-level key allows you to create custom bridge networks. When you define a custom network, all services attached to it can discover each other by their service name (e.g., the `web` service can connect to the `db` service using the hostname `db`). Compose automatically provides this service discovery (DNS).
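As a minimal sketch of that DNS-based discovery (service, image, and network names here are illustrative), two services sharing a custom network can reach each other by service name, with no IP addresses involved:

```yaml
services:
  web:
    image: my-web-app:latest  # illustrative image name
    environment:
      # 'db' resolves to the db container because both services
      # are attached to the same custom network
      - "DATABASE_URL=postgresql://user:pass@db:5432/mydb"
    networks:
      - app-network
  db:
    image: postgres:14-alpine
    networks:
      - app-network

networks:
  app-network:
```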
Dissecting a Service Definition
Inside the services block, each service is configured with a set of keys.
```yaml
services:
  web:
    # Option 1: Build an image from a Dockerfile
    build: .
    # (Context is the current directory, looks for 'Dockerfile')
    # Or, more specifically:
    # build:
    #   context: ./webapp
    #   dockerfile: Dockerfile.dev

    # Option 2: Use a pre-built image
    # image: my-username/my-web-app:latest

    ports:
      # Map port 8000 on the host to port 80 in the container
      - "8000:80"

    volumes:
      # 1. Mounts a named volume (defined at the top level)
      - "app-logs:/var/log/app"
      # 2. Mounts a host path (bind mount) - good for live-reloading
      - "./web-source:/usr/src/app"

    environment:
      # Pass environment variables to the container
      - "DATABASE_URL=postgresql://user:pass@db:5432/mydb"
      - "DEBUG=True"
    # Or, load variables from a file:
    # env_file:
    #   - ./.env

    networks:
      # Attach this service to a custom network
      - "app-network"

    depends_on:
      # Wait for the 'db' service's container to *start*
      # before starting this 'web' service's container
      - "db"
```

Key Service Properties:
- `build` vs. `image`: You must specify one of the two. `image` pulls a pre-built image from a registry; `build` builds an image from a local Dockerfile.
- `ports`: Maps host ports to container ports (`"HOST:CONTAINER"`).
- `volumes`: This service-level key uses volumes in two ways.
  - Named Volume (`db-data:/var/lib/postgresql/data`): Maps the named volume `db-data` (defined at the top level) into the container. This is the best practice for persisting data.
  - Bind Mount (`./:/app`): Maps a directory from your host machine (`./`) into the container (`/app`). This is the key to local development, as changes you make to your source code on the host are immediately reflected inside the container, enabling live-reloading.
- `environment`: How you pass configuration (like secrets or database URLs) to your application.
- `depends_on`: Controls startup order. Important: this only waits for the `db` container to *start*. It does not wait for the PostgreSQL application inside the container to be ready to accept connections. More robust solutions (such as healthchecks or wait scripts) are needed for that.
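One Compose-native way to close that gap is the long form of `depends_on` combined with a `healthcheck` (supported by the Compose specification and recent Compose file versions). The sketch below uses `pg_isready`, which ships with the Postgres image, to report when PostgreSQL actually accepts connections:

```yaml
services:
  web:
    build: .
    depends_on:
      db:
        # Wait until the healthcheck below passes,
        # not merely until the container process has started
        condition: service_healthy
  db:
    image: postgres:14-alpine
    healthcheck:
      # pg_isready exits 0 once PostgreSQL accepts connections
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 5s
      timeout: 3s
      retries: 5
```

With this in place, `docker-compose up` delays starting `web` until the database is genuinely reachable, not just running.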
Core Docker Compose Commands
These commands are run from the directory containing your docker-compose.yml file.
`docker-compose up`
This is the primary command. It creates (or re-creates, if the configuration changed) and starts all services defined in the file.
- By default, it runs in the foreground and aggregates logs from all services.
- `docker-compose up -d`: The `-d` (detached) flag runs the containers in the background.
`docker-compose down`
This stops and removes all containers and networks created from the Compose file. Named volumes are preserved by default.
- `docker-compose down -v`: The `-v` flag also removes any named volumes defined in the `volumes` section. This is useful for a complete reset.
`docker-compose build`
- Forces a rebuild of the images for services that have a `build` instruction.
`docker-compose logs`
Streams the logs from all running services.
- `docker-compose logs -f web`: Follows the logs for a specific service (`-f` for follow).
`docker-compose ps`
- Lists the running containers that are part of the Compose project.
`docker-compose exec [service_name] [command]`
Executes a command inside a running container.
- Example: `docker-compose exec web /bin/sh` opens a shell inside the `web` service container, which is invaluable for debugging.
Annotated Example: A Python/PostgreSQL Stack
This example demonstrates a complete local development setup for a Python web app that talks to a PostgreSQL database, with live-reloading for code changes and persistent data for the database.
File Structure:
```
/my-project
|- docker-compose.yml
|- /webapp
|  |- Dockerfile
|  |- requirements.txt
|  |- app.py
```

webapp/Dockerfile
```dockerfile
# Use a slim Python base image
FROM python:3.9-slim

# Set the working directory
WORKDIR /app

# Copy dependencies list first for cache optimization
COPY requirements.txt .

# Install dependencies
RUN pip install -r requirements.txt

# Copy the rest of the app
COPY . .

# Expose the port the app runs on
EXPOSE 5000

# Set the default command to run the app
CMD ["python", "app.py"]
```

docker-compose.yml
```yaml
version: "3.8"

services:
  # The Python Web Application service
  web:
    build: ./webapp   # Build from the 'webapp' directory
    ports:
      - "5000:5000"   # Map port 5000 on host to 5000 in container
    volumes:
      # Bind mount the webapp code for live-reloading
      - "./webapp:/app"
    networks:
      - "app-net"     # Attach to the custom network
    environment:
      # The app can now connect to the 'db' hostname
      - "DATABASE_URL=postgresql://user:password@db:5432/mydb"
    depends_on:
      - db            # Wait for the 'db' service to start

  # The PostgreSQL Database service
  db:
    image: postgres:14-alpine  # Use a standard PostgreSQL image
    environment:
      # These are used by the Postgres image to initialize the DB
      - "POSTGRES_USER=user"
      - "POSTGRES_PASSWORD=password"
      - "POSTGRES_DB=mydb"
    volumes:
      # Map the named volume 'db-data' to persist data
      - "db-data:/var/lib/postgresql/data"
    networks:
      - "app-net"     # Attach to the custom network

# Define the top-level named volume
volumes:
  db-data:

# Define the top-level custom network
networks:
  app-net:
```
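The stack above never shows `webapp/app.py` itself. As a hypothetical stand-in (the real application is not part of this guide), a standard-library-only server that listens on port 5000 and reads `DATABASE_URL` from the environment could look like this:

```python
# webapp/app.py -- a hypothetical stand-in for the application the
# Dockerfile runs; it uses only the standard library, so it needs
# no entries in requirements.txt.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Inside the Compose network, the hostname 'db' resolves to the
# database container, so the default below mirrors the compose file.
DATABASE_URL = os.environ.get(
    "DATABASE_URL", "postgresql://user:password@db:5432/mydb"
)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Echo the configured database URL as a plain-text response
        body = f"Connected to: {DATABASE_URL}".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def main() -> None:
    # Bind to 0.0.0.0 so the "5000:5000" port mapping can reach it;
    # binding to 127.0.0.1 would be unreachable from outside the container.
    HTTPServer(("0.0.0.0", 5000), Handler).serve_forever()

# A real app.py would end with:
#   if __name__ == "__main__":
#       main()
```

A real application would, of course, actually open a connection to PostgreSQL with a driver such as psycopg2 (declared in `requirements.txt`) rather than merely echoing the URL; this sketch only illustrates how the service picks up its configuration from Compose.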