Docker Compose Mastery: Building Multi-Container Applications
Docker Compose is the essential tool for defining and managing multi-container Docker applications. It transforms complex container orchestration into simple, declarative YAML files. This comprehensive guide will take you from Docker Compose basics to advanced production deployment strategies, enabling you to build scalable, maintainable containerized applications.
Table of Contents
- Introduction to Docker Compose
- Docker Compose Architecture and Concepts
- Getting Started with Docker Compose
- Compose File Reference and Syntax
- Service Configuration Deep Dive
- Networking in Docker Compose
- Volume Management and Data Persistence
- Environment Variables and Secrets
- Building Multi-Service Applications
- Advanced Compose Patterns
- Production Best Practices
- Debugging and Troubleshooting
Introduction to Docker Compose
Docker Compose simplifies the management of multi-container applications by allowing you to define your entire application stack in a single YAML file. Instead of running multiple `docker run` commands with complex parameters, you define services, networks, and volumes declaratively.
Why Docker Compose?
🚀 Key Benefits:
- Simplified Configuration: Define entire stacks in readable YAML
- Reproducible Environments: Version-controlled infrastructure
- Service Dependencies: Automatic startup order management
- One-Command Deployment: `docker-compose up` starts everything
- Development Efficiency: Hot reloading and local development features
- Isolation: Project-specific networks and namespaces
Docker Compose vs Docker
```bash
# Without Docker Compose - Multiple commands
docker network create myapp-network
docker volume create postgres-data
docker run -d --name postgres --network myapp-network -v postgres-data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=secret postgres:15
docker run -d --name redis --network myapp-network redis:7
docker run -d --name app --network myapp-network -p 3000:3000 -e DATABASE_URL=postgres://postgres:secret@postgres:5432/myapp myapp:latest

# With Docker Compose - One command
docker-compose up -d
```
Docker Compose Architecture and Concepts
Understanding Docker Compose's architecture helps in building efficient multi-container applications.
Core Components
```
┌─────────────────────────────────────────────────────────────┐
│                     docker-compose.yml                      │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐          │
│  │  Service 1  │  │  Service 2  │  │  Service 3  │          │
│  │    (web)    │  │ (database)  │  │   (cache)   │          │
│  └─────────────┘  └─────────────┘  └─────────────┘          │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────────────────────────────────────────┐        │
│  │             Networks (app-network)              │        │
│  └─────────────────────────────────────────────────┘        │
├─────────────────────────────────────────────────────────────┤
│  ┌────────────┐  ┌────────────┐  ┌────────────┐             │
│  │  Volume 1  │  │  Volume 2  │  │  Volume 3  │             │
│  └────────────┘  └────────────┘  └────────────┘             │
└─────────────────────────────────────────────────────────────┘
```
Key Concepts
- Services: Containers that make up your application
- Networks: Communication channels between services
- Volumes: Persistent data storage
- Configs: External configuration files
- Secrets: Sensitive data management
Getting Started with Docker Compose
Installation
Linux:
```bash
# Download Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# Make executable
sudo chmod +x /usr/local/bin/docker-compose

# Verify installation
docker-compose --version
```
macOS/Windows: Docker Compose comes bundled with Docker Desktop.
Your First Compose File
Create a `docker-compose.yml` file:
```yaml
version: '3.9'

services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html
    environment:
      - NGINX_HOST=localhost
      - NGINX_PORT=80

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
```
Basic Commands
```bash
# Start services
docker-compose up

# Start in detached mode
docker-compose up -d

# Stop services
docker-compose down

# View logs
docker-compose logs

# List running services
docker-compose ps

# Execute command in service
docker-compose exec web sh
```
Compose File Reference and Syntax
File Structure
```yaml
version: '3.9'   # Compose file version

services:        # Service definitions
  service1:
    # Service configuration
  service2:
    # Service configuration

networks:        # Custom networks
  network1:
    # Network configuration

volumes:         # Named volumes
  volume1:
    # Volume configuration

configs:         # Configuration files
  config1:
    # Config definition

secrets:         # Sensitive data
  secret1:
    # Secret definition
```
Version Compatibility
| Compose File Version | Docker Engine Version |
|---|---|
| 3.9 | 19.03.0+ |
| 3.8 | 19.03.0+ |
| 3.7 | 18.06.0+ |
| 3.6 | 18.02.0+ |

Note: Docker Compose v2 implements the Compose Specification, which treats the top-level `version` key as obsolete and ignores it. The examples in this guide keep it for compatibility with older tooling.
Service Configuration Deep Dive
Complete Service Configuration
```yaml
version: '3.9'

services:
  webapp:
    # Image configuration
    image: myapp:latest
    # OR build from Dockerfile
    build:
      context: ./app
      dockerfile: Dockerfile.prod
      args:
        - NODE_ENV=production
        - BUILD_VERSION=1.0.0
      cache_from:
        - myapp:latest
      target: production

    # Container name
    container_name: my-webapp

    # Command override
    command: ["npm", "start"]

    # Entry point override
    entrypoint: ["node"]

    # Environment variables
    environment:
      - NODE_ENV=production
      - API_KEY=${API_KEY}
    env_file:
      - .env.production
      - .env.secrets

    # Port mapping
    ports:
      - "3000:3000"             # HOST:CONTAINER
      - "127.0.0.1:9229:9229"   # Bind to localhost only

    # Volume mounts
    volumes:
      - ./app:/usr/src/app                      # Bind mount
      - node_modules:/usr/src/app/node_modules  # Named volume
      - /usr/src/app/tmp                        # Anonymous volume

    # Network configuration
    networks:
      - frontend
      - backend

    # Dependencies
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started

    # Resource limits
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 256M

    # Health check
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

    # Restart policy
    restart: unless-stopped

    # Logging configuration
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

    # Security options
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    read_only: true
    user: "1000:1000"

    # Labels
    labels:
      com.example.description: "Web application"
      com.example.version: "1.0.0"
```
Build Configuration
```yaml
services:
  custom-app:
    build:
      context: .
      dockerfile: docker/Dockerfile
      args:
        - BUILDKIT_INLINE_CACHE=1
        - NODE_VERSION=18
      cache_from:
        - registry.example.com/myapp:buildcache
      target: production
      labels:
        - "com.example.version=1.0.0"
      shm_size: '2gb'
    image: registry.example.com/myapp:latest
```
Networking in Docker Compose
Docker Compose automatically creates a default network for your application, but you can define custom networks for better service isolation and communication control.
Default Networking
```yaml
version: '3.9'

services:
  web:
    image: nginx
    # Can access db at hostname "db"
  db:
    image: postgres
    # Can access web at hostname "web"
```
Custom Networks
```yaml
version: '3.9'

services:
  frontend:
    image: react-app
    networks:
      - frontend-net
      - public-net

  backend:
    image: node-api
    networks:
      - backend-net
      - frontend-net

  database:
    image: postgres
    networks:
      - backend-net

  nginx:
    image: nginx
    networks:
      - public-net
      - frontend-net
    ports:
      - "80:80"

networks:
  frontend-net:
    driver: bridge
  backend-net:
    driver: bridge
    internal: true   # No external access
  public-net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1
```
Service Discovery
```yaml
version: '3.9'

services:
  api:
    image: node-api
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache

  db:
    image: postgres
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=myapp

  cache:
    image: redis
```
Network Aliases
```yaml
version: '3.9'

services:
  web:
    image: nginx
    networks:
      app-net:
        aliases:
          - webserver
          - nginx-server

  api:
    image: node-api
    networks:
      - app-net   # must join the same network to resolve the aliases
    environment:
      # Can use "web", "webserver", or "nginx-server"
      - PROXY_URL=http://webserver

networks:
  app-net:
    driver: bridge
```
Volume Management and Data Persistence
Proper volume management ensures data persistence and sharing between containers.
Volume Types
```yaml
version: '3.9'

services:
  app:
    image: myapp
    volumes:
      # Named volume (managed by Docker)
      - app-data:/var/lib/app/data
      # Bind mount (host directory)
      - ./config:/etc/app/config
      # Anonymous volume
      - /tmp/app
      # Read-only mount
      - ./static:/usr/share/nginx/html:ro
      # Bind mount with custom options
      - type: bind
        source: ./logs
        target: /var/log/app
        bind:
          propagation: rslave

volumes:
  app-data:
    driver: local
    driver_opts:
      type: none
      device: /data/app
      o: bind
```
Database Persistence
```yaml
version: '3.9'

services:
  postgres:
    image: postgres:15
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./init-scripts:/docker-entrypoint-initdb.d
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=dbuser
      - POSTGRES_PASSWORD=dbpass

  mysql:
    image: mysql:8
    volumes:
      - mysql-data:/var/lib/mysql
      - ./mysql-conf:/etc/mysql/conf.d
    environment:
      - MYSQL_ROOT_PASSWORD=rootpass
      - MYSQL_DATABASE=myapp

  mongodb:
    image: mongo:6
    volumes:
      - mongo-data:/data/db
      - mongo-config:/data/configdb
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=adminpass

volumes:
  postgres-data:
  mysql-data:
  mongo-data:
  mongo-config:
```
Backup Strategies
```yaml
version: '3.9'

services:
  database:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data

  backup:
    image: postgres:15
    depends_on:
      - database
    volumes:
      - ./backups:/backups
      - db-data:/var/lib/postgresql/data:ro
    command: >
      bash -c "while true; do
        PGPASSWORD=$$POSTGRES_PASSWORD pg_dump -h database -U $$POSTGRES_USER $$POSTGRES_DB > /backups/backup_$$(date +%Y%m%d_%H%M%S).sql;
        find /backups -name 'backup_*.sql' -mtime +7 -delete;
        sleep 86400;
      done"
    environment:
      - POSTGRES_USER=dbuser
      - POSTGRES_PASSWORD=dbpass
      - POSTGRES_DB=myapp

volumes:
  db-data:
```
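The retention step in the loop relies on `find -mtime +7` matching files older than seven days. That behavior can be sanity-checked locally without a database (this sketch assumes GNU `touch` for the `-d` flag):

```shell
# Create one stale and one fresh "backup", then prune anything older than 7 days
dir=$(mktemp -d)
touch -d '10 days ago' "$dir/backup_20240101_000000.sql"
touch "$dir/backup_now.sql"
find "$dir" -name 'backup_*.sql' -mtime +7 -delete
ls "$dir"   # only the fresh file should remain
```

Also note the doubled `$$` in the compose `command`: a single `$` would be interpolated by Compose itself, while `$$` passes a literal `$` through to the container's shell.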
Environment Variables and Secrets
Environment Variable Management
```yaml
version: '3.9'

services:
  app:
    image: myapp
    environment:
      # Direct assignment
      - NODE_ENV=production
      # From host environment
      - API_KEY
      # With default value
      - PORT=${PORT:-3000}
      # Complex interpolation
      - DATABASE_URL=postgresql://${DB_USER}:${DB_PASS}@${DB_HOST:-localhost}:${DB_PORT:-5432}/${DB_NAME}
    # From env files
    env_file:
      - .env
      - .env.production
```
.env file example:
```
# Application
NODE_ENV=production
PORT=3000

# Database
DB_HOST=postgres
DB_PORT=5432
DB_USER=appuser
DB_PASS=securepassword
DB_NAME=myapp

# Redis
REDIS_HOST=redis
REDIS_PORT=6379

# API Keys
API_KEY=your-api-key-here
SECRET_KEY=your-secret-key-here
```
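Compose's `${VAR:-default}` interpolation follows POSIX shell parameter expansion, so a default can be sanity-checked in any shell before it goes into a compose file:

```shell
# ${VAR:-default} falls back when VAR is unset OR empty;
# ${VAR-default} falls back only when VAR is unset
unset PORT
echo "PORT=${PORT:-3000}"   # prints PORT=3000 (unset: default applies)
PORT=""
echo "PORT=${PORT:-3000}"   # prints PORT=3000 (empty still triggers :-)
PORT=8080
echo "PORT=${PORT:-3000}"   # prints PORT=8080 (set value wins)
```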
Docker Secrets
```yaml
version: '3.9'

services:
  app:
    image: myapp
    secrets:
      - db_password
      - api_key
    environment:
      - DB_PASSWORD_FILE=/run/secrets/db_password
      - API_KEY_FILE=/run/secrets/api_key
    command: >
      sh -c "
        export DB_PASSWORD=$$(cat $$DB_PASSWORD_FILE) &&
        export API_KEY=$$(cat $$API_KEY_FILE) &&
        npm start
      "

  database:
    image: postgres
    secrets:
      - db_password
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_key:
    file: ./secrets/api_key.txt
```
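The `*_FILE` convention shown above is usually handled by a small entrypoint helper rather than inline in `command`. A minimal sketch (the `file_env` name is illustrative, not part of any official image):

```shell
# If VAR_FILE points at a readable file, export VAR with the file's contents,
# so the application only ever sees a plain environment variable.
file_env() {
  var="$1"
  file_var="${var}_FILE"
  eval "file_path=\${$file_var:-}"
  if [ -n "$file_path" ] && [ -f "$file_path" ]; then
    eval "export $var=\"\$(cat \"\$file_path\")\""
  fi
}
```

In an entrypoint script you would call `file_env DB_PASSWORD` before starting the app, with `DB_PASSWORD_FILE=/run/secrets/db_password` set by Compose.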
Multiple Environment Support
```
# Project structure
.
├── docker-compose.yml        # Base configuration
├── docker-compose.dev.yml    # Development overrides
├── docker-compose.prod.yml   # Production overrides
├── .env                      # Default environment
├── .env.development          # Development environment
└── .env.production           # Production environment
```
Base docker-compose.yml:
```yaml
version: '3.9'

services:
  app:
    build: .
    environment:
      - NODE_ENV=${NODE_ENV}
    networks:
      - app-network

networks:
  app-network:
```
docker-compose.dev.yml:
```yaml
version: '3.9'

services:
  app:
    build:
      target: development
    volumes:
      - ./src:/app/src
    ports:
      - "3000:3000"
      - "9229:9229"
    environment:
      - DEBUG=app:*
    command: npm run dev
```
docker-compose.prod.yml:
```yaml
version: '3.9'

services:
  app:
    build:
      target: production
    ports:
      - "80:3000"
    restart: always
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
```
Usage:
```bash
# Development
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

# Production
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
```
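Instead of repeating `-f` flags on every command, the `COMPOSE_FILE` environment variable (entries separated by `:` on Linux/macOS, `;` on Windows) pins the file stack for a whole shell session:

```shell
# With COMPOSE_FILE set, a plain `docker-compose up` behaves like
# `docker-compose -f docker-compose.yml -f docker-compose.prod.yml up`
export COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml
echo "$COMPOSE_FILE"
```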
Building Multi-Service Applications
Full-Stack Application Example
Let's build a complete microservices application with frontend, backend, database, cache, and message queue:
```yaml
version: '3.9'

services:
  # Frontend React Application
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
      target: ${BUILD_TARGET:-production}
    ports:
      - "3000:3000"
    environment:
      - REACT_APP_API_URL=http://localhost:8080
    depends_on:
      - backend
    networks:
      - frontend-net
    volumes:
      - ./frontend/src:/app/src:ro
    restart: unless-stopped

  # Backend API Service
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgresql://postgres:password@postgres:5432/myapp
      - REDIS_URL=redis://redis:6379
      - RABBITMQ_URL=amqp://guest:guest@rabbitmq:5672
      - JWT_SECRET=${JWT_SECRET}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
      rabbitmq:
        condition: service_healthy
    networks:
      - frontend-net
      - backend-net
    volumes:
      - ./backend/uploads:/app/uploads
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # Worker Service
  worker:
    build:
      context: ./worker
      dockerfile: Dockerfile
    environment:
      - DATABASE_URL=postgresql://postgres:password@postgres:5432/myapp
      - REDIS_URL=redis://redis:6379
      - RABBITMQ_URL=amqp://guest:guest@rabbitmq:5672
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
      rabbitmq:
        condition: service_healthy
    networks:
      - backend-net
    restart: unless-stopped
    deploy:
      replicas: 2

  # PostgreSQL Database
  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=myapp
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - backend-net
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Redis Cache
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis-data:/data
    networks:
      - backend-net
    restart: unless-stopped

  # RabbitMQ Message Queue
  rabbitmq:
    image: rabbitmq:3-management-alpine
    environment:
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
    ports:
      - "15672:15672"   # Management UI
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq
    networks:
      - backend-net
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3

  # Nginx Reverse Proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
      - nginx-cache:/var/cache/nginx
    depends_on:
      - frontend
      - backend
    networks:
      - frontend-net
    restart: unless-stopped

  # Monitoring with Prometheus
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus
    ports:
      - "9090:9090"
    networks:
      - backend-net
    restart: unless-stopped

  # Grafana Dashboard
  grafana:
    image: grafana/grafana:latest
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD:-admin}
    volumes:
      - grafana-data:/var/lib/grafana
      - ./monitoring/grafana/dashboards:/etc/grafana/provisioning/dashboards:ro
      - ./monitoring/grafana/datasources:/etc/grafana/provisioning/datasources:ro
    ports:
      - "3001:3000"
    depends_on:
      - prometheus
    networks:
      - backend-net
    restart: unless-stopped

networks:
  frontend-net:
    driver: bridge
  backend-net:
    driver: bridge
    internal: true

volumes:
  postgres-data:
  redis-data:
  rabbitmq-data:
  nginx-cache:
  prometheus-data:
  grafana-data:
```
Microservices Communication Pattern
```yaml
version: '3.9'

services:
  # API Gateway
  api-gateway:
    build: ./api-gateway
    ports:
      - "8000:8000"
    environment:
      - USER_SERVICE_URL=http://user-service:3001
      - PRODUCT_SERVICE_URL=http://product-service:3002
      - ORDER_SERVICE_URL=http://order-service:3003
    networks:
      - microservices
    depends_on:
      - user-service
      - product-service
      - order-service

  # User Service
  user-service:
    build: ./services/user
    environment:
      - DB_HOST=user-db
      - REDIS_HOST=user-cache
    networks:
      - microservices
      - user-net
    depends_on:
      - user-db
      - user-cache

  user-db:
    image: postgres:15
    environment:
      - POSTGRES_DB=users
      - POSTGRES_PASSWORD=password
    volumes:
      - user-db-data:/var/lib/postgresql/data
    networks:
      - user-net

  user-cache:
    image: redis:7-alpine
    networks:
      - user-net

  # Product Service
  product-service:
    build: ./services/product
    environment:
      - MONGO_URL=mongodb://product-db:27017/products
    networks:
      - microservices
      - product-net
    depends_on:
      - product-db

  product-db:
    image: mongo:6
    volumes:
      - product-db-data:/data/db
    networks:
      - product-net

  # Order Service
  order-service:
    build: ./services/order
    environment:
      - DB_HOST=order-db
      - RABBITMQ_URL=amqp://rabbitmq:5672
    networks:
      - microservices
      - order-net
    depends_on:
      - order-db
      - rabbitmq

  order-db:
    image: postgres:15
    environment:
      - POSTGRES_DB=orders
      - POSTGRES_PASSWORD=password
    volumes:
      - order-db-data:/var/lib/postgresql/data
    networks:
      - order-net

  # Shared Message Queue
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"
    networks:
      - microservices
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq

networks:
  microservices:
    driver: bridge
  user-net:
    internal: true
  product-net:
    internal: true
  order-net:
    internal: true

volumes:
  user-db-data:
  product-db-data:
  order-db-data:
  rabbitmq-data:
```
Advanced Compose Patterns
Health Checks and Dependencies
```yaml
version: '3.9'

services:
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
      migration:
        condition: service_completed_successfully
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  migration:
    build: .
    command: npm run migrate
    depends_on:
      db:
        condition: service_healthy
```
Override Patterns
```yaml
# docker-compose.yml (base)
version: '3.9'
services:
  app:
    image: myapp:${TAG:-latest}
    environment:
      - LOG_LEVEL=${LOG_LEVEL:-info}
```

```yaml
# docker-compose.override.yml (auto-loaded)
version: '3.9'
services:
  app:
    build: .
    volumes:
      - .:/app
    environment:
      - DEBUG=true
      - LOG_LEVEL=debug
```

```yaml
# docker-compose.prod.yml
version: '3.9'
services:
  app:
    deploy:
      replicas: 3
    environment:
      - LOG_LEVEL=error
    restart: always
```
Extension Fields (DRY Principle)
```yaml
version: '3.9'

x-common-variables: &common-variables
  ELASTICSEARCH_HOST: elasticsearch:9200
  REDIS_HOST: redis:6379
  LOG_LEVEL: ${LOG_LEVEL:-info}

x-resource-limits: &resource-limits
  deploy:
    resources:
      limits:
        cpus: '1.0'
        memory: 512M
      reservations:
        cpus: '0.5'
        memory: 256M

services:
  api-service:
    image: api:latest
    environment:
      <<: *common-variables
      SERVICE_NAME: api
    <<: *resource-limits

  worker-service:
    image: worker:latest
    environment:
      <<: *common-variables
      SERVICE_NAME: worker
    <<: *resource-limits

  analytics-service:
    image: analytics:latest
    environment:
      <<: *common-variables
      SERVICE_NAME: analytics
    <<: *resource-limits
```
Build-time Arguments
```yaml
version: '3.9'

services:
  app:
    build:
      context: .
      args:
        - NODE_VERSION=${NODE_VERSION:-18}
        - NPM_TOKEN=${NPM_TOKEN}
        # Compose does not run command substitution inside the YAML;
        # set these in the invoking shell or CI and let ${...}
        # interpolation pass them through
        - BUILD_DATE=${BUILD_DATE}
        - VCS_REF=${VCS_REF}
      cache_from:
        - node:${NODE_VERSION:-18}
        - myapp:latest
    image: myapp:${VERSION:-latest}
```
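Since Compose interpolates `${...}` variables but never executes shell command substitution, build metadata such as a timestamp or git revision has to be computed in the shell (or CI job) that invokes `docker-compose`:

```shell
# Compute build metadata before running docker-compose build; the compose
# file then picks them up via ${BUILD_DATE} and ${VCS_REF}
export BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ')
# export VCS_REF=$(git rev-parse --short HEAD)   # inside a git checkout
echo "$BUILD_DATE"
```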
Production Best Practices
Security Hardening
```yaml
version: '3.9'

services:
  secure-app:
    build: .
    # Drop all capabilities, then add back only what is needed
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    # Read-only root filesystem
    read_only: true
    # Run as non-root user
    user: "1000:1000"
    # Security options (avoid seccomp:unconfined here, which would
    # disable seccomp filtering; use the default or a custom profile)
    security_opt:
      - no-new-privileges:true
    # Temporary filesystems for writable directories
    tmpfs:
      - /tmp
      - /var/run
    # Mount secrets as read-only
    secrets:
      - source: app_key
        target: /run/secrets/app_key
        mode: 0400

secrets:
  app_key:
    file: ./secrets/app_key.txt
```
Logging Configuration
```yaml
version: '3.9'

x-logging: &default-logging
  driver: "json-file"
  options:
    max-size: "10m"
    max-file: "3"
    labels: "service"

services:
  app:
    image: myapp
    logging: *default-logging
    labels:
      service: "application"

  # Centralized logging with ELK
  elasticsearch:
    image: elasticsearch:8.11.0
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data

  logstash:
    image: logstash:8.11.0
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    depends_on:
      - elasticsearch

  kibana:
    image: kibana:8.11.0
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch

volumes:
  elasticsearch-data:
```
Deployment Strategies
```yaml
version: '3.9'

services:
  # Blue-Green Deployment Pattern
  app-blue:
    image: myapp:v1.0.0
    networks:
      - app-net
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app-blue.rule=Host(`app.example.com`) && Headers(`X-Version`, `blue`)"

  app-green:
    image: myapp:v2.0.0
    networks:
      - app-net
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app-green.rule=Host(`app.example.com`) && Headers(`X-Version`, `green`)"

  # Rolling Update Pattern (update_config/rollback_config take effect
  # under swarm mode, i.e. with `docker stack deploy`)
  app:
    image: myapp:${VERSION}
    deploy:
      replicas: 4
      update_config:
        parallelism: 1
        delay: 30s
        failure_action: rollback
        monitor: 60s
        max_failure_ratio: 0.3
      rollback_config:
        parallelism: 1
        delay: 30s

networks:
  app-net:
```
Monitoring Stack
```yaml
version: '3.9'

services:
  # Application with metrics endpoint
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - METRICS_PORT=9090

  # Prometheus for metrics collection
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"

  # Grafana for visualization
  grafana:
    image: grafana/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
      - GF_USERS_ALLOW_SIGN_UP=false
    volumes:
      - grafana-data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning:ro
    ports:
      - "3001:3000"   # host port 3001; 3000 is already published by the app
    depends_on:
      - prometheus

  # Node Exporter for host metrics
  node-exporter:
    image: prom/node-exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'

  # cAdvisor for container metrics
  cadvisor:
    image: gcr.io/cadvisor/cadvisor
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    devices:
      - /dev/kmsg
    privileged: true

volumes:
  prometheus-data:
  grafana-data:
```
Debugging and Troubleshooting
Common Issues and Solutions
1. Service Discovery Not Working
```bash
# Check if services are on the same network
docker-compose exec app ping database

# Inspect network configuration
docker network inspect projectname_default

# Use fully qualified service names
docker-compose exec app nslookup database.projectname_default
```
2. Permission Issues with Volumes
```yaml
services:
  app:
    build: .
    # Fix: run as the host user so bind-mounted files keep sane ownership
    user: "${UID:-1000}:${GID:-1000}"
    volumes:
      - ./data:/app/data
```
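One gotcha with this fix: bash treats `UID` as a read-only shell variable and does not export it, so `${UID}` is often empty from Compose's point of view. A common workaround is to record the IDs in the project's `.env` file, which Compose reads automatically (the temp file below stands in for a real `.env`):

```shell
# Write the host user/group IDs where Compose can interpolate them
envfile=$(mktemp)   # in a real project this would be the project's .env
printf 'UID=%s\nGID=%s\n' "$(id -u)" "$(id -g)" > "$envfile"
cat "$envfile"
```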
3. Container Exits Immediately
```bash
# Check logs
docker-compose logs app

# Run interactively to debug
docker-compose run --rm app sh

# Keep container running for debugging
docker-compose run --rm --entrypoint sh app
```
Debugging Commands
```bash
# View detailed service information
docker-compose ps
docker-compose top

# Follow logs for specific services
docker-compose logs -f app db

# Execute commands in running containers
docker-compose exec app sh
docker-compose exec db psql -U postgres

# View resource usage
docker-compose stats

# Validate compose file
docker-compose config

# View actual compose configuration
docker-compose config --resolve-image-digests

# Force recreate containers
docker-compose up -d --force-recreate

# Remove everything including volumes
docker-compose down -v

# Scale services
docker-compose up -d --scale worker=3
```
Performance Optimization
```yaml
version: '3.9'

services:
  optimized-app:
    build:
      context: .
      cache_from:
        - myapp:latest
        - myapp:cache
    image: myapp:latest
    # Optimize build context: use .dockerignore to exclude unnecessary files

    # Limit resources
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 256M

    # Use tmpfs for temporary data
    tmpfs:
      - /tmp:size=100M

    # Optimize logging
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "1"
```
Conclusion
Docker Compose transforms the complexity of multi-container applications into manageable, version-controlled configurations. By mastering the concepts and patterns covered in this guide, you can:
- ✅ Build complex multi-service applications with ease
- ✅ Manage development, testing, and production environments
- ✅ Implement microservices architectures effectively
- ✅ Handle networking, volumes, and secrets securely
- ✅ Deploy applications with confidence
- ✅ Debug and troubleshoot containerized applications
Best Practices Summary
- Keep it Simple: Start with basic configurations and add complexity as needed
- Use Version Control: Track all compose files and environment configurations
- Separate Concerns: Use multiple compose files for different environments
- Document Everything: Comment your compose files and maintain README files
- Security First: Always run containers as non-root users and limit capabilities
- Monitor and Log: Implement proper logging and monitoring from the start
- Test Locally: Validate configurations before deploying to production
Next Steps
- Explore Docker Swarm for native Docker orchestration
- Learn Kubernetes for enterprise-scale container orchestration
- Implement CI/CD pipelines with Docker Compose
- Study container security best practices
- Practice with real-world projects and scenarios
Remember, Docker Compose is your gateway to efficient container orchestration. Master it, and you'll have a solid foundation for modern application deployment.
Happy orchestrating! 🐳🎼