Introduction to Docker
Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization. Containers package an application with all its dependencies, ensuring it runs consistently across different environments.
🐳 Why Docker?
"It works on my machine" - Docker eliminates this problem! Used by Netflix, Spotify, PayPal, and millions of developers worldwide to ship software faster and more reliably.
- Image: A read-only template with instructions for creating a container
- Container: A runnable instance of an image
- Dockerfile: A text file with instructions to build an image
- Registry: A repository for storing and distributing Docker images
| Containers | Virtual Machines |
|---|---|
| ✅ Lightweight (MBs) | ❌ Heavy (GBs) |
| ✅ Start in seconds | ❌ Start in minutes |
| ✅ Share OS kernel | ❌ Full OS per VM |
Installation & Setup
Install Docker Desktop
Download Docker Desktop for your operating system from docker.com
Windows Installation
Linux Installation (Ubuntu)
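On Ubuntu, a quick way to install is Docker's official convenience script (a sketch assuming sudo access; for production servers, prefer the apt repository steps from the official docs):

```shell
# Download and run Docker's official install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Optional: allow running docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER
```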
macOS Installation
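On macOS, Docker Desktop can be installed from the docker.com download, or via Homebrew if you already use it:

```shell
# Install Docker Desktop via Homebrew cask
brew install --cask docker
```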
Verify Installation
Check if Docker is installed correctly
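A quick verification sequence:

```shell
# Show the installed client version
docker --version

# Show client and server (daemon) details
docker version

# Run a throwaway test container to confirm the daemon works end to end
docker run hello-world
```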
✅ Docker Installed!
You're ready to start containerizing your applications!
Docker Images
Docker images are read-only templates that contain the application code, runtime, libraries, and dependencies. They serve as blueprints for creating containers.
Pulling Images
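A sketch of common pull commands (the image names are just examples):

```shell
docker pull nginx                # 'latest' tag by default
docker pull nginx:1.25           # specific version
docker pull python:3.11-slim     # slim variant
docker pull docker.io/library/redis:7-alpine   # fully qualified name
```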
Managing Images
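Typical image-management commands (image names illustrative):

```shell
docker images              # list local images
docker image inspect nginx # full metadata: layers, env, exposed ports
docker history nginx       # layer-by-layer size breakdown
docker rmi nginx:1.25      # remove an image
docker image prune         # remove dangling images
docker image prune -a      # remove all unused images
```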
💡 Image Tags
image:latest - Default tag (not recommended for production)
image:1.0.0 - Specific version
image:alpine - Lightweight Alpine-based variant
Docker Containers
Containers are running instances of Docker images. They are isolated environments that contain everything needed to run your application.
Running Containers
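The most common `docker run` variations (image and variable names are examples):

```shell
docker run nginx                      # run in the foreground
docker run -d nginx                   # detached (background)
docker run -d -p 8080:80 nginx        # map host port 8080 to container port 80
docker run -d --name web nginx        # give the container a name
docker run -d -e APP_ENV=prod nginx   # set an environment variable
docker run --rm -it ubuntu bash       # interactive shell, auto-removed on exit
```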
Managing Containers
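Lifecycle commands, assuming a container named `web`:

```shell
docker ps          # list running containers
docker ps -a       # list all containers, including stopped
docker stop web    # graceful stop (SIGTERM, then SIGKILL after a timeout)
docker start web   # start a stopped container
docker restart web
docker rm web      # remove a stopped container
docker rm -f web   # force-remove a running container
docker stats       # live CPU/memory usage of running containers
```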
Container Interaction
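Ways to interact with a running container (again assuming one named `web`):

```shell
docker logs web        # view stdout/stderr
docker logs -f web     # follow logs live
docker exec -it web sh # open a shell inside (use bash if the image has it)
docker cp web:/etc/nginx/nginx.conf ./nginx.conf   # copy a file out
docker inspect web     # detailed JSON metadata
```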
Dockerfile
A Dockerfile is a text file containing instructions to build a Docker image. Each instruction creates a layer in the image.
Basic Dockerfile
```dockerfile
# Base image
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Copy requirements first (for caching)
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose port
EXPOSE 8000

# Set environment variables
ENV PYTHONUNBUFFERED=1

# Run the application
CMD ["python", "app.py"]
```
Node.js Dockerfile
```dockerfile
# Use Node.js Alpine image
FROM node:20-alpine

# Create app directory
WORKDIR /usr/src/app

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm ci --only=production

# Copy app source
COPY . .

# Create non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
USER nextjs

# Expose port
EXPOSE 3000

# Start command
CMD ["node", "server.js"]
```
Building Images
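A sketch of typical build commands (names like `myapp` and `Dockerfile.prod` are placeholders):

```shell
docker build -t myapp .                  # build from Dockerfile in current dir
docker build -t myapp:1.0.0 .            # build with a version tag
docker build -f Dockerfile.prod -t myapp:prod .   # use an alternate Dockerfile
docker build --no-cache -t myapp .       # ignore the layer cache
docker tag myapp:1.0.0 myapp:latest      # add another tag to an existing image
```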
💡 Dockerfile Instructions
FROM - Base image
WORKDIR - Set working directory
COPY - Copy files from host to image
RUN - Execute commands during build
ENV - Set environment variables
EXPOSE - Document exposed ports
CMD - Default command to run
ENTRYPOINT - Configure container executable
Docker Compose
Docker Compose is a tool for defining and running multi-container applications. Use a YAML file to configure your application's services.
Basic docker-compose.yml
```yaml
version: '3.8'

services:
  web:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DEBUG=true
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
    depends_on:
      - db
      - redis
    volumes:
      - ./app:/app
    networks:
      - app-network

  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    networks:
      - app-network

volumes:
  postgres_data:

networks:
  app-network:
    driver: bridge
```
Docker Compose Commands
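The everyday Compose workflow (service names here match the example file above; substitute your own):

```shell
docker compose up             # start all services in the foreground
docker compose up -d          # start in the background
docker compose up --build     # rebuild images before starting
docker compose down           # stop and remove containers and networks
docker compose down -v        # also remove named volumes
docker compose ps             # show service status
docker compose logs -f web    # follow one service's logs
docker compose exec web bash  # shell into a running service
```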
Full Stack Example
```yaml
version: '3.8'

services:
  # React Frontend
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - ./frontend:/app
      - /app/node_modules
    environment:
      - REACT_APP_API_URL=http://localhost:8000

  # Django Backend
  backend:
    build:
      context: ./backend
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
    environment:
      - DEBUG=1
      - DATABASE_URL=postgres://postgres:postgres@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    command: python manage.py runserver 0.0.0.0:8000

  # PostgreSQL Database
  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres

  # Redis Cache
  redis:
    image: redis:7-alpine

  # Nginx Reverse Proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - frontend
      - backend

volumes:
  postgres_data:
```
Docker Volumes
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. They are managed by Docker and exist outside the container lifecycle.
- Named volumes: Docker-managed
- Bind mounts: Host directory
- tmpfs mounts: Memory only
Working with Volumes
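A sketch of the core volume commands (`mydata` is a placeholder name):

```shell
docker volume create mydata     # create a named volume
docker volume ls                # list volumes
docker volume inspect mydata    # shows the mountpoint on the host

# Named volume: Docker manages the storage
docker run -d -v mydata:/var/lib/postgresql/data postgres:15-alpine

# Bind mount: a host directory is mapped into the container
docker run -d -v $(pwd)/src:/app/src nginx

docker volume rm mydata         # remove a volume (must be unused)
docker volume prune             # remove all unused volumes
```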
⚠️ Data Persistence
Container data is ephemeral! Always use volumes for databases and important data. Without volumes, data is lost when the container is removed.
Docker Networks
Docker networks enable containers to communicate with each other and the outside world. Containers on the same network can reach each other by container name.
Network Commands
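The common network commands, sketched with placeholder names (`app-net`, `myapi`):

```shell
docker network create app-net   # creates a bridge network by default
docker network ls
docker network inspect app-net

# Containers on the same network reach each other by name:
docker run -d --name db --network app-net postgres:15-alpine
docker run -d --name api --network app-net myapi   # 'myapi' is a hypothetical image
# 'api' can now connect to the database at hostname 'db'

docker network connect app-net some-container   # attach an existing container
docker network rm app-net
```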
💡 Network Drivers
bridge - Default, isolated network on single host
host - Container uses host's network directly
overlay - Multi-host networking (Swarm)
none - No networking
Docker Hub & Registry
Docker Hub is a cloud-based registry service for sharing Docker images. You can also use private registries for proprietary images.
Push & Pull Images
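The basic publish workflow, where `myuser` stands in for your Docker Hub username:

```shell
docker login                               # authenticate to Docker Hub
docker tag myapp:1.0.0 myuser/myapp:1.0.0  # tag with your namespace
docker push myuser/myapp:1.0.0             # upload the image
docker pull myuser/myapp:1.0.0             # download it anywhere
```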
Private Registry
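A minimal sketch using the official `registry` image to run a local registry on port 5000:

```shell
# Run a local registry container
docker run -d -p 5000:5000 --name registry registry:2

# Tag and push an image to it
docker tag myapp:1.0.0 localhost:5000/myapp:1.0.0
docker push localhost:5000/myapp:1.0.0

# Pull it back
docker pull localhost:5000/myapp:1.0.0
```

For anything beyond local experiments, a real private registry also needs TLS and authentication configured.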
Essential Commands
Command Cheat Sheet
```shell
# ═══════════════════════════════════════════════════════════
# IMAGES
# ═══════════════════════════════════════════════════════════
docker images                     # List images
docker pull <image>               # Pull image
docker build -t <name> .          # Build image
docker rmi <image>                # Remove image
docker image prune                # Remove unused images

# ═══════════════════════════════════════════════════════════
# CONTAINERS
# ═══════════════════════════════════════════════════════════
docker run <image>                # Run container
docker run -d <image>             # Run detached
docker run -p 8080:80 <image>     # Port mapping
docker run --name myapp <image>   # Named container
docker run -e VAR=val <image>     # Environment var
docker run -v /host:/container    # Volume mount
docker ps                         # List running
docker ps -a                      # List all
docker stop <container>           # Stop container
docker start <container>          # Start container
docker restart <container>        # Restart container
docker rm <container>             # Remove container
docker rm -f <container>          # Force remove
docker logs <container>           # View logs
docker logs -f <container>        # Follow logs
docker exec -it <container> bash  # Shell access
docker inspect <container>        # Container info
docker stats                      # Resource usage

# ═══════════════════════════════════════════════════════════
# DOCKER COMPOSE
# ═══════════════════════════════════════════════════════════
docker compose up                 # Start services
docker compose up -d              # Start detached
docker compose up --build         # Build and start
docker compose down               # Stop services
docker compose down -v            # Stop and remove volumes
docker compose logs               # View logs
docker compose ps                 # List services
docker compose exec <svc> bash    # Shell into service

# ═══════════════════════════════════════════════════════════
# CLEANUP
# ═══════════════════════════════════════════════════════════
docker system prune               # Clean all unused
docker system prune -a            # Clean everything
docker container prune            # Remove stopped containers
docker image prune -a             # Remove all unused images
docker volume prune               # Remove unused volumes
docker network prune              # Remove unused networks
```
Multi-stage Builds
Multi-stage builds allow you to use multiple FROM statements in a Dockerfile. This helps create smaller, production-ready images by leaving build dependencies behind.
React App Multi-stage Build
```dockerfile
# ═══════════════════════════════════════════════════════════
# Stage 1: Build
# ═══════════════════════════════════════════════════════════
FROM node:20-alpine AS builder

WORKDIR /app

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm ci

# Copy source code
COPY . .

# Build the app
RUN npm run build

# ═══════════════════════════════════════════════════════════
# Stage 2: Production
# ═══════════════════════════════════════════════════════════
FROM nginx:alpine AS production

# Copy built assets from builder stage
COPY --from=builder /app/build /usr/share/nginx/html

# Copy nginx config
COPY nginx.conf /etc/nginx/nginx.conf

# Expose port
EXPOSE 80

# Start nginx
CMD ["nginx", "-g", "daemon off;"]
```
Go App Multi-stage Build
```dockerfile
# Build stage
FROM golang:1.21-alpine AS builder

WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .

# Build static binary
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/main .

# Final stage - scratch is an empty image!
FROM scratch

# Copy binary from builder
COPY --from=builder /app/main /main

# Result: ~10MB image instead of ~1GB!
ENTRYPOINT ["/main"]
```
✅ Benefits of Multi-stage Builds
• Smaller image sizes (often 10x smaller!)
• No build tools in production image
• Reduced attack surface
• Faster deployments
Best Practices
Dockerfile Best Practices
```dockerfile
# ✅ Use specific version tags, not 'latest'
FROM python:3.11.6-slim-bookworm

# ✅ Set labels for metadata
LABEL maintainer="[email protected]"
LABEL version="1.0"

# ✅ Set working directory early
WORKDIR /app

# ✅ Copy dependency files first (layer caching)
COPY requirements.txt .

# ✅ Combine RUN commands to reduce layers
# ✅ Clean up in same layer
RUN pip install --no-cache-dir -r requirements.txt \
    && rm -rf /root/.cache/pip

# ✅ Copy source code after dependencies
COPY . .

# ✅ Create non-root user for security
RUN addgroup --system --gid 1001 appgroup \
    && adduser --system --uid 1001 --gid 1001 appuser

# ✅ Change ownership of app files
RUN chown -R appuser:appgroup /app

# ✅ Switch to non-root user
USER appuser

# ✅ Document exposed port
EXPOSE 8000

# ✅ Use exec form for CMD
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```
.dockerignore File
```
# Git
.git
.gitignore

# Docker
Dockerfile*
docker-compose*
.docker

# Dependencies
node_modules
__pycache__
*.pyc
.venv
venv

# IDE
.vscode
.idea
*.swp
*.swo

# Build
build
dist
*.egg-info

# Logs
*.log
logs

# Environment
.env
.env.local
*.env

# Tests
tests
test
coverage
.coverage

# Documentation
docs
*.md
!README.md
```
Security Best Practices
🔒 Security Checklist
✅ Use official base images
✅ Run as non-root user
✅ Use specific version tags
✅ Scan images for vulnerabilities: docker scout cves myimage
✅ Don't store secrets in images
✅ Use multi-stage builds
✅ Keep images minimal (Alpine-based)
✅ Update base images regularly
Performance Tips
⚡ Optimization Tips
• Order Dockerfile commands from least to most frequently changing
• Use .dockerignore to reduce build context
• Combine RUN commands with && to reduce layers
• Use --no-cache-dir with pip
• Clean up in the same RUN command
• Use Alpine images when possible (~5MB vs ~100MB+)
🐳 Congratulations!
You've mastered Docker from basics to advanced! You're now ready to containerize any application and deploy it anywhere. Keep building!