Om Pandey

Docker Containers

Build, Ship, and Run Applications Anywhere

Master Docker - the industry-standard containerization platform. Learn to package applications with all their dependencies into standardized units for development, shipment, and deployment.

Containerization · Microservices · Cloud Ready · CI/CD Integration

What You'll Learn

1. Introduction to Docker

Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization. Containers package an application with all its dependencies, ensuring it runs consistently across different environments.

🐳 Why Docker?

"It works on my machine" - Docker eliminates this problem! Used by Netflix, Spotify, PayPal, and millions of developers worldwide to ship software faster and more reliably.

🏗️ Docker Architecture
📝 Dockerfile → 📦 Image → 🐳 Container → ☁️ Deploy
Key Concepts:
  • Image: A read-only template with instructions for creating a container
  • Container: A runnable instance of an image
  • Dockerfile: A text file with instructions to build an image
  • Registry: A repository for storing and distributing Docker images
🔄 Containers vs Virtual Machines
Containers:
  ✅ Lightweight (MBs)
  ✅ Start in seconds
  ✅ Share the host OS kernel
Virtual Machines:
  ❌ Heavy (GBs)
  ❌ Start in minutes
  ❌ Full guest OS per VM
2. Installation & Setup

Step 1: Install Docker Desktop

Download Docker Desktop for your operating system from docker.com

Windows Installation

PowerShell (Admin)
$ # Download and install Docker Desktop from docker.com
$ # Or use winget
$ winget install Docker.DockerDesktop
Found Docker Desktop [Docker.DockerDesktop]
Starting package install...

Linux Installation (Ubuntu)

Terminal
$ # Update package index
$ sudo apt-get update

$ # Install dependencies
$ sudo apt-get install ca-certificates curl gnupg

$ # Add Docker's official GPG key
$ sudo install -m 0755 -d /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

$ # Add Docker repository
$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

$ # Install Docker Engine
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

$ # Add user to docker group (optional - run without sudo)
$ sudo usermod -aG docker $USER

macOS Installation

Terminal
$ # Install using Homebrew
$ brew install --cask docker
Installing docker...
Step 2: Verify Installation

Check if Docker is installed correctly

Terminal
$ # Check Docker version
$ docker --version
Docker version 24.0.7, build afdd53b

$ # Check Docker Compose version
$ docker compose version
Docker Compose version v2.23.0

$ # Run hello-world to test
$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.

✅ Docker Installed!

You're ready to start containerizing your applications!

3. Docker Images

Docker images are read-only templates that contain the application code, runtime, libraries, and dependencies. They serve as blueprints for creating containers.

Pulling Images

Terminal
$ # Pull an image from Docker Hub
$ docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
Digest: sha256:abc123...
Status: Downloaded newer image for nginx:latest

$ # Pull specific version/tag
$ docker pull python:3.11-slim

$ # Pull from different registry
$ docker pull gcr.io/google-containers/nginx

Managing Images

Terminal
$ # List all images
$ docker images
REPOSITORY   TAG         IMAGE ID       SIZE
nginx        latest      a8758716bb6a   187MB
python       3.11-slim   feba24b677d4   125MB
node         20-alpine   c8c8a343c813   181MB

$ # Remove an image
$ docker rmi nginx

$ # Remove all unused images
$ docker image prune -a

$ # Inspect image details
$ docker inspect nginx

$ # View image history/layers
$ docker history nginx

💡 Image Tags

image:latest - Default tag (not recommended for production)
image:1.0.0 - Specific version
image:alpine - Lightweight Alpine-based variant

4. Docker Containers

Containers are running instances of Docker images. They are isolated environments that contain everything needed to run your application.

Running Containers

Terminal
$ # Basic run command
$ docker run nginx

$ # Run in detached mode (background)
$ docker run -d nginx

$ # Run with port mapping
$ docker run -d -p 8080:80 nginx
# Access at http://localhost:8080

$ # Run with custom name
$ docker run -d --name my-nginx -p 8080:80 nginx

$ # Run with environment variables
$ docker run -d -e MYSQL_ROOT_PASSWORD=secret mysql

$ # Run interactively with terminal
$ docker run -it ubuntu bash

$ # Run with automatic removal on exit
$ docker run --rm -it python:3.11 python

Managing Containers

Terminal
$ # List running containers
$ docker ps
CONTAINER ID   IMAGE   COMMAND   STATUS         PORTS                  NAMES
a1b2c3d4e5f6   nginx   ...       Up 2 minutes   0.0.0.0:8080->80/tcp   my-nginx

$ # List all containers (including stopped)
$ docker ps -a

$ # Stop a container
$ docker stop my-nginx

$ # Start a stopped container
$ docker start my-nginx

$ # Restart a container
$ docker restart my-nginx

$ # Remove a container
$ docker rm my-nginx

$ # Force remove running container
$ docker rm -f my-nginx

$ # Remove all stopped containers
$ docker container prune

Container Interaction

Terminal
$ # View container logs
$ docker logs my-nginx

$ # Follow logs in real-time
$ docker logs -f my-nginx

$ # Execute command inside container
$ docker exec my-nginx ls /usr/share/nginx/html

$ # Get interactive shell inside container
$ docker exec -it my-nginx bash

$ # Copy files to/from container
$ docker cp index.html my-nginx:/usr/share/nginx/html/
$ docker cp my-nginx:/etc/nginx/nginx.conf ./

$ # View container resource usage
$ docker stats

$ # Inspect container details
$ docker inspect my-nginx
5. Dockerfile

A Dockerfile is a text file containing instructions to build a Docker image. Each instruction creates a layer in the image.

Basic Dockerfile

Dockerfile
# Base image
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Copy requirements first (for caching)
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose port
EXPOSE 8000

# Set environment variables
ENV PYTHONUNBUFFERED=1

# Run the application
CMD ["python", "app.py"]
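The Dockerfile above assumes an app.py and a requirements.txt exist in the build context. As a hypothetical stand-in (a real service would start a web server listening on port 8000), the smallest possible app.py could be:

```python
# app.py -- trivial hypothetical stand-in for the app the Dockerfile runs
import os


def greeting() -> str:
    # PYTHONUNBUFFERED=1 in the Dockerfile makes this print show up
    # immediately in `docker logs` instead of being buffered
    return f"Hello from Docker! Running as PID {os.getpid()}"


if __name__ == "__main__":
    print(greeting())
```

Building with `docker build -t myapp .` and running `docker run myapp` would then print the greeting and exit; a long-running server would keep the container alive instead.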

Node.js Dockerfile

Dockerfile
# Use Node.js Alpine image
FROM node:20-alpine

# Create app directory
WORKDIR /usr/src/app

# Copy package files
COPY package*.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# Copy app source
COPY . .

# Create non-root user
RUN addgroup -g 1001 -S nodejs \
    && adduser -S -u 1001 -G nodejs nodeuser
USER nodeuser

# Expose port
EXPOSE 3000

# Start command
CMD ["node", "server.js"]

Building Images

Terminal
$ # Build image from Dockerfile
$ docker build -t myapp:1.0 .

$ # Build with specific Dockerfile
$ docker build -f Dockerfile.prod -t myapp:prod .

$ # Build with build arguments
$ docker build --build-arg NODE_ENV=production -t myapp .

$ # Build without cache
$ docker build --no-cache -t myapp .

$ # Run the built image
$ docker run -d -p 8000:8000 myapp:1.0

💡 Dockerfile Instructions

FROM - Base image
WORKDIR - Set working directory
COPY - Copy files from host to image
RUN - Execute commands during build
ENV - Set environment variables
EXPOSE - Document exposed ports
CMD - Default command to run
ENTRYPOINT - Configure container executable
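CMD and ENTRYPOINT interact: ENTRYPOINT fixes the executable, while CMD supplies default arguments that `docker run` can override. A sketch, for a hypothetical image named myapp:

```dockerfile
# ENTRYPOINT always runs; CMD provides overridable default arguments
ENTRYPOINT ["python", "app.py"]
CMD ["--port", "8000"]

# docker run myapp               -> python app.py --port 8000
# docker run myapp --port 9000   -> python app.py --port 9000
```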

6. Docker Compose

Docker Compose is a tool for defining and running multi-container applications. Use a YAML file to configure your application's services.

Basic docker-compose.yml

docker-compose.yml
version: '3.8'

services:
  web:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DEBUG=true
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
    depends_on:
      - db
      - redis
    volumes:
      - ./app:/app
    networks:
      - app-network

  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    networks:
      - app-network

volumes:
  postgres_data:

networks:
  app-network:
    driver: bridge
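One caveat with the file above: depends_on only controls start order, it does not wait for the database to be ready to accept connections. A sketch of a readiness gate using a healthcheck (pg_isready ships in the official postgres image):

```yaml
# Sketch: start 'web' only once the database passes its healthcheck
services:
  db:
    image: postgres:15-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 5s
      timeout: 3s
      retries: 5
  web:
    depends_on:
      db:
        condition: service_healthy
```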

Docker Compose Commands

Terminal
$ # Start all services
$ docker compose up

$ # Start in detached mode
$ docker compose up -d

$ # Build and start
$ docker compose up --build

$ # Stop all services
$ docker compose down

$ # Stop and remove volumes
$ docker compose down -v

$ # View logs
$ docker compose logs -f

$ # View logs for specific service
$ docker compose logs -f web

$ # List running services
$ docker compose ps

$ # Execute command in service
$ docker compose exec web bash

$ # Scale a service
$ docker compose up -d --scale web=3

Full Stack Example

docker-compose.yml (Django + React + PostgreSQL)
version: '3.8'

services:
  # React Frontend
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - ./frontend:/app
      - /app/node_modules
    environment:
      - REACT_APP_API_URL=http://localhost:8000

  # Django Backend
  backend:
    build:
      context: ./backend
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
    environment:
      - DEBUG=1
      - DATABASE_URL=postgres://postgres:postgres@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    command: python manage.py runserver 0.0.0.0:8000

  # PostgreSQL Database
  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres

  # Redis Cache
  redis:
    image: redis:7-alpine

  # Nginx Reverse Proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - frontend
      - backend

volumes:
  postgres_data:
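The nginx service above mounts a ./nginx.conf that is not shown. A minimal hypothetical version that proxies /api to Django and everything else to React (service names resolve via Docker's internal DNS) might look like:

```nginx
# nginx.conf -- hypothetical minimal proxy for the compose file above
events {}

http {
  server {
    listen 80;

    # API traffic goes to the Django backend
    location /api/ {
      proxy_pass http://backend:8000;
    }

    # Everything else goes to the React dev server
    location / {
      proxy_pass http://frontend:3000;
    }
  }
}
```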
7. Docker Volumes

Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. They are managed by Docker and exist outside the container lifecycle.

📂 Volume Types
  • Named Volumes - managed by Docker
  • Bind Mounts - a directory on the host
  • tmpfs - stored in memory only
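In a compose file, the three types look like this (the service name and paths are hypothetical):

```yaml
services:
  app:
    image: myapp
    volumes:
      - appdata:/var/lib/app       # named volume (Docker-managed)
      - ./config:/etc/app:ro       # bind mount (host directory, read-only)
    tmpfs:
      - /tmp                       # tmpfs (memory only, lost on stop)

volumes:
  appdata:
```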

Working with Volumes

Terminal
$ # Create a volume
$ docker volume create mydata

$ # List volumes
$ docker volume ls
DRIVER    VOLUME NAME
local     mydata
local     postgres_data

$ # Inspect volume
$ docker volume inspect mydata

$ # Use named volume
$ docker run -d -v mydata:/app/data myapp

$ # Use bind mount (host directory)
$ docker run -d -v $(pwd)/data:/app/data myapp

$ # Read-only mount
$ docker run -d -v $(pwd)/config:/app/config:ro myapp

$ # Remove volume
$ docker volume rm mydata

$ # Remove all unused volumes
$ docker volume prune

⚠️ Data Persistence

Container data is ephemeral! Always use volumes for databases and important data. Without volumes, data is lost when the container is removed.

8. Docker Networks

Docker networks enable containers to communicate with each other and the outside world. Containers on the same network can reach each other by container name.

Network Commands

Terminal
$ # List networks
$ docker network ls
NETWORK ID   NAME     DRIVER   SCOPE
abc123       bridge   bridge   local
def456       host     host     local
ghi789       none     null     local

$ # Create a network
$ docker network create mynetwork

$ # Run containers on a specific network
$ docker run -d --network mynetwork --name web nginx
$ docker run -d --network mynetwork --name db postgres

$ # Now 'web' can reach 'db' by name
$ docker exec web ping db

$ # Connect existing container to network
$ docker network connect mynetwork mycontainer

$ # Disconnect from network
$ docker network disconnect mynetwork mycontainer

$ # Inspect network
$ docker network inspect mynetwork

$ # Remove network
$ docker network rm mynetwork

💡 Network Drivers

bridge - Default, isolated network on single host
host - Container uses host's network directly
overlay - Multi-host networking (Swarm)
none - No networking
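Networks can also segment a stack. In this hypothetical compose sketch, only api shares a network with db, so web cannot reach the database directly:

```yaml
services:
  web:
    image: nginx:alpine
    networks: [frontend]
  api:
    image: myapp
    networks: [frontend, backend]
  db:
    image: postgres:15-alpine
    networks: [backend]

networks:
  frontend:
  backend:
```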

9. Docker Hub & Registry

Docker Hub is a cloud-based registry service for sharing Docker images. You can also use private registries for proprietary images.

Push & Pull Images

Terminal
$ # Login to Docker Hub
$ docker login
Username: yourusername
Password: ********
Login Succeeded

$ # Tag image for Docker Hub
$ docker tag myapp:1.0 yourusername/myapp:1.0

$ # Push to Docker Hub
$ docker push yourusername/myapp:1.0
The push refers to repository [docker.io/yourusername/myapp]
1.0: digest: sha256:abc123... size: 1234

$ # Pull from Docker Hub
$ docker pull yourusername/myapp:1.0

$ # Logout
$ docker logout

Private Registry

Terminal
$ # Run local registry
$ docker run -d -p 5000:5000 --name registry registry:2

$ # Tag for local registry
$ docker tag myapp localhost:5000/myapp

$ # Push to local registry
$ docker push localhost:5000/myapp

$ # Pull from local registry
$ docker pull localhost:5000/myapp
10. Essential Commands

Command Cheat Sheet

Docker Commands Reference
# ═══════════════════════════════════════════════════════════
# IMAGES
# ═══════════════════════════════════════════════════════════
docker images                    # List images
docker pull <image>              # Pull image
docker build -t <name> .         # Build image
docker rmi <image>               # Remove image
docker image prune               # Remove unused images

# ═══════════════════════════════════════════════════════════
# CONTAINERS
# ═══════════════════════════════════════════════════════════
docker run <image>               # Run container
docker run -d <image>            # Run detached
docker run -p 8080:80 <image>    # Port mapping
docker run --name myapp <image>  # Named container
docker run -e VAR=val <image>    # Environment var
docker run -v /host:/container <image>  # Volume mount

docker ps                        # List running
docker ps -a                     # List all
docker stop <container>          # Stop container
docker start <container>         # Start container
docker restart <container>       # Restart container
docker rm <container>            # Remove container
docker rm -f <container>         # Force remove

docker logs <container>          # View logs
docker logs -f <container>       # Follow logs
docker exec -it <container> bash # Shell access
docker inspect <container>       # Container info
docker stats                     # Resource usage

# ═══════════════════════════════════════════════════════════
# DOCKER COMPOSE
# ═══════════════════════════════════════════════════════════
docker compose up                # Start services
docker compose up -d             # Start detached
docker compose up --build        # Build and start
docker compose down              # Stop services
docker compose down -v           # Stop and remove volumes
docker compose logs              # View logs
docker compose ps                # List services
docker compose exec <svc> bash   # Shell into service

# ═══════════════════════════════════════════════════════════
# CLEANUP
# ═══════════════════════════════════════════════════════════
docker system prune              # Clean all unused
docker system prune -a           # Clean everything
docker container prune           # Remove stopped containers
docker image prune -a            # Remove all unused images
docker volume prune              # Remove unused volumes
docker network prune             # Remove unused networks
11. Multi-stage Builds

Multi-stage builds allow you to use multiple FROM statements in a Dockerfile. This helps create smaller, production-ready images by leaving build dependencies behind.

React App Multi-stage Build

Dockerfile
# ═══════════════════════════════════════════════════════════
# Stage 1: Build
# ═══════════════════════════════════════════════════════════
FROM node:20-alpine AS builder

WORKDIR /app

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm ci

# Copy source code
COPY . .

# Build the app
RUN npm run build

# ═══════════════════════════════════════════════════════════
# Stage 2: Production
# ═══════════════════════════════════════════════════════════
FROM nginx:alpine AS production

# Copy built assets from builder stage
COPY --from=builder /app/build /usr/share/nginx/html

# Copy nginx config
COPY nginx.conf /etc/nginx/nginx.conf

# Expose port
EXPOSE 80

# Start nginx
CMD ["nginx", "-g", "daemon off;"]

Go App Multi-stage Build

Dockerfile
# Build stage
FROM golang:1.21-alpine AS builder

WORKDIR /app

COPY go.mod go.sum ./
RUN go mod download

COPY . .

# Build static binary
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/main .

# Final stage - scratch is empty image!
FROM scratch

# Copy binary from builder
COPY --from=builder /app/main /main

# Result: ~10MB image instead of ~1GB!
ENTRYPOINT ["/main"]
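One caveat: scratch contains nothing at all, not even CA certificates or timezone data, so HTTPS calls from the binary will fail with certificate errors. A common fix is to copy those files in from the builder stage:

```dockerfile
# Additions for a scratch image whose binary makes TLS connections
FROM golang:1.21-alpine AS builder
RUN apk add --no-cache ca-certificates tzdata
# ... build steps as above ...

FROM scratch
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /usr/share/zoneinfo /usr/share/zoneinfo
COPY --from=builder /app/main /main
ENTRYPOINT ["/main"]
```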

✅ Benefits of Multi-stage Builds

• Smaller image sizes (often 10x smaller!)
• No build tools in production image
• Reduced attack surface
• Faster deployments

12. Best Practices

Dockerfile Best Practices

Dockerfile (Production Ready)
# ✅ Use specific version tags, not 'latest'
FROM python:3.11.6-slim-bookworm

# ✅ Set labels for metadata
LABEL maintainer="[email protected]"
LABEL version="1.0"

# ✅ Set working directory early
WORKDIR /app

# ✅ Copy dependency files first (layer caching)
COPY requirements.txt .

# ✅ Combine RUN commands to reduce layers
# ✅ Clean up in same layer
RUN pip install --no-cache-dir -r requirements.txt \
    && rm -rf /root/.cache/pip

# ✅ Copy source code after dependencies
COPY . .

# ✅ Create non-root user for security
RUN addgroup --system --gid 1001 appgroup \
    && adduser --system --uid 1001 --gid 1001 appuser

# ✅ Change ownership of app files
RUN chown -R appuser:appgroup /app

# ✅ Switch to non-root user
USER appuser

# ✅ Document exposed port
EXPOSE 8000

# ✅ Use exec form for CMD
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
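A HEALTHCHECK instruction lets Docker report whether the app is actually serving, not just whether the process is running. A sketch for the image above, assuming the app exposes a /health endpoint (hypothetical):

```dockerfile
# Mark the container unhealthy if /health stops responding
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
    CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')" || exit 1
```

`docker ps` then shows the health status, and orchestrators can restart or route around unhealthy containers.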

.dockerignore File

.dockerignore
# Git
.git
.gitignore

# Docker
Dockerfile*
docker-compose*
.docker

# Dependencies
node_modules
__pycache__
*.pyc
.venv
venv

# IDE
.vscode
.idea
*.swp
*.swo

# Build
build
dist
*.egg-info

# Logs
*.log
logs

# Environment
.env
.env.local
*.env

# Tests
tests
test
coverage
.coverage

# Documentation
docs
*.md
!README.md

Security Best Practices

🔒 Security Checklist

✅ Use official base images
✅ Run as non-root user
✅ Use specific version tags
✅ Scan images for vulnerabilities: docker scout cves myimage
✅ Don't store secrets in images
✅ Use multi-stage builds
✅ Keep images minimal (Alpine-based)
✅ Update base images regularly
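"Don't store secrets in images" means never COPY or ENV a credential into a Dockerfile, because it persists in the image layers even if deleted later. With BuildKit, a build-time secret can be mounted so it never lands in any layer (the secret id and source file here are hypothetical):

```dockerfile
# syntax=docker/dockerfile:1
# Build with: docker build --secret id=npmrc,src=$HOME/.npmrc .
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
```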

Performance Tips

⚡ Optimization Tips

• Order Dockerfile commands from least to most frequently changing
• Use .dockerignore to reduce build context
• Combine RUN commands with && to reduce layers
• Use --no-cache-dir with pip
• Clean up in the same RUN command
• Use Alpine images when possible (~5MB vs ~100MB+)

🐳 Congratulations!

You've mastered Docker from basics to advanced! You're now ready to containerize any application and deploy it anywhere. Keep building!