Om Pandey

DevOps Engineering

Bridging Development and Operations

Master the art of DevOps - from Linux fundamentals to Kubernetes orchestration. Learn CI/CD pipelines, Infrastructure as Code, containerization, and cloud technologies to become a complete DevOps Engineer!

Linux Docker Kubernetes CI/CD Cloud

DevOps Roadmap

1

What is DevOps?

DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. DevOps is complementary to Agile software development.

Why DevOps?

Faster Delivery: Automate everything from code to production. Companies like Netflix, Amazon, and Google deploy thousands of times per day using DevOps practices!

DevOps Lifecycle (Infinity Loop)
Plan
Code
Build
Test
Deploy
Monitor

DevOps Tools Ecosystem

Linux

Operating System

Git

Version Control

Docker

Containerization

Kubernetes

Orchestration

Jenkins

CI/CD

Terraform

Infrastructure as Code

Ansible

Configuration

AWS/Azure

Cloud Platform

2

Linux Fundamentals

Linux is the backbone of DevOps. Almost all servers, containers, and cloud infrastructure run on Linux. Mastering Linux commands is essential for any DevOps engineer.

Essential Linux Commands

Terminal - File System Navigation
$ # Print working directory
$ pwd
/home/user
$ # List files and directories
$ ls -la
drwxr-xr-x 5 user user 4096 Jan 25 10:00 .
drwxr-xr-x 3 root root 4096 Jan 24 09:00 ..
-rw-r--r-- 1 user user  220 Jan 24 09:00 .bashrc
drwxr-xr-x 2 user user 4096 Jan 25 10:00 projects
$ # Change directory
$ cd /var/log
$ # Create directory
$ mkdir -p projects/devops
$ # Create file
$ touch file.txt
$ # Copy file
$ cp source.txt destination.txt
$ # Move/Rename file
$ mv oldname.txt newname.txt
$ # Remove file
$ rm file.txt
$ # Remove directory recursively
$ rm -rf directory/
Terminal - File Operations
$ # View file content
$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost
$ # View with pagination
$ less /var/log/syslog
$ # View first 10 lines
$ head -n 10 file.txt
$ # View last 10 lines (follow mode)
$ tail -f /var/log/nginx/access.log
$ # Search in file
$ grep "error" /var/log/syslog
$ # Search recursively
$ grep -r "TODO" ./src/
$ # Find files
$ find / -name "*.log" -type f
$ # Edit file with nano/vim
$ nano config.yaml
$ vim config.yaml
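These commands also compose through pipes. As a small sketch (the `access.log` file and its contents are made up here), counting unique client IPs in a web log:

```shell
# build a tiny sample log, then extract the first column,
# deduplicate it, and count the result
printf '1.1.1.1 GET /\n2.2.2.2 GET /\n1.1.1.1 POST /\n' > access.log
awk '{print $1}' access.log | sort -u | wc -l
```

The same `awk | sort | wc` pattern works on real Nginx or Apache logs.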

User & Permission Management

Terminal - Users & Permissions
$ # Current user
$ whoami
devops
$ # Switch to root user
$ sudo su -
$ # Create new user
$ sudo useradd -m -s /bin/bash newuser
$ # Set password
$ sudo passwd newuser
$ # Add user to group
$ sudo usermod -aG docker newuser
$ # Change file permissions
$ chmod 755 script.sh   # rwxr-xr-x
$ chmod +x script.sh    # Add execute permission
$ # Change ownership
$ chown user:group file.txt
$ chown -R user:group directory/

Permission Numbers

r=4, w=2, x=1
755 = Owner(rwx=7), Group(r-x=5), Others(r-x=5)
644 = Owner(rw-=6), Group(r--=4), Others(r--=4)
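You can verify that the numbers map to the symbolic form with GNU `stat` (the `demo.sh` file below is just a scratch file for the demonstration):

```shell
# create a scratch file and flip its permissions
touch demo.sh
chmod 755 demo.sh
stat -c '%a %A' demo.sh   # numeric and symbolic form: 755 -rwxr-xr-x
chmod 644 demo.sh
stat -c '%a %A' demo.sh   # 644 -rw-r--r--
```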

Process & System Management

Terminal - Process Management
$ # View running processes
$ ps aux
$ # Interactive process viewer
$ top
$ htop   # Better version (install first)
$ # Kill process by PID
$ kill 1234
$ kill -9 1234   # Force kill
$ # Kill process by name
$ pkill nginx
$ # System info
$ uname -a
Linux server 5.15.0-91-generic #101-Ubuntu SMP x86_64 GNU/Linux
$ # Memory usage
$ free -h
       total  used   free   shared  buff/cache  available
Mem:    15Gi  4.2Gi  8.1Gi   234Mi       3.1Gi       10Gi
Swap:  2.0Gi    0B  2.0Gi
$ # Disk usage
$ df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sda1   100G   45G    55G   45%  /
$ # Directory size
$ du -sh /var/log
2.3G /var/log
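`ps` output can be shaped with `-o` and fed into pipelines; for example, a one-liner (a sketch, not a full monitoring solution) that counts how many processes the current user owns:

```shell
# -o pid= prints only PIDs with no header, so wc -l counts processes
ps -u "$(whoami)" -o pid= | wc -l
```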

Package Management

Terminal - Ubuntu/Debian (apt)
$ # Update package list
$ sudo apt update
$ # Upgrade all packages
$ sudo apt upgrade -y
$ # Install package
$ sudo apt install nginx docker.io -y
$ # Remove package
$ sudo apt remove nginx
$ # Search package
$ apt search docker
Terminal - CentOS/RHEL (yum/dnf)
$ # Update packages
$ sudo yum update -y
$ # Install package
$ sudo yum install nginx -y
$ # For newer versions (Fedora, RHEL 8+)
$ sudo dnf install docker -y
3

Git Version Control

Git is the most widely used version control system. It tracks changes in source code during software development and enables collaboration among developers.

Git Basics

Terminal - Git Setup & Basics
$ # Configure Git
$ git config --global user.name "Om Pandey"
$ git config --global user.email "[email protected]"
$ # Initialize repository
$ git init
Initialized empty Git repository in /home/user/project/.git/
$ # Clone repository
$ git clone https://github.com/user/repo.git
$ # Check status
$ git status
$ # Add files to staging
$ git add file.txt
$ git add .   # Add all files
$ # Commit changes
$ git commit -m "Add new feature"
$ # View commit history
$ git log --oneline
a1b2c3d Add new feature
e4f5g6h Initial commit
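Before running `git add .`, it usually pays to tell Git what to skip. A hypothetical `.gitignore` for a Python project might look like:

```
# .gitignore (hypothetical example) — paths Git should never track
__pycache__/
*.pyc
.env            # local secrets, never commit
venv/
node_modules/
```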

Branching & Merging

Terminal - Branches
$ # List branches
$ git branch
* main
  feature-login
  develop
$ # Create new branch
$ git branch feature-auth
$ # Switch to branch
$ git checkout feature-auth
$ git switch feature-auth   # New syntax
$ # Create and switch
$ git checkout -b feature-new
$ # Merge branch into current
$ git checkout main
$ git merge feature-auth
$ # Delete branch
$ git branch -d feature-auth

Remote Repositories

Terminal - Remote Operations
$ # Add remote
$ git remote add origin https://github.com/user/repo.git
$ # View remotes
$ git remote -v
origin https://github.com/user/repo.git (fetch)
origin https://github.com/user/repo.git (push)
$ # Push to remote
$ git push origin main
$ git push -u origin main   # Set upstream
$ # Pull from remote
$ git pull origin main
$ # Fetch changes (without merge)
$ git fetch origin
$ # Rebase
$ git pull --rebase origin main
Git Workflow
Working Dir
Staging
Local Repo
Remote
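The first three stages of the workflow above can be walked end to end in a throwaway repository (the temp directory and demo identity below are made up for the sketch; the remote step would need a real server):

```shell
# create an isolated scratch repository
repo="$(mktemp -d)" && cd "$repo"
git init -q
git config user.email "[email protected]"   # local identity for this demo only
git config user.name "Demo User"

echo "hello" > file.txt              # 1. working directory: file created
git add file.txt                     # 2. staging area: change staged
git commit -q -m "Initial commit"    # 3. local repo: change recorded
git log --oneline | wc -l            # one commit in history
```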
4

Docker Containers

Docker is a platform for developing, shipping, and running applications in containers. Containers are lightweight, portable, and ensure consistency across different environments.

Containers vs VMs

Containers share the host OS kernel, making them much lighter than VMs. A container can start in seconds, while a VM takes minutes!

Install Docker

Terminal - Docker Installation
$ # Install Docker on Ubuntu
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ # Add user to docker group
$ sudo usermod -aG docker $USER
$ # Verify installation
$ docker --version
Docker version 24.0.7, build afdd53b
$ # Run test container
$ docker run hello-world
Hello from Docker!

Docker Commands

Terminal - Container Operations
$ # Pull image
$ docker pull nginx:latest
$ # List images
$ docker images
REPOSITORY   TAG      IMAGE ID       SIZE
nginx        latest   a6bd71f48f68   187MB
python       3.11     12abc3def456   1.01GB
$ # Run container
$ docker run -d --name web -p 8080:80 nginx
$ # Run interactive container
$ docker run -it ubuntu:22.04 /bin/bash
$ # List running containers
$ docker ps
CONTAINER ID   IMAGE   STATUS         PORTS                  NAMES
abc123def456   nginx   Up 2 minutes   0.0.0.0:8080->80/tcp   web
$ # List all containers (including stopped)
$ docker ps -a
$ # Stop container
$ docker stop web
$ # Start container
$ docker start web
$ # Remove container
$ docker rm web
$ # Remove image
$ docker rmi nginx:latest
$ # View container logs
$ docker logs web
$ docker logs -f web   # Follow mode
$ # Execute command in container
$ docker exec -it web /bin/bash

Dockerfile

A Dockerfile is a text file containing instructions to build a Docker image.

Dockerfile
Docker
# Base image
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Copy requirements first (for caching)
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose port
EXPOSE 8000

# Environment variable
ENV PYTHONUNBUFFERED=1

# Run command
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Terminal - Build & Run
$ # Build image
$ docker build -t myapp:v1 .
$ # Run container from image
$ docker run -d -p 8000:8000 --name myapp myapp:v1
$ # Tag and push to registry
$ docker tag myapp:v1 username/myapp:v1
$ docker push username/myapp:v1
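Build context size affects build speed, and `COPY . .` copies everything in it. A hypothetical `.dockerignore` (same syntax as `.gitignore`) keeps secrets and junk out of the image:

```
# .dockerignore (hypothetical example) — excluded from the build context
.git
__pycache__/
*.pyc
.env            # never bake local secrets into an image
node_modules/
```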

Docker Compose

Docker Compose allows you to define and run multi-container applications.

docker-compose.yml
YAML
version: '3.8'

services:
  web:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DEBUG=True
      - DATABASE_URL=postgres://postgres:password@db:5432/mydb
    depends_on:
      - db
      - redis
    volumes:
      - .:/app
    networks:
      - app-network

  db:
    image: postgres:15
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    networks:
      - app-network

  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - web
    networks:
      - app-network

volumes:
  postgres_data:

networks:
  app-network:
    driver: bridge
Terminal - Docker Compose Commands
$ # Start all services
$ docker-compose up -d
$ # View logs
$ docker-compose logs -f web
$ # Stop services
$ docker-compose down
$ # Rebuild and start
$ docker-compose up --build -d
$ # Scale services
$ docker-compose up --scale web=3 -d
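Rather than hardcoding values like the database password in the YAML above, Compose can read a `.env` file from the project directory and substitute `${VAR}` references in the compose file (e.g. `POSTGRES_PASSWORD=${POSTGRES_PASSWORD}`). A hypothetical sketch:

```
# .env (hypothetical) — values substituted into docker-compose.yml
POSTGRES_PASSWORD=changeme
DEBUG=False
```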
5

Kubernetes Orchestration

Kubernetes (K8s) is an open-source container orchestration platform that automates deployment, scaling, and management of containerized applications.

Kubernetes Architecture
kubectl
API Server
etcd
Nodes
Pods

kubectl Commands

Terminal - kubectl Basics
$ # Get cluster info
$ kubectl cluster-info
$ # List nodes
$ kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
master-node    Ready    control-plane   10d   v1.28.0
worker-node1   Ready    worker          10d   v1.28.0
$ # List all pods
$ kubectl get pods -A
$ # List pods in namespace
$ kubectl get pods -n default
$ # Get all resources
$ kubectl get all
$ # Describe resource
$ kubectl describe pod my-pod
$ # View logs
$ kubectl logs my-pod
$ kubectl logs -f my-pod   # Follow
$ # Execute in pod
$ kubectl exec -it my-pod -- /bin/bash
$ # Apply configuration
$ kubectl apply -f deployment.yaml
$ # Delete resource
$ kubectl delete pod my-pod
$ kubectl delete -f deployment.yaml

Kubernetes Manifests

deployment.yaml
YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
          requests:
            memory: "64Mi"
            cpu: "250m"
        env:
        - name: ENV
          value: "production"
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
configmap.yaml
YAML
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "postgres-service"
  DATABASE_PORT: "5432"
  CACHE_HOST: "redis-service"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DATABASE_PASSWORD: cGFzc3dvcmQxMjM=  # base64 encoded
  API_KEY: c2VjcmV0a2V5MTIz
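Note that Secret values are base64-encoded, not encrypted — anyone with read access to the manifest can decode them. The encoding round-trip for the first value above:

```shell
# encode a secret value for a Kubernetes Secret manifest
# (-n prevents echo from appending a newline to the encoded value)
echo -n 'password123' | base64
# decode it back to verify
echo 'cGFzc3dvcmQxMjM=' | base64 -d
```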

Scaling & Updates

Terminal - Scaling Operations
$ # Scale deployment
$ kubectl scale deployment web-app --replicas=5
$ # Autoscale based on CPU
$ kubectl autoscale deployment web-app --min=2 --max=10 --cpu-percent=80
$ # Rolling update
$ kubectl set image deployment/web-app web=nginx:1.25
$ # Check rollout status
$ kubectl rollout status deployment/web-app
$ # Rollback
$ kubectl rollout undo deployment/web-app
$ # View rollout history
$ kubectl rollout history deployment/web-app
6

CI/CD Pipelines

CI/CD (Continuous Integration/Continuous Deployment) automates the software delivery process. CI ensures code changes are automatically tested, while CD automates deployment to production.

CI/CD Pipeline Flow
Code Push
Build
Test
Docker
Deploy

GitHub Actions

.github/workflows/ci-cd.yml
YAML
name: CI/CD Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt

      - name: Run tests
        run: |
          pytest --cov=app tests/

      - name: Lint code
        run: |
          pip install flake8
          flake8 app/

  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Login to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

  deploy:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Deploy to Kubernetes
        uses: azure/k8s-deploy@v4
        with:
          manifests: |
            k8s/deployment.yaml
            k8s/service.yaml
          images: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

Jenkins Pipeline

Jenkinsfile
Groovy
pipeline {
    agent any
    
    environment {
        DOCKER_IMAGE = 'myapp'
        DOCKER_TAG = "${BUILD_NUMBER}"
    }
    
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        
        stage('Build') {
            steps {
                sh 'pip install -r requirements.txt'
            }
        }
        
        stage('Test') {
            steps {
                sh 'pytest tests/ --junitxml=test-results.xml'
            }
            post {
                always {
                    junit 'test-results.xml'
                }
            }
        }
        
        stage('Docker Build') {
            steps {
                sh "docker build -t ${DOCKER_IMAGE}:${DOCKER_TAG} ."
            }
        }
        
        stage('Push to Registry') {
            steps {
                withCredentials([usernamePassword(
                    credentialsId: 'docker-hub',
                    usernameVariable: 'DOCKER_USER',
                    passwordVariable: 'DOCKER_PASS'
                )]) {
                    sh 'echo $DOCKER_PASS | docker login -u $DOCKER_USER --password-stdin'
                    sh "docker push ${DOCKER_IMAGE}:${DOCKER_TAG}"
                }
            }
        }
        
        stage('Deploy') {
            when {
                branch 'main'
            }
            steps {
                sh "kubectl apply -f k8s/"
                sh "kubectl set image deployment/myapp myapp=${DOCKER_IMAGE}:${DOCKER_TAG}"
            }
        }
    }
    
    post {
        success {
            slackSend channel: '#deployments',
                      color: 'good',
                      message: "Deployment successful: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
        failure {
            slackSend channel: '#deployments',
                      color: 'danger',
                      message: "Deployment failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
    }
}
7

Terraform IaC

Terraform is an Infrastructure as Code (IaC) tool that lets you define and provision infrastructure using declarative configuration files. It supports multiple cloud providers.

Terraform Basics

Terminal - Terraform Commands
$ # Initialize Terraform
$ terraform init
Terraform has been successfully initialized!
$ # Format code
$ terraform fmt
$ # Validate configuration
$ terraform validate
Success! The configuration is valid.
$ # Plan changes
$ terraform plan
$ # Apply changes
$ terraform apply
$ # Apply without confirmation
$ terraform apply -auto-approve
$ # Show current state
$ terraform show
$ # Destroy infrastructure
$ terraform destroy

AWS Infrastructure

main.tf
HCL
# Configure AWS Provider
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# Variables
variable "instance_type" {
  description = "EC2 instance type"
  default     = "t3.micro"
}

# VPC
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  
  tags = {
    Name = "main-vpc"
  }
}

# Subnet
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "us-east-1a"
  
  tags = {
    Name = "public-subnet"
  }
}

# Security Group
resource "aws_security_group" "web" {
  name        = "web-sg"
  description = "Allow HTTP and SSH"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# EC2 Instance
resource "aws_instance" "web" {
  ami                    = "ami-0c7217cdde317cfec"
  instance_type          = var.instance_type
  subnet_id              = aws_subnet.public.id
  vpc_security_group_ids = [aws_security_group.web.id]
  
  user_data = <<-EOF
              #!/bin/bash
              apt update -y
              apt install -y nginx
              systemctl start nginx
              EOF

  tags = {
    Name = "web-server"
  }
}

# Output
output "public_ip" {
  value = aws_instance.web.public_ip
}
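The `instance_type` variable above can be overridden without editing `main.tf`, either through a `terraform.tfvars` file (a hypothetical sketch) or with `-var` on the command line:

```hcl
# terraform.tfvars (hypothetical) — overrides the variable's default
instance_type = "t3.small"
```

The one-off equivalent is `terraform apply -var="instance_type=t3.small"`.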
8

Ansible Configuration

Ansible is an open-source automation tool for configuration management, application deployment, and task automation. It uses YAML-based playbooks and is agentless: it connects to managed hosts over plain SSH, so nothing extra needs to be installed on them.

Inventory & Playbooks

inventory.ini
INI
# Inventory file
[webservers]
web1.example.com ansible_host=192.168.1.10
web2.example.com ansible_host=192.168.1.11

[dbservers]
db1.example.com ansible_host=192.168.1.20

[all:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/id_rsa
playbook.yml
YAML
---
- name: Configure Web Servers
  hosts: webservers
  become: yes
  vars:
    http_port: 80
    app_name: myapp
  
  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Install Nginx
      apt:
        name: nginx
        state: present

    - name: Copy Nginx config
      template:
        src: nginx.conf.j2
        dest: /etc/nginx/sites-available/{{ app_name }}
      notify: Restart Nginx

    - name: Enable site
      file:
        src: /etc/nginx/sites-available/{{ app_name }}
        dest: /etc/nginx/sites-enabled/{{ app_name }}
        state: link

    - name: Ensure Nginx is running
      service:
        name: nginx
        state: started
        enabled: yes

  handlers:
    - name: Restart Nginx
      service:
        name: nginx
        state: restarted
Terminal - Ansible Commands
$ # Ping all hosts
$ ansible all -i inventory.ini -m ping
$ # Run ad-hoc command
$ ansible webservers -i inventory.ini -m shell -a "uptime"
$ # Run playbook
$ ansible-playbook -i inventory.ini playbook.yml
$ # Dry run (check mode)
$ ansible-playbook -i inventory.ini playbook.yml --check
$ # Run with extra variables
$ ansible-playbook playbook.yml -e "app_name=newapp"
9

Cloud Platforms

Cloud platforms provide on-demand computing resources. The three major providers are AWS, Azure, and Google Cloud Platform (GCP).

AWS CLI

Terminal - AWS CLI
$ # Configure AWS CLI
$ aws configure
AWS Access Key ID: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name: us-east-1
Default output format: json
$ # S3 Operations
$ aws s3 ls
$ aws s3 mb s3://my-bucket
$ aws s3 cp file.txt s3://my-bucket/
$ aws s3 sync ./folder s3://my-bucket/folder
$ # EC2 Operations
$ aws ec2 describe-instances
$ aws ec2 start-instances --instance-ids i-1234567890abcdef0
$ aws ec2 stop-instances --instance-ids i-1234567890abcdef0
$ # EKS (Kubernetes)
$ aws eks update-kubeconfig --name my-cluster --region us-east-1

Key AWS Services for DevOps

EC2: Virtual servers | S3: Object storage | EKS: Kubernetes | ECR: Container registry | Lambda: Serverless | RDS: Databases | CloudWatch: Monitoring

10

Monitoring & Logging

Monitoring and logging are essential for maintaining healthy production systems. They help detect issues, analyze performance, and troubleshoot problems.

Prometheus

Metrics Collection

Grafana

Visualization

ELK Stack

Log Management

Alertmanager

Alerting

Prometheus Configuration

prometheus.yml
YAML
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          - alertmanager:9093

rule_files:
  - "alerts.yml"

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
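The config references `alerts.yml` without showing it. A minimal sketch of what such a rule file could contain (the alert name and 80% threshold are made up for illustration):

```yaml
# alerts.yml (sketch) — one alerting rule group loaded via rule_files
groups:
  - name: node-alerts
    rules:
      - alert: HighCPUUsage
        # CPU busy % = 100 minus the average idle rate over 5 minutes
        expr: 100 - avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100 > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "CPU usage above 80% for 5 minutes"
```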
11

Networking Basics

Understanding networking is crucial for DevOps. It helps in troubleshooting, configuring services, and securing applications.

Essential Network Commands

Terminal - Network Commands
$ # Check network interfaces
$ ip addr
$ ifconfig
$ # Test connectivity
$ ping google.com
$ ping -c 4 192.168.1.1
$ # DNS lookup
$ nslookup google.com
$ dig google.com
$ # Check open ports
$ netstat -tuln
$ ss -tuln
$ # Test port connectivity
$ telnet google.com 443
$ nc -zv google.com 443
$ # Trace route
$ traceroute google.com
$ # HTTP requests
$ curl -I https://api.example.com
$ curl -X POST -d '{"key":"value"}' https://api.example.com
$ # Download file
$ wget https://example.com/file.zip

Common Ports

22: SSH | 80: HTTP | 443: HTTPS | 3306: MySQL | 5432: PostgreSQL | 6379: Redis | 27017: MongoDB

12

DevSecOps

DevSecOps integrates security practices into the DevOps pipeline. Security should be built into every stage of the software development lifecycle.

Security Best Practices

1

Secrets Management

Never hardcode secrets. Use tools like HashiCorp Vault, AWS Secrets Manager, or environment variables.
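A small shell sketch of the fail-fast pattern (the variable name and value here are made up; in real use the value would be injected by Vault or a cloud secrets manager, never exported inline):

```shell
# simulate the secret being injected into the environment
export DATABASE_PASSWORD='s3cret'
# abort immediately with an error if the secret is missing,
# instead of failing later with a confusing connection error
: "${DATABASE_PASSWORD:?DATABASE_PASSWORD is not set}"
echo "password length: ${#DATABASE_PASSWORD}"
```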

2

Container Security

Scan images for vulnerabilities using Trivy, Snyk, or Aqua Security. Use minimal base images.

3

Infrastructure Security

Apply principle of least privilege. Use security groups, NACLs, and firewalls properly.

4

Code Security

Use SAST tools like SonarQube, Snyk, or Checkmarx to scan code for vulnerabilities.

Terminal - Security Scanning
$ # Scan Docker image with Trivy
$ trivy image myapp:latest
$ # Scan Kubernetes manifests
$ trivy config ./k8s/
$ # Scan code with Snyk
$ snyk test --all-projects
$ # Scan dependencies
$ snyk monitor
$ # Scan Terraform for misconfigurations
$ tfsec ./terraform/
$ # Check for secrets in code
$ gitleaks detect --source .
$ # OWASP dependency check
$ dependency-check --project myapp --scan ./

Security Checklist

1. Enable MFA on all accounts
2. Rotate credentials regularly
3. Use HTTPS everywhere
4. Keep software updated
5. Implement logging and monitoring
6. Regular security audits

Security in CI/CD Pipeline

.github/workflows/security.yml
YAML
name: Security Scan

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          severity: 'CRITICAL,HIGH'

      - name: Run Snyk to check for vulnerabilities
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}

      - name: Run GitLeaks
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: SonarCloud Scan
        uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

DevOps Journey Complete!

You've covered all the essential DevOps concepts! Remember, DevOps is a continuous learning journey. Keep practicing, building projects, and stay updated with the latest tools and best practices.

You're Now a DevOps Engineer!

From Linux fundamentals to Kubernetes orchestration, you've learned the complete DevOps toolkit. Now go build, automate, and deploy amazing applications!