
Containerization, Docker, and Production Deployment: A Complete Beginner-to-CI/CD Guide

Tags: Docker, Containers, DevOps, CI/CD, AWS, Kubernetes

A complete beginner-friendly deep dive into containerization, Docker internals, local development, Compose, production deployment on AWS, and zero-downtime CI/CD workflows.

You have probably experienced the classic problem: an app runs perfectly on your machine but fails somewhere else. Containerization solves this by packaging application code with its runtime, dependencies, system tools, and configuration into a portable unit called a container.

1. What Is Containerization?

Containerization is a way to run applications in isolated environments while sharing the host operating system kernel. This gives you consistency across development, testing, and production without the overhead of a full virtual machine for each app.

  • Virtual Machines virtualize hardware and include a full guest OS for each workload.
  • Containers virtualize the OS layer and package only app + dependencies.
  • Containers are usually lighter (MBs) and start much faster than VMs.
  • VMs provide stronger isolation boundaries, while containers focus on efficient process-level isolation.

Under the hood, Linux namespaces isolate what processes can see (PID, network, mount points, users), and cgroups limit what processes can consume (CPU, memory, I/O).
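On a Linux host you can see this machinery directly: every process carries a set of namespace memberships under /proc. The snippet below just reads them; it is illustrative and Linux-only, not Docker-specific.

```python
import os

ns_dir = "/proc/self/ns"  # Linux-only: lists the namespaces this process belongs to
if os.path.isdir(ns_dir):
    for ns in sorted(os.listdir(ns_dir)):
        # Each symlink target, e.g. 'pid:[4026531836]', identifies one namespace;
        # two processes in the same container share these identifiers.
        print(ns, os.readlink(os.path.join(ns_dir, ns)))
```

Containers are processes whose entries here differ from the host's; cgroup limits live under /sys/fs/cgroup in the same spirit.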

2. What Is Docker?

Docker made container workflows mainstream by standardizing image builds, distribution, and runtime management with a developer-friendly CLI and ecosystem.

Docker Brand vs Docker Technology

  • Docker, Inc. is the company.
  • Docker Engine is the open-source runtime that builds and runs containers.
  • Docker Desktop bundles Engine, CLI, Compose, and local tooling for macOS/Windows/Linux.
  • Docker Hub is a registry for storing and sharing images.
  • Docker CLI is the docker command-line client.

Docker Engine Architecture

YOU (Terminal)
  -> Docker Client (docker CLI)
    -> Docker Daemon (dockerd)
      -> containerd
        -> runc
          -> Container Process

When you run docker run, the CLI sends an API request to dockerd; dockerd delegates container lifecycle operations to containerd, and runc creates the isolated process according to the OCI runtime spec.

3. Where Can You Run Docker?

Linux (Native)

# after adding Docker's official apt repository (docker-ce is not in Ubuntu's default repos)
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
docker --version
docker run hello-world

Linux is the most direct environment because containers share the host Linux kernel with no extra virtualization layer.

macOS and Windows

macOS and Windows require a Linux VM layer because containers are Linux-native. On Windows, Docker Desktop with WSL 2 backend is typically the best balance of performance and developer experience.

# PowerShell (Admin)
wsl --install -d Ubuntu
# Restart, then verify inside WSL
docker run hello-world

4. Background Knowledge Before Docker

  • Linux basics: navigation, permissions, process management, package tools.
  • Networking: IPs, ports, DNS, localhost, TCP/UDP basics.
  • Client-server model: requests, responses, listening services.
  • YAML syntax: indentation, maps, arrays, quoting.
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    environment:
      - APP_ENV=production

5. Docker Terminology You Must Know

Image

An image is a read-only template used to create containers. Images are layered and cached, which makes rebuilds faster when unchanged layers are reused.

docker pull python:3.11-slim
docker images
docker rmi python:3.11-slim
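The layer-caching behavior can be sketched with a toy model (this is not Docker's actual algorithm): each layer's cache key covers its parent layer plus its own instruction, so editing an early instruction invalidates every layer after it, while editing a late one leaves the earlier cache hits intact.

```python
import hashlib

def layer_key(parent_key: str, instruction: str) -> str:
    # Toy cache key: depends on the parent layer and this instruction only.
    return hashlib.sha256(f"{parent_key}|{instruction}".encode()).hexdigest()[:12]

base = layer_key("", "FROM python:3.11-slim")
deps = layer_key(base, "RUN pip install -r requirements.txt")
code = layer_key(deps, "COPY . .")

# Changing only the final COPY produces a new last layer...
assert layer_key(deps, "COPY src/ .") != code
# ...but the dependency layer's key is unchanged, i.e. a cache hit.
assert layer_key(base, "RUN pip install -r requirements.txt") == deps
```

This is why Dockerfiles copy requirements.txt and install dependencies before copying the rest of the source: code edits then rebuild only the final layers.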

Container

A container is a running instance of an image. By default it is ephemeral, so data inside it disappears when removed unless persisted externally.

docker run -d --name my-app python:3.11-slim sleep 3600
docker ps
docker ps -a
docker stop my-app
docker rm my-app

Dockerfile

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "server.py"]

Daemon, Socket, and CLI

dockerd is the background service. The CLI talks to it through the Docker socket (commonly /var/run/docker.sock on Linux). Be careful exposing or mounting this socket; it effectively grants host-level Docker control.

Registry

A registry stores and distributes images. Docker Hub is the default; alternatives include Amazon ECR and GitHub Container Registry (GHCR).

docker tag my-app:latest myusername/my-app:v1.0
docker push myusername/my-app:v1.0
docker pull myusername/my-app:v1.0
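The name-and-tag convention is easy to get wrong when a registry host with a port is involved. Here is a simplified splitter that shows the rule; the full OCI reference grammar (digests, nested namespaces) has more cases that this sketch deliberately ignores.

```python
def split_image_ref(ref: str):
    """Split an image reference into (name, tag), simplified.

    A trailing ':tag' is a tag only if it contains no '/'; otherwise the
    colon belonged to a registry host:port and the tag defaults to 'latest'.
    """
    name, sep, tag = ref.rpartition(":")
    if not sep or "/" in tag:
        return ref, "latest"
    return name, tag

print(split_image_ref("python:3.11-slim"))      # ('python', '3.11-slim')
print(split_image_ref("nginx"))                 # ('nginx', 'latest')
print(split_image_ref("localhost:5000/app"))    # ('localhost:5000/app', 'latest')
```

The practical takeaway: an untagged reference implicitly means :latest, which is why production workflows pin explicit version tags.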

Logs, Metrics, Interactive Shell

docker stats
docker logs my-app
docker logs -f my-app
docker exec -it my-app /bin/bash

Ports

docker run -d -p 8080:80 nginx
# localhost:8080 -> container:80
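Whether a published port is actually accepting connections can be checked from the host with a small TCP probe. This helper is illustrative (not part of Docker); the host and port values are whatever you published with -p.

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections at host:port,
    e.g. the host side of `docker run -p 8080:80`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or unreachable.
        return False
```

A loop around port_open makes a simple readiness wait for scripts that start a container and then immediately hit it.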

Volumes

docker volume create my-data
docker run -d -v my-data:/app/data my-app
docker volume ls
docker volume inspect my-data
docker volume prune

Use named volumes for durable application data, bind mounts for local development, and tmpfs for memory-backed temporary data.

Networking

docker network create my-network
docker run -d --name api --network my-network my-api
docker run -d --name db --network my-network postgres:15

Custom bridge networks provide built-in DNS resolution between containers by name, which simplifies service-to-service communication.

Restarting and Rebuilding

docker build -t my-app:v2 .
docker stop my-app
docker rm my-app
docker run -d --name my-app -p 8080:8000 my-app:v2

6. Hands-On Project: Python Task Manager API

To tie concepts together, build a small Flask API and containerize it end-to-end.

Project structure:

task-manager/
  server.py
  requirements.txt
  Dockerfile
  docker-compose.yml
  .env
  .dockerignore
server.py:

import os
from datetime import datetime
from flask import Flask, request, jsonify

app = Flask(__name__)
tasks = []
task_id_counter = 1

@app.get('/health')
def health():
    return jsonify({
        'status': 'healthy',
        'timestamp': datetime.utcnow().isoformat(),
        'environment': os.getenv('APP_ENV', 'development')
    })

@app.get('/tasks')
def get_tasks():
    return jsonify({'tasks': tasks, 'count': len(tasks)})

@app.post('/tasks')
def create_task():
    global task_id_counter
    data = request.get_json() or {}
    if 'title' not in data:
        return jsonify({'error': 'Title is required'}), 400
    task = {
        'id': task_id_counter,
        'title': data['title'],
        'description': data.get('description', ''),
        'completed': False,
        'created_at': datetime.utcnow().isoformat()
    }
    tasks.append(task)
    task_id_counter += 1
    return jsonify(task), 201

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=int(os.getenv('PORT', '8000')))
requirements.txt:

flask==3.0.0
gunicorn==21.2.0
Dockerfile:

FROM python:3.11-slim
ENV PYTHONDONTWRITEBYTECODE=1 PYTHONUNBUFFERED=1
WORKDIR /app
# curl is not included in the slim base image but the HEALTHCHECK below needs it
RUN apt-get update && apt-get install -y --no-install-recommends curl && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
HEALTHCHECK --interval=30s --timeout=10s --retries=3 CMD curl -f http://localhost:8000/health || exit 1
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "server:app"]
Build and run:

docker build -t task-manager:v1 .
docker run -d --name task-api -p 8080:8000 -e APP_ENV=development --restart=unless-stopped task-manager:v1
docker logs -f task-api
curl http://localhost:8080/health

7. Docker Compose for Multi-Container Apps

Compose makes multi-service stacks declarative and repeatable. Instead of long docker run commands, define services in one YAML file.

services:
  api:
    build: .
    ports:
      - "8080:8000"
    environment:
      - APP_ENV=development
      - DATABASE_URL=postgresql://admin:secret@db:5432/tasks
    depends_on:
      - db
      - cache
    restart: unless-stopped

  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: tasks
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data

  cache:
    image: redis:7-alpine

volumes:
  pgdata:
docker compose up -d --build
docker compose ps
docker compose logs -f
docker compose down

8. Local Development Workflow

  • Install Docker Desktop and verify docker --version and docker compose version.
  • Use WSL 2 backend on Windows for near-native Linux behavior.
  • Use .env files or --env-file for environment variables.
  • Avoid baking secrets into Docker images.
docker build --no-cache -t my-app .
docker build --build-arg VERSION=2.0 -t my-app .
# multi-platform builds need --push (to a registry) or --load (single platform only)
docker buildx build --platform linux/amd64,linux/arm64 -t my-app .
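The .env convention mentioned above is simple enough to sketch: KEY=VALUE lines, with blanks and # comments ignored. This loader is a minimal illustration of what --env-file does for you, not a replacement for it (real parsers also handle quoting and export prefixes).

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE per line, '#' comments and blank
    lines skipped; variables already set in the environment win."""
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault keeps any value the real environment already provides
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

Keeping secrets in .env (and .env in .gitignore and .dockerignore) is what lets the image itself stay secret-free.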

9. Production Deployment on AWS

Option A: EC2 (Simple and Manual)

ssh -i my-key.pem ec2-user@<your-ec2-ip>
sudo yum update -y
sudo yum install -y docker
sudo systemctl enable --now docker
sudo usermod -aG docker ec2-user   # log out and back in for this to take effect
# Note: Amazon Linux's docker package does not bundle the compose plugin; install it separately
git clone https://github.com/youruser/task-manager.git
cd task-manager
docker compose up -d --build

Option B: ECS Fargate (Managed Containers)

aws ecr create-repository --repository-name task-manager
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com
docker tag task-manager:v1 <account-id>.dkr.ecr.us-east-1.amazonaws.com/task-manager:v1
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/task-manager:v1
{
  "family": "task-manager",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "<task-execution-role-arn>",
  "containerDefinitions": [
    {
      "name": "task-api",
      "image": "<account-id>.dkr.ecr.us-east-1.amazonaws.com/task-manager:v1",
      "portMappings": [{ "containerPort": 8000 }],
      "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost:8000/health || exit 1"]
      }
    }
  ]
}

Option C: EKS (Kubernetes)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: task-api
  template:
    metadata:
      labels:
        app: task-api
    spec:
      containers:
        - name: task-api
          image: <account-id>.dkr.ecr.us-east-1.amazonaws.com/task-manager:v1
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: task-api-service
spec:
  type: LoadBalancer
  selector:
    app: task-api
  ports:
    - port: 80
      targetPort: 8000

10. CI/CD with GitHub Actions

A robust CI/CD pipeline runs tests, builds and pushes images, and deploys through rolling updates so users never hit downtime.

name: Build and Deploy to ECS
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest tests/ -v

  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-east-1
          role-to-assume: <deploy-role-arn>
      - uses: aws-actions/amazon-ecr-login@v2
        id: ecr
      - run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/task-manager:$GITHUB_SHA .
          docker push ${{ steps.ecr.outputs.registry }}/task-manager:$GITHUB_SHA

  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-east-1
          role-to-assume: <deploy-role-arn>
      - uses: aws-actions/amazon-ecs-deploy-task-definition@v2
        with:
          task-definition: task-definition.json
          service: task-manager-service
          cluster: task-manager-cluster
          wait-for-service-stability: true
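The zero-downtime property comes from replacing instances one at a time and gating each step on a health check. This toy model captures the idea only; real orchestrators (ECS, Kubernetes) also handle surge capacity, connection draining, and automatic rollback.

```python
def rolling_update(running, new_version, is_healthy):
    """Toy rolling deploy: replace one instance at a time, gating each
    replacement on a health check so capacity never drops to zero.
    Returns (final instance list, whether the rollout completed)."""
    result = list(running)
    for i in range(len(result)):
        if not is_healthy(new_version):
            # Halt the rollout; the remaining old instances keep serving.
            return result, False
        result[i] = new_version
    return result, True
```

For example, rolling_update(["v1", "v1", "v1"], "v2", check) walks the fleet to all-"v2" only while check keeps passing, and otherwise leaves the untouched "v1" instances in service.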

11. Security and Hardening

  • Run containers as non-root users.
  • Scan images for CVEs (for example, Trivy) as part of CI.
  • Do not hardcode secrets in Dockerfiles or source control.
  • Prefer read-only filesystems and drop unnecessary Linux capabilities.
  • Use multi-stage builds to keep production images minimal.
FROM python:3.11-slim
RUN groupadd -r appuser && useradd -r -g appuser appuser
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
USER appuser
CMD ["python", "server.py"]

12. Quick Command Reference

  • Build: docker build -t name .
  • Run: docker run -d --name n image
  • Logs: docker logs -f name
  • Exec shell: docker exec -it name bash
  • Compose up: docker compose up -d --build
  • Compose down: docker compose down
  • Cleanup: docker system prune -a

Conclusion

Containerization gives you reproducible environments, Docker gives you ergonomic tooling, and CI/CD turns deployments into a safe repeatable process. Start with one service, containerize it well, add Compose, then automate your release pipeline. Build, break, iterate, and your deployment confidence will grow fast.