Docker Containers: Deployment & Troubleshooting

Build, deploy, debug, and manage Docker containers — from writing your first Dockerfile to running production workloads with Docker Compose.

What Is Docker?

Docker packages your application and all its dependencies into a lightweight, portable container that runs identically on any machine. Unlike virtual machines, containers share the host OS kernel, making them fast to start and efficient with resources.

Containers vs. Virtual Machines

Containers share the host kernel, start in seconds, use MBs of RAM, and are ideal for microservices and app isolation.
VMs run a full OS, take minutes to boot, use GBs of RAM, and are better for full OS-level isolation.
Docker containers are not a replacement for VMs — they serve different purposes. See our VPS & Dedicated Servers guide for VM-based hosting.
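The startup-speed difference is easy to verify yourself. A minimal experiment (assuming Docker is already installed; alpine is a small public image):

```shell
# Cache the image first so the timing excludes the network download
docker pull alpine

# Time a full container lifecycle: create, start, execute, and remove.
# On most machines this completes in well under a second.
time docker run --rm alpine echo "container started"
```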

Installing Docker

# Ubuntu/Debian
sudo apt update
sudo apt install ca-certificates curl gnupg -y
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) \
  signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io \
  docker-buildx-plugin docker-compose-plugin -y

# Add your user to the docker group (avoids sudo; note that docker
# group membership is effectively root-equivalent on the host)
sudo usermod -aG docker $USER
newgrp docker  # applies to the current shell; log out and back in for others

# Verify installation
docker --version
docker run hello-world

# CentOS/AlmaLinux
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io \
  docker-buildx-plugin docker-compose-plugin -y
sudo systemctl enable --now docker

Writing a Dockerfile

A Dockerfile is a recipe that tells Docker how to build your application image.

Node.js Application

# Dockerfile
FROM node:20-alpine

WORKDIR /app

# Install dependencies first (layer caching)
COPY package*.json ./
RUN npm ci --omit=dev

# Copy application code
COPY . .

# Don't run as root
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

EXPOSE 3000

CMD ["node", "server.js"]

PHP Application

# Dockerfile
FROM php:8.2-apache

# Install PHP extensions
RUN docker-php-ext-install pdo pdo_mysql opcache

# Enable Apache modules
RUN a2enmod rewrite headers

# Copy application
COPY . /var/www/html/

# Set permissions
RUN chown -R www-data:www-data /var/www/html

EXPOSE 80

Python/Flask Application

# Dockerfile
FROM python:3.12-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

RUN useradd -r appuser && chown -R appuser:appuser /app
USER appuser

EXPOSE 5000

CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]

Dockerfile Best Practices

Use alpine/slim base images to reduce size. Copy dependency files first (package.json, requirements.txt) before copying code — this leverages Docker's layer cache so dependencies aren't reinstalled on every code change. Never run as root in production containers. Use .dockerignore to exclude node_modules, .git, and other unnecessary files.
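You can observe the layer cache directly by rebuilding after a code-only change (a quick experiment against the Node.js Dockerfile above; file names are illustrative):

```shell
# First build: every instruction runs, including the slow npm ci step
docker build -t myapp:latest .

# Change only application code, then rebuild
touch server.js
docker build -t myapp:latest .
# The COPY package*.json and RUN npm ci layers now report CACHED;
# only COPY . . and the layers after it are re-executed.

# Changing package.json invalidates the dependency layer,
# so the next build runs npm ci again.
```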

The .dockerignore File

# .dockerignore
node_modules
npm-debug.log
.git
.gitignore
.env
.env.*
Dockerfile
docker-compose*.yml
*.md
.vscode
.idea
__pycache__
*.pyc

Docker Commands Cheat Sheet

Images

# Build an image from Dockerfile
docker build -t myapp:latest .

# Build with a specific Dockerfile
docker build -f Dockerfile.prod -t myapp:prod .

# List images
docker images

# Remove an image
docker rmi myapp:latest

# Remove dangling (unused) images
docker image prune

# Remove ALL unused images
docker image prune -a

Containers

# Run a container
docker run -d --name myapp -p 8080:3000 myapp:latest

# Run with environment variables
docker run -d --name myapp \
  -e NODE_ENV=production \
  -e DB_HOST=db \
  -p 8080:3000 myapp:latest

# Run with a volume mount
docker run -d --name myapp \
  -v $(pwd)/data:/app/data \
  -p 8080:3000 myapp:latest

# List running containers
docker ps

# List ALL containers (including stopped)
docker ps -a

# Stop a container
docker stop myapp

# Start a stopped container
docker start myapp

# Restart a container
docker restart myapp

# Remove a container
docker rm myapp

# Remove a running container (force)
docker rm -f myapp

Executing Commands in Containers

# Open a shell inside a running container
docker exec -it myapp /bin/sh      # Alpine
docker exec -it myapp /bin/bash    # Debian/Ubuntu

# Run a one-off command
docker exec myapp ls -la /app

# Run as root (even if container runs as non-root)
docker exec -u root -it myapp /bin/sh

Docker Compose

Docker Compose lets you define and run multi-container applications with a single YAML file. (Invoke it as docker compose; the standalone docker-compose binary is the legacy v1 tool.)

Full Stack Example (Node + MySQL + Nginx)

# docker-compose.yml
services:
  app:
    build: .
    container_name: myapp
    restart: unless-stopped
    environment:
      - NODE_ENV=production
      - DB_HOST=db
      - DB_USER=appuser
      - DB_PASS=Str0ng!P@ss
      - DB_NAME=myapp_production
    depends_on:
      db:
        condition: service_healthy
    networks:
      - backend

  db:
    image: mysql:8.0
    container_name: myapp-db
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: R00tP@ss!
      MYSQL_DATABASE: myapp_production
      MYSQL_USER: appuser
      MYSQL_PASSWORD: Str0ng!P@ss
    volumes:
      - mysql_data:/var/lib/mysql
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend

  nginx:
    image: nginx:alpine
    container_name: myapp-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      - app
    networks:
      - backend

volumes:
  mysql_data:

networks:
  backend:

Compose Commands

# Start all services (detached)
docker compose up -d

# Start and rebuild images
docker compose up -d --build

# Stop all services
docker compose down

# Stop and remove volumes (WARNING: deletes data)
docker compose down -v

# View logs for all services
docker compose logs

# Follow logs for a specific service
docker compose logs -f app

# Scale a service
docker compose up -d --scale app=3

# Restart a single service
docker compose restart app

# Run a one-off command in a service
docker compose exec app npm run migrate

Docker Networking

# List networks
docker network ls

# Create a custom network
docker network create mynetwork

# Run container on a specific network
docker run -d --name myapp --network mynetwork myapp:latest

# Connect a running container to a network
docker network connect mynetwork myapp

# Inspect a network (see connected containers)
docker network inspect mynetwork

Container DNS

Containers on the same Docker network can reach each other by container name. In the compose example above, the app container connects to MySQL using DB_HOST=db — Docker resolves "db" to the database container's IP automatically. Never hardcode container IPs.
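You can check this resolution from inside a running container (assuming the compose stack above is up; getent ships with busybox in Alpine-based images):

```shell
# Resolve the "db" service name from inside the app container
docker compose exec app getent hosts db

# The compose network's real name carries the project prefix
# (e.g. <project>_backend); list networks to find it:
docker network ls
```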

Docker Volumes & Data Persistence

# Named volumes (managed by Docker, persist across restarts)
docker volume create myapp-data
docker run -d -v myapp-data:/app/data myapp:latest

# Bind mounts (map host directory into container)
docker run -d -v /host/path:/container/path myapp:latest

# List volumes
docker volume ls

# Inspect a volume (see where data is stored on host)
docker volume inspect myapp-data

# Remove unused volumes
docker volume prune

# Backup a volume
docker run --rm -v myapp-data:/data -v $(pwd):/backup \
  alpine tar czf /backup/myapp-data-backup.tar.gz -C /data .
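Restoring is the same pattern in reverse: mount the (new or existing) volume and extract the archive into it. A sketch, assuming the backup file created above:

```shell
# Restore a volume backup (volume create is a no-op if it already exists)
docker volume create myapp-data
docker run --rm -v myapp-data:/data -v $(pwd):/backup \
  alpine tar xzf /backup/myapp-data-backup.tar.gz -C /data
```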

Troubleshooting Docker

Container Won't Start

# Check container logs (most common fix)
docker logs myapp
docker logs --tail 50 myapp

# Check exit code
docker inspect myapp --format='{{.State.ExitCode}}'
# Exit 0 = normal, 1 = app error, 137 = OOM killed, 139 = segfault

# Check if the image builds correctly
docker build --no-cache -t myapp:debug .

# Run interactively to debug
docker run -it --entrypoint /bin/sh myapp:latest
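Exit code 137 means the process received SIGKILL, which may or may not be the kernel OOM killer (docker stop also sends SIGKILL if the app ignores SIGTERM). The OOMKilled flag disambiguates:

```shell
# True only if the kernel OOM killer terminated the container
docker inspect myapp --format='{{.State.OOMKilled}}'

# Exit code and OOM status together
docker inspect myapp \
  --format='exit={{.State.ExitCode}} oom={{.State.OOMKilled}}'
```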

Container Runs But App Doesn't Work

# Shell into the container
docker exec -it myapp /bin/sh

# Check if the process is running
ps aux

# Check if the port is listening inside the container
netstat -tlnp  # or: ss -tlnp

# Test connectivity from inside
wget -qO- http://localhost:3000/health

# Check environment variables
docker exec myapp env

# Check file permissions
docker exec myapp ls -la /app

Port Conflicts

# "port is already allocated" error
# Find what's using the port on the host
sudo lsof -i :8080
sudo ss -tlnp | grep 8080

# Kill the process or use a different port
docker run -d -p 8081:3000 myapp:latest

Out of Memory (OOM)

# Container killed with exit code 137
# Set memory limits
docker run -d --memory=512m --memory-swap=1g myapp:latest

# In docker-compose.yml
services:
  app:
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M

# Check container resource usage
docker stats
docker stats myapp

Disk Space Issues

# Docker can consume a LOT of disk space over time
# Check Docker disk usage
docker system df

# Nuclear option: remove ALL unused data
docker system prune -a --volumes

# Safer: remove just stopped containers and dangling images
docker container prune
docker image prune

# Careful: this deletes data in any volume not currently
# attached to a container
docker volume prune

# Check container size
docker ps -s

Build Failures

# "COPY failed: file not found" — check .dockerignore
# Make sure you're not ignoring files the build needs
cat .dockerignore

# "npm ERR! network" — DNS issue in build
# Add DNS to Docker daemon config (/etc/docker/daemon.json)
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
sudo systemctl restart docker

# Layer cache issues — force a clean build
docker build --no-cache -t myapp:latest .

# Multi-stage build to reduce image size
# Dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package*.json ./
USER node
CMD ["node", "dist/server.js"]

Docker in Production

Health Checks

# In Dockerfile
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1

# In docker-compose.yml
services:
  app:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]  # curl must exist in the image
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

# Check health status
docker inspect --format='{{.State.Health.Status}}' myapp

Logging

# Configure log rotation (prevent disk fill)
# /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

# Per-container log config in compose
services:
  app:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

# View logs with timestamps
docker logs --timestamps myapp

# Follow logs since a specific time
docker logs --since 2h myapp

Restart Policies

# Always restart (even after reboot)
docker run -d --restart always myapp:latest

# Restart on failure only (max 5 attempts)
docker run -d --restart on-failure:5 myapp:latest

# In docker-compose.yml
services:
  app:
    restart: unless-stopped

Docker with Nginx Reverse Proxy

# nginx.conf for proxying to a Docker container
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

Docker Security

  • Never run as root — use USER in Dockerfile
  • Use official base images — avoid random Docker Hub images
  • Don't store secrets in images — use environment variables or Docker secrets
  • Scan images for vulnerabilities — docker scout quickview myapp:latest
  • Pin image versions — use node:20.11-alpine not node:latest
  • Use read-only filesystems where possible — docker run --read-only
  • Limit capabilities — docker run --cap-drop ALL --cap-add NET_BIND_SERVICE
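Several of these flags compose naturally. A hardened run might look like this (a sketch, not a drop-in command; paths, ports, and the image tag are illustrative):

```shell
# Hardened container: read-only root filesystem, all capabilities dropped
# except binding privileged ports, a writable tmpfs for scratch files,
# a pinned image tag, and a memory cap
docker run -d --name myapp \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --memory=512m \
  -p 8080:3000 \
  myapp:1.2.3
```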

Pro Tip

Use docker compose watch (Compose v2.22+) for development — it automatically rebuilds and restarts containers when your source files change, similar to nodemon but for any containerized app. For production, always tag images with a version (e.g., myapp:1.2.3) rather than relying on :latest.
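A minimal watch configuration lives under the service's develop key (paths here are illustrative; sync copies changed files into the running container, rebuild rebuilds the image):

```yaml
# docker-compose.yml (excerpt, Compose v2.22+)
services:
  app:
    build: .
    develop:
      watch:
        - action: sync        # copy changed files into the container
          path: ./src
          target: /app/src
        - action: rebuild     # rebuild the image when dependencies change
          path: package.json
```

Start it with docker compose watch (or docker compose up --watch).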