Moving Personal Infrastructure to Containerised Setups

Over the past few months, I’ve been gradually migrating my personal infrastructure from traditional server setups to containerised deployments. This journey has involved everything from development tools to production services, and the transformation has been both challenging and rewarding.

The Motivation for Change

My previous setup was a patchwork of services running directly on various machines:

  • Development databases installed locally on my laptop
  • A personal cloud server running Ubuntu with services installed via package managers
  • Various Node.js applications managed with PM2
  • Static sites deployed manually via FTP or rsync

This approach had several pain points:

  • Environment drift: Subtle differences between development and production
  • Difficult maintenance: Updates often broke things in unexpected ways
  • Hard to reproduce: Setting up a new development environment took hours
  • Resource conflicts: Different services competing for the same ports or dependencies

The Container Strategy

I decided to containerise everything using Docker and Docker Compose, with the goal of making all services:

  • Portable: Run anywhere Docker is available
  • Reproducible: Identical environments across development and production
  • Isolated: Services can’t interfere with each other
  • Maintainable: Updates are predictable and reversible

Development Environment Transformation

Before: Manual Setup Hell

Setting up a new development machine previously required:

# Install Node.js
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install PostgreSQL
sudo apt install postgresql postgresql-contrib
sudo -u postgres createuser --interactive

# Install Redis
sudo apt install redis-server

# Configure each service
sudo nano /etc/postgresql/14/main/postgresql.conf
sudo systemctl restart postgresql

# Pray everything works together

After: One-Command Setup

Now, everything is defined in docker-compose.yml:

# docker-compose.dev.yml
version: "3.8"

services:
  postgres:
    image: postgres:14
    environment:
      POSTGRES_DB: devdb
      POSTGRES_USER: devuser
      POSTGRES_PASSWORD: devpass
    ports:
      - "5432:5432"
    volumes:
      - postgres_dev:/var/lib/postgresql/data
      - ./db/init:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U devuser"]
      interval: 30s
      timeout: 10s
      retries: 3

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_dev:/data
    command: redis-server --appendonly yes

  mongodb:
    image: mongo:5
    environment:
      MONGO_INITDB_ROOT_USERNAME: devuser
      MONGO_INITDB_ROOT_PASSWORD: devpass
      MONGO_INITDB_DATABASE: devdb
    ports:
      - "27017:27017"
    volumes:
      - mongo_dev:/data/db

  elasticsearch:
    image: elasticsearch:8.3.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch_dev:/usr/share/elasticsearch/data

  adminer:
    image: adminer:4
    restart: always
    ports:
      - "8080:8080"
    depends_on:
      - postgres
      - mongodb

volumes:
  postgres_dev:
  redis_dev:
  mongo_dev:
  elasticsearch_dev:

Development setup now requires just:

git clone project-repo
cd project-repo
docker-compose -f docker-compose.dev.yml up
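A quick sanity check confirms everything came up healthy:

# Check service status and health
docker-compose -f docker-compose.dev.yml ps

# Tail the logs of a single service
docker-compose -f docker-compose.dev.yml logs -f postgres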

Personal Services Migration

Blog and Static Sites

Previously, I deployed static sites by manually uploading files. Now they’re containerised with Nginx:

# Dockerfile for blog
FROM node:16-alpine as builder

WORKDIR /app
COPY package*.json ./
RUN npm ci

COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80
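
The nginx.conf copied in the final stage isn't shown above; a minimal sketch might look like this (the caching rules and client-side routing fallback are assumptions, not my exact config):

# nginx.conf - minimal sketch for a single-page static site
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    # Long-lived caching for fingerprinted static assets (assumption)
    location ~* \.(js|css|png|svg|woff2)$ {
        expires 30d;
        add_header Cache-Control "public";
    }

    # Fall back to index.html for client-side routes
    location / {
        try_files $uri $uri/ /index.html;
    }
}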

The built image is then wired into the production stack behind Traefik:

# docker-compose.production.yml
services:
  blog:
    build: ./blog
    restart: unless-stopped
    ports:
      - "3000:80"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.blog.rule=Host(`blog.example.com`)"
      - "traefik.http.routers.blog.tls.certresolver=letsencrypt"

API Services

Node.js APIs that previously ran with PM2 now use standardised containers:

FROM node:16-alpine

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force

# Bundle app source
COPY . .

# Create non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nodejs -u 1001

USER nodejs

EXPOSE 3000

CMD ["node", "server.js"]

The container is then wired into the same production stack:

# docker-compose.production.yml (excerpt)
api:
  build: ./api
  restart: unless-stopped
  environment:
    - NODE_ENV=production
    - DATABASE_URL=postgresql://user:pass@postgres:5432/apidb
    - REDIS_URL=redis://redis:6379
  depends_on:
    postgres:
      condition: service_healthy
    redis:
      condition: service_started
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.api.rule=Host(`api.example.com`)"

Database Services

Persistent databases with proper backup strategies:

postgres:
  image: postgres:14
  restart: unless-stopped
  environment:
    POSTGRES_DB: productiondb
    POSTGRES_USER: produser
    POSTGRES_PASSWORD_FILE: /run/secrets/postgres_password
  volumes:
    - postgres_data:/var/lib/postgresql/data
    - ./backups:/backups
  secrets:
    - postgres_password
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U produser"]
    interval: 30s
    timeout: 10s
    retries: 3

# Automated backup container
postgres_backup:
  image: postgres:14
  restart: unless-stopped
  volumes:
    - ./backups:/backups
  secrets:
    - postgres_password
  command: |
    sh -c '
    # pg_dump reads PGPASSWORD, not *_FILE variables, so load the secret here
    export PGPASSWORD="$$(cat /run/secrets/postgres_password)"
    while true; do
      pg_dump -h postgres -U produser -d productiondb > /backups/backup_$$(date +%Y%m%d_%H%M%S).sql
      find /backups -name "*.sql" -mtime +7 -delete
      sleep 86400
    done'
  depends_on:
    postgres:
      condition: service_healthy
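
A backup strategy is only as good as its restore path. Restoring a dump into the running container is a one-liner (sketch; substitute the actual dump filename):

# Restore a dump into the running postgres container
docker-compose exec -T postgres psql -U produser -d productiondb < ./backups/backup_YYYYMMDD_HHMMSS.sql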

Reverse Proxy and SSL

Traefik handles routing, SSL certificates, and load balancing:

# docker-compose.infrastructure.yml
traefik:
  image: traefik:v2.8
  restart: unless-stopped
  command:
    - --api.dashboard=true
    - --providers.docker=true
    - --providers.docker.exposedbydefault=false
    - --entrypoints.web.address=:80
    - --entrypoints.websecure.address=:443
    - --certificatesresolvers.letsencrypt.acme.tlschallenge=true
    - --certificatesresolvers.letsencrypt.acme.email=admin@example.com
    - --certificatesresolvers.letsencrypt.acme.storage=/acme.json
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - ./traefik/acme.json:/acme.json
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.traefik.rule=Host(`traefik.example.com`)"
    - "traefik.http.routers.traefik.tls.certresolver=letsencrypt"
    - "traefik.http.routers.traefik.service=api@internal"
    - "traefik.http.routers.traefik.middlewares=auth"
    - "traefik.http.middlewares.auth.basicauth.users=admin:$$2y$$10$$..."

Monitoring and Observability

The monitoring stack runs in containers as well, with Prometheus and Grafana for metrics and Loki with Promtail for logs:

# monitoring/docker-compose.yml
prometheus:
  image: prom/prometheus:latest
  restart: unless-stopped
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
    - prometheus_data:/prometheus
  command:
    - "--config.file=/etc/prometheus/prometheus.yml"
    - "--storage.tsdb.path=/prometheus"
    - "--web.console.libraries=/usr/share/prometheus/console_libraries"
    - "--web.console.templates=/usr/share/prometheus/consoles"
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.prometheus.rule=Host(`prometheus.example.com`)"

grafana:
  image: grafana/grafana:latest
  restart: unless-stopped
  environment:
    - GF_SECURITY_ADMIN_PASSWORD_FILE=/run/secrets/grafana_password
  volumes:
    - grafana_data:/var/lib/grafana
  secrets:
    - grafana_password
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.grafana.rule=Host(`grafana.example.com`)"

node_exporter:
  image: prom/node-exporter:latest
  restart: unless-stopped
  volumes:
    - /proc:/host/proc:ro
    - /sys:/host/sys:ro
    - /:/rootfs:ro
  command:
    - "--path.procfs=/host/proc"
    - "--path.sysfs=/host/sys"
    - "--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)"

loki:
  image: grafana/loki:latest
  restart: unless-stopped
  volumes:
    - ./loki-config.yml:/etc/loki/local-config.yaml
    - loki_data:/tmp/loki
  command: -config.file=/etc/loki/local-config.yaml

promtail:
  image: grafana/promtail:latest
  restart: unless-stopped
  volumes:
    - ./promtail-config.yml:/etc/promtail/config.yml
    - /var/log:/var/log:ro
    - /var/run/docker.sock:/var/run/docker.sock
  command: -config.file=/etc/promtail/config.yml
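
Two config files are referenced above but not shown. First, a minimal prometheus.yml sketch that scrapes the node_exporter service (the job names are my assumptions):

# prometheus.yml - minimal scrape configuration (sketch)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["node_exporter:9100"]  # resolved via the Compose network
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

And a promtail-config.yml sketch that ships /var/log into Loki (the labels are likewise assumptions):

# promtail-config.yml - minimal sketch
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push  # Loki service name and default port

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*log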

Deployment and Orchestration

Local Development

# Start development environment
docker-compose -f docker-compose.dev.yml up

# Run tests in isolated environment
docker-compose -f docker-compose.test.yml up --abort-on-container-exit

# Clean slate for testing
docker-compose -f docker-compose.test.yml down -v
docker-compose -f docker-compose.test.yml up --build

Production Deployment

# Deploy to production
docker-compose -f docker-compose.yml -f docker-compose.production.yml up -d

# Rolling updates
docker-compose -f docker-compose.yml -f docker-compose.production.yml up -d --no-deps api

# View logs
docker-compose logs -f api

Backup and Recovery

#!/bin/bash
# backup.sh - Automated backup script

# Backup volumes
docker run --rm -v postgres_data:/data -v $(pwd)/backups:/backup alpine tar czf /backup/postgres_$(date +%Y%m%d).tar.gz -C /data .

# Backup configurations
mkdir -p ./backups/config_$(date +%Y%m%d)
cp -r docker-compose.yml .env traefik/ ./backups/config_$(date +%Y%m%d)/

# Upload to remote storage
rclone copy ./backups/ remote:backups/
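
Restoring a volume backup is the mirror image of the tar command above (a sketch; stop the service first and substitute the archive name):

# Restore a volume archive into the postgres_data volume
docker-compose stop postgres
docker run --rm -v postgres_data:/data -v $(pwd)/backups:/backup alpine \
  sh -c "cd /data && tar xzf /backup/postgres_YYYYMMDD.tar.gz"
docker-compose start postgres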

Benefits Realised

Consistency Across Environments

Development, staging, and production environments are now identical. Environment-specific bugs have almost disappeared.

Easy Rollbacks

# Something went wrong? Roll back instantly
docker-compose down
git checkout previous-working-commit
docker-compose up -d
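
Tagging images per release makes rollbacks cheaper still, since nothing needs rebuilding (a sketch; the date-based tag scheme is my own convention, not prescribed):

# Build and tag each release explicitly
docker build -t myapp:2024-01-15 ./api

# Roll back by retagging the previous known-good image
docker tag myapp:2024-01-08 myapp:latest
docker-compose up -d --no-deps api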

Resource Efficiency

Containers use fewer resources than full VMs:

  • Memory: Shared kernel reduces overhead
  • Disk: Layered file system eliminates duplication
  • CPU: Near-native performance with minimal overhead

Simplified Maintenance

Updates are now predictable and safe:

# Test update in development
docker-compose -f docker-compose.dev.yml pull
docker-compose -f docker-compose.dev.yml up -d

# If everything works, update production
docker-compose -f docker-compose.production.yml pull
docker-compose -f docker-compose.production.yml up -d

Challenges and Solutions

Storage Management

Docker images and volumes accumulate quickly and can quietly consume disk space:

# Regular cleanup (note: -a removes all unused images, not just dangling ones)
docker system prune -a -f
docker volume prune -f

# Monitor disk usage
docker system df

Networking Complexity

Container networking required learning new concepts:

# Custom networks for service isolation
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

services:
  api:
    networks:
      - frontend
      - backend

  database:
    networks:
      - backend # Not accessible from frontend
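
Verifying the isolation is straightforward with the network commands (the prefix on the network name is whatever Compose derives from the project directory):

# List the networks Compose created
docker network ls

# Confirm which containers are attached to the backend network
docker network inspect myproject_backend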

Secret Management

Handling sensitive configuration securely:

# Using Docker secrets
secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_key:
    external: true
    name: api_key_secret

services:
  api:
    secrets:
      - db_password
      - api_key
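
Inside the container each secret shows up as a file under /run/secrets/, so the application reads it from disk rather than from an environment variable (a sketch):

// Reading a Docker secret at startup (sketch; the path is the Compose default)
const fs = require("fs");

const dbPassword = fs
  .readFileSync("/run/secrets/db_password", "utf8")
  .trim();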

Performance Considerations

Resource Limits

Prevent containers from consuming all system resources:

api:
  image: myapp:latest
  deploy:
    resources:
      limits:
        cpus: "1.0"
        memory: 512M
      reservations:
        cpus: "0.25"
        memory: 256M
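
Worth knowing: deploy.resources is honoured by Docker Swarm and by recent Docker Compose releases; older docker-compose setups expressed the same limits with service-level keys (a sketch):

# Pre-Swarm equivalent for older docker-compose versions (sketch)
api:
  image: myapp:latest
  mem_limit: 512m
  cpus: 1.0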

Health Checks

Ensure services are actually ready:

api:
  healthcheck:
    # The image must include curl - node:16-alpine does not by default
    test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
    interval: 30s
    timeout: 10s
    retries: 3
    start_period: 40s

Future Improvements

Kubernetes Migration

For more complex scenarios, Kubernetes provides:

  • Advanced scheduling and scaling
  • Service mesh capabilities
  • More sophisticated monitoring
  • Multi-cluster deployments

GitOps Workflow

Implementing infrastructure as code with Git-based workflows:

  • All configuration in version control
  • Automated deployment pipelines
  • Rollback capabilities through Git history
  • Audit trail for all changes

Key Takeaways

The move to containerised infrastructure has been transformative:

  1. Reproducibility: Environments are identical and portable
  2. Maintainability: Updates are predictable and reversible
  3. Scalability: Easy to add new services or scale existing ones
  4. Monitoring: Centralised logging and metrics collection
  5. Security: Service isolation and secret management

The initial learning curve was steep, but the productivity gains and operational improvements have more than justified the effort. I can’t imagine going back to manual server management for personal infrastructure.

For anyone considering a similar migration, start small with development environments and gradually move production services as you become comfortable with the tools and patterns.


Have you containerised your personal infrastructure? What benefits and challenges have you encountered, and what tools have you found most helpful?