Docker in Production on a VPS: What Breaks and How to Fix It

11 min read · Matthieu · Production · Security · VPS · Docker Compose · Docker

Docker works on your laptop. On a public VPS, it bypasses firewalls, fills disks with logs, runs everything as root, and has no update strategy. Here are the 8 problems you need to solve.

You installed Docker on your VPS. You ran docker compose up. Your app is live. Done, right?

Not quite. On your laptop, nobody is scanning your ports, disk space is plentiful, and security barely matters. On a VPS facing the internet, Docker's defaults actively work against you.

This page covers the 8 problems you will hit and links to a dedicated guide for each one. Scan it, find what applies to your server, follow the deep dive.

Prerequisites

This guide assumes you already know Docker basics.

This article targets Debian 12 and Ubuntu 24.04 with Docker Engine 29.x and Compose v2.

What breaks when you run Docker on a public VPS?

Eight things break when you move Docker from development to a public VPS: your firewall stops working, containers run as root with full host access, logs fill your disk with no rotation, container networking conflicts with host networking, services have no resource limits or health checks, volumes have no backup strategy, ports are exposed without TLS, and container images go stale with no update plan. These are all Docker defaults. Every one of them will bite you in production.

Quick overview before the deep dives:

#   Problem                  Diagnostic command                               Risk
1   Firewall bypass          sudo iptables -L DOCKER-USER -n                  Critical
2   Root containers          docker info --format '{{.SecurityOptions}}'      Critical
3   Disk full from logs      du -sh /var/lib/docker/containers/*/*-json.log   High
4   Network conflicts        docker network ls                                Medium
5   No resource limits       docker stats --no-stream                         High
6   No volume backups        docker volume ls                                 High
7   No reverse proxy / TLS   ss -tlnp | grep -E ':80|:443'                    Critical
8   Stale images             docker compose images                            Medium

Run these commands on your server right now. If any output surprises you, read the corresponding section below.

Does Docker bypass your firewall?

Yes. Docker manipulates iptables directly, inserting rules in the nat and filter tables that are evaluated before UFW or firewalld ever see the traffic. When you publish a port with -p 8080:80, that port is open to the entire internet, even if your UFW rules say deny incoming.

Check if this affects you:

sudo iptables -L DOCKER-USER -n -v

If the output shows an empty chain or only an unconditional RETURN rule with no filtering, every published port is wide open.

The root cause: packets destined for Docker containers go through the FORWARD chain and the DOCKER chain. UFW operates on the INPUT chain. The two never meet.

The simplest immediate fix is to bind published ports to 127.0.0.1 only:

ports:
  - "127.0.0.1:8080:80"

This makes the port accessible only from the host itself. Pair it with a reverse proxy to handle external traffic.
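For ports that must stay published to the internet, filtering belongs in the DOCKER-USER chain, which Docker evaluates before its own rules. Here is a sketch that prints the two rules rather than applying them, so you can review before running them with sudo; eth0 and 203.0.113.0/24 are placeholders for your public interface and an allowed source range:

```shell
# Print DOCKER-USER rules that drop traffic to published ports except
# from one allowed source range. Review the output, then run it as root.
docker_user_rules() {
  iface=$1     # public network interface, e.g. eth0
  allowed=$2   # CIDR range permitted to reach published ports
  # -I inserts each rule at the top of the chain, so emit the DROP first:
  # after both inserts, the ACCEPT for reply traffic is evaluated before the DROP.
  echo "iptables -I DOCKER-USER -i $iface ! -s $allowed -j DROP"
  echo "iptables -I DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT"
}
docker_user_rules eth0 203.0.113.0/24
```

Rules inserted into DOCKER-USER survive Docker restarts better than edits to the DOCKER chain itself, which Docker rewrites.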

For a complete walkthrough of the DOCKER-USER chain fix, UFW integration, and nftables configuration: Fix Docker Bypassing UFW: 4 Tested Solutions for Your VPS

Is running Docker as root dangerous on a VPS?

By default, the Docker daemon runs as root, and containers run as root inside their namespaces. If an attacker escapes a container, they land on the host as root. On a public VPS, this is a direct path to full server compromise.

Check your current setup:

docker info --format '{{.SecurityOptions}}'

Look for name=rootless in the output. If it is not there, your daemon runs as root.

Also check if your containers run as root internally:

docker ps -q | xargs -I {} docker exec {} id

If you see uid=0(root) for most containers, they are running as root inside the container. An image that sets USER appuser in its Dockerfile will show a non-root uid instead.

You have three layers of defense:

  1. Rootless mode runs the entire Docker daemon under a non-root user. Even a container escape lands as an unprivileged user on the host.
  2. Seccomp profiles restrict which syscalls containers can use. Docker ships a default profile that blocks ~44 dangerous syscalls, but you can tighten it further.
  3. AppArmor / SELinux adds mandatory access control on top of everything else.
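Independent of rootless mode, Compose can drop root inside a container and block privilege escalation. A minimal sketch, assuming uid/gid 1000 owns the files the app needs (adjust to the user your image or volumes expect):

```yaml
services:
  app:
    user: "1000:1000"            # run the process as an unprivileged uid:gid
    security_opt:
      - no-new-privileges:true   # setuid binaries cannot raise privileges
```

This only helps with images whose process can actually run unprivileged; images that bake in a USER directive already do the equivalent.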

Full setup for rootless Docker, custom seccomp profiles, and no-new-privileges: Docker Security Hardening: Rootless Mode, Seccomp, AppArmor on a VPS

Why do Docker containers fill your disk?

Docker's default log driver is json-file with no size limit and no rotation. Every line your application writes to stdout gets stored in /var/lib/docker/containers/<id>/<id>-json.log. A busy web app can generate gigabytes of logs in days.

Check how much space logs are using right now:

sudo du -sh /var/lib/docker/containers/*/*-json.log 2>/dev/null | sort -rh | head -5

On a VPS that has been running for a few weeks without log rotation, you might see output like:

4.2G    /var/lib/docker/containers/a1b2c3.../a1b2c3...-json.log
1.8G    /var/lib/docker/containers/d4e5f6.../d4e5f6...-json.log
256M    /var/lib/docker/containers/g7h8i9.../g7h8i9...-json.log

Logs are not the only disk consumer. Check overall Docker disk usage:

docker system df

Typical output on a server running 5 services:

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          12        5         4.2GB     2.8GB (66%)
Containers      8         5         52MB      12MB (23%)
Local Volumes   6         4         1.1GB     245MB (22%)
Build Cache     18        0         890MB     890MB (100%)

That build cache is 100% reclaimable. Those 7 unused images take 2.8 GB. On a 40 GB VPS disk, this adds up fast.

The fix is two-part: configure log rotation in /etc/docker/daemon.json and set up periodic cleanup of unused images and build cache.
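The log-rotation half looks like this in /etc/docker/daemon.json (restart the Docker daemon afterwards; the 10m/3 values are a reasonable starting point, not a rule, and they only apply to containers created after the change):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

For the cleanup half, docker system prune and docker image prune on a schedule reclaim unused images and build cache.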

Complete log rotation configuration, disk monitoring, and automated cleanup:

How does Docker networking work on a single VPS?

Docker creates its own bridge network (172.17.0.0/16 by default) and manages container IPs internally. On a VPS, this can conflict with your host network, your VPN subnets, or other services.

See what networks Docker has created:

docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
a1b2c3d4e5f6   bridge    bridge    local
f6e5d4c3b2a1   host      host      local
9i8h7g6f5e4d   none      null      local

Every Compose project creates its own default network the first time you run docker compose up. After a few projects, you might have a dozen networks with overlapping subnets.

Key decisions for VPS networking:

  • Bridge mode (default): containers get internal IPs, ports are published via iptables. Works for most setups. Adds a NAT layer.
  • Host mode: containers share the host's network stack directly. No NAT overhead, but no port isolation. Useful for performance-sensitive services.
  • Custom bridge networks: containers on the same custom network can reach each other by container name. This is what Compose creates automatically.

The important thing on a VPS: define your subnets explicitly in docker-compose.yml instead of letting Docker pick. This avoids conflicts with your hosting provider's internal networks.
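A sketch of an explicit subnet in Compose; the network name appnet, the image tag, and the 172.30.0.0/24 range are placeholders, so pick a range that overlaps nothing your provider uses:

```yaml
networks:
  appnet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.30.0.0/24   # chosen explicitly instead of letting Docker pick

services:
  app:
    image: myapp:1.2.3   # placeholder
    networks:
      - appnet
```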

Deep dive on bridge vs host mode, custom subnets, and inter-container DNS:

What happens without resource limits and health checks?

Without memory limits, a single container with a memory leak will consume all available RAM, trigger the Linux OOM killer, and take down unrelated containers or even the SSH daemon. Without health checks, Docker has no way to know a container is broken. It stays "running" while serving errors.

Check if any of your containers have limits set:

docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}\t{{.CPUPerc}}"

If the MEM % column shows values but you never set limits, those percentages are against total host RAM. A container showing 45% is using 45% of your entire VPS memory, and nothing stops it from taking more.

The minimum production configuration in your docker-compose.yml:

services:
  app:
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1.0"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
    restart: unless-stopped

The restart: unless-stopped policy restarts crashed containers automatically but respects manual stops. The healthcheck lets Docker mark a container as unhealthy after the retry count is exhausted, with two caveats: the test command (curl here) must exist inside the image, and the Docker Engine by itself does not restart unhealthy containers, because restart policies only react to exits. Acting on health status takes an external tool or an orchestrator.

Full guide on Compose resource limits, healthcheck patterns, and restart policies:

How do you back up Docker volumes?

Docker volumes persist data outside of containers. If a volume is deleted or corrupted, your database, uploaded files, or configuration is gone. Docker has no built-in backup mechanism for volumes.

List your volumes and their sizes:

docker system df -v | grep -A 100 "Local Volumes space usage"

Or check individual volume mount points:

docker volume ls -q | xargs -I {} docker volume inspect {} --format '{{.Name}}: {{.Mountpoint}}'

The mount points are under /var/lib/docker/volumes/ by default. You can back them up with standard Linux tools, but you need to stop or pause the container first to avoid inconsistent snapshots. For databases, a filesystem copy of a running database is not a reliable backup. You need a dump first.
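A minimal sketch of the filesystem approach as a shell function; the source and destination paths are whatever volume mountpoint and backup directory you use, and the container using the volume should be stopped first:

```shell
# Archive a directory (e.g. a volume mountpoint under
# /var/lib/docker/volumes/<name>/_data) into a dated tarball.
# Stop the container first so the snapshot is consistent.
backup_dir() {
  src=$1
  dest=$2
  stamp=$(date +%Y%m%d-%H%M%S)
  mkdir -p "$dest"
  tar -C "$src" -czf "$dest/backup-$stamp.tar.gz" .
  echo "$dest/backup-$stamp.tar.gz"   # print the archive path for scripting
}
```

For a database, run the engine's own dump tool (pg_dump, mysqldump) and archive the dump instead of the raw data directory.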

Backup strategies, restore procedures, and automated scheduling:

How do you expose Docker services with TLS?

Publishing ports directly with -p 443:443 on each container does not scale past one service. You need a reverse proxy that terminates TLS, routes traffic to the right container, and handles certificate renewal.

Check what is currently listening on your VPS:

ss -tlnp | grep -E ':80|:443'

If you see your application containers binding directly to 80 or 443, they are exposed without a proxy layer. This means each service needs its own certificate management, and only one service can serve ports 80 and 443, so you cannot host multiple domains behind standard HTTPS on one VPS.

Three reverse proxy options dominate the Docker ecosystem:

Proxy                 Auto-discovery        Auto TLS                 Config style
Traefik               Yes (Docker labels)   Let's Encrypt built-in   Labels + YAML
Caddy                 Via plugin            Automatic by default     Caddyfile
Nginx Proxy Manager   Docker socket         Let's Encrypt UI         Web GUI

Traefik is the most common choice for Docker-native setups because it reads container labels and generates routing rules automatically. No config files to update when you add a service.
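To illustrate the label-driven approach, here is a sketch of Traefik routing labels on a service; app.example.com is a placeholder, and the certificate resolver named le is assumed to already exist in Traefik's static configuration:

```yaml
services:
  app:
    # No ports: section needed; Traefik reaches the container
    # over a shared Docker network.
    image: myapp:1.2.3   # placeholder
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`app.example.com`)"
      - "traefik.http.routers.app.entrypoints=websecure"
      - "traefik.http.routers.app.tls.certresolver=le"   # assumes a resolver named `le`
```

Adding a new service means adding labels like these, not editing the proxy's configuration.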

Comparison, setup guides, and TLS configuration for each option:

What is the update strategy for Docker containers on a VPS?

Docker images do not update themselves. If you run postgres:16 and a security patch ships in postgres:16.4, your container keeps running the old image until you explicitly pull and recreate it.

Check what images your services are running:

docker compose images

Compare the image digests against the latest available:

docker compose pull --dry-run 2>&1

If any image shows as "pulled" in the dry-run output, your running version is outdated.

The key decisions:

  • Pin image tags: never use :latest in production. Use specific version tags like postgres:16.4 or even SHA digests.
  • Scheduled pulls: run docker compose pull && docker compose up -d on a schedule, but test updates in staging first.
  • Zero-downtime restarts: use health checks and docker compose up -d --no-deps <service> to update one service at a time.
  • Image scanning: tools like Trivy scan your images for known CVEs before deployment.
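The scheduled-pull decision can be as small as one cron entry. A sketch assuming the Compose project lives in /srv/app (a placeholder path), running early Monday morning:

```
# /etc/cron.d/compose-update -- pull, recreate changed services, prune old images
30 4 * * 1  root  cd /srv/app && docker compose pull && docker compose up -d && docker image prune -f
```

Pair this with pinned tags so a scheduled pull only picks up versions you have deliberately bumped.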

Update strategies, image pinning patterns, and zero-downtime deployment:

Do you need Kubernetes to run Docker in production?

No. Docker Compose is a production-ready deployment tool for single-server workloads. Kubernetes solves multi-node orchestration, automatic scaling, and self-healing across a cluster. If your application runs on one VPS, Compose gives you everything you need without the operational overhead of Kubernetes.

Where Compose works:

  • Single VPS running 3-15 containers
  • Static or predictable traffic patterns
  • Small team without dedicated platform engineers
  • Services that can tolerate seconds of downtime during updates

Where you outgrow Compose:

  • Multiple servers needed for high availability
  • Auto-scaling based on traffic spikes
  • Complex service meshes with hundreds of microservices
  • Zero-downtime requirements with blue-green deployments across nodes

The lightweight middle ground is K3s, a stripped-down Kubernetes that runs on a single VPS with about 512 MB of RAM overhead. But for most side projects, SaaS MVPs, and small production apps, Compose is the right tool.

How much RAM does a VPS need for Docker in production?

A VPS running Docker in production needs at minimum 4 GB of RAM for 3-5 containers. The Docker daemon itself uses approximately 100-200 MB. Each container adds its own memory footprint on top of that. Without memory limits set, a single misbehaving container can consume everything.

VPS size    Containers        Good for
4 GB RAM    3-5 lightweight   Blog, API, database, Redis
8 GB RAM    5-10 mixed        SaaS MVP, multiple services, monitoring stack
16 GB RAM   10-20             Multiple projects, CI runners, heavier databases
32 GB RAM   20+               AI inference, large databases, build servers

Storage matters as much as RAM. Docker images, volumes, logs, and build cache accumulate. Plan for at least 40 GB of NVMe storage, and set up the disk monitoring described above from day one.

CPU is rarely the bottleneck for containerized web services. 4 vCPUs handle most workloads. Exceptions: video transcoding, AI inference, and build servers.

For a VPS with NVMe storage sized for Docker production workloads, see Virtua Cloud VPS plans.

Production checklist

Before going live with Docker on a VPS, verify every item:

  • Firewall rules block all Docker-published ports except through reverse proxy
  • Containers run as non-root users where possible
  • Docker daemon runs in rootless mode or containers use seccomp + no-new-privileges
  • Log rotation is configured in /etc/docker/daemon.json
  • docker system prune runs on a schedule (weekly cron)
  • Every service has memory and CPU limits in docker-compose.yml
  • Every service has a health check
  • Restart policy is set (unless-stopped or on-failure)
  • Volumes with important data have automated backups
  • A reverse proxy handles TLS termination and certificate renewal
  • Image tags are pinned to specific versions, not :latest
  • You have a tested update procedure for pulling new images
  • journalctl -u docker and docker logs <container> work and you know where to look

If you cannot check every box, work through the articles linked in each section above.


Copyright 2026 Virtua.Cloud. All rights reserved. This content is original work by the Virtua.Cloud team. Reproduction, republication, or redistribution without written permission is prohibited.

Ready to try it yourself?

Run Docker in production on a reliable VPS.

See VPS Plans