Traefik vs Caddy vs Nginx: Docker Reverse Proxy Compared
Three working Docker Compose stacks for Traefik, Caddy, and Nginx as reverse proxies on a VPS. Same backend, real benchmarks, and a decision framework to pick the right one.
You have Docker containers running on a VPS. You need HTTPS, routing by domain name, and a single entry point. Traefik, Caddy, and Nginx all solve this problem. They solve it differently.
This article gives you three working Docker Compose stacks that deploy the same backend behind each proxy. Copy the one that fits your situation. The comparison table and decision framework at the end help you pick.
All examples use a dedicated proxy network, HTTP-to-HTTPS redirect, and production-grade defaults. The stacks target Ubuntu 24.04 on a Virtua Cloud VPS.
What does a reverse proxy do for Docker containers on a VPS?
A reverse proxy sits between the internet and your Docker containers. It terminates TLS (HTTPS), routes requests to the right container based on the hostname, and exposes a single port pair (80/443) instead of one port per service. This means your containers never handle certificates or bind to public ports directly.
Without a reverse proxy, each container would need its own public port. Visitors would access example.com:3000 for one service and example.com:8080 for another. A reverse proxy lets you use app.example.com and api.example.com on standard ports instead.
All three stacks below assume:
- Docker and Docker Compose are installed (see Docker in Production on a VPS: What Breaks and How to Fix It)
- A DNS A record points your domain to the VPS IP
- Ports 80 and 443 are open in your firewall (see Fix Docker Bypassing UFW: 4 Tested Solutions for Your VPS)
- You have SSH access as a non-root user with sudo
Every example deploys the same backend: the traefik/whoami image, which returns HTTP headers and container info. Replace it with your real application later.
How do you set up Traefik as a Docker reverse proxy with automatic HTTPS?
Traefik discovers containers automatically by reading Docker labels. You add routing rules as labels on each service. When a container starts, Traefik detects it, requests a Let's Encrypt certificate, and begins routing traffic. No config file reload needed.
Create the project directory:

```bash
mkdir -p ~/traefik-proxy && cd ~/traefik-proxy
```

Create the Docker network that all proxied services will share:

```bash
docker network create proxy
```
Create docker-compose.yml:

```yaml
services:
  traefik:
    image: traefik:v3.6
    container_name: traefik
    restart: unless-stopped
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.network=proxy"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
      - "--certificatesresolvers.letsencrypt.acme.email=you@example.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/acme.json"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
      - "--log.level=WARN"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./acme.json:/acme.json
    networks:
      - proxy
    security_opt:
      - no-new-privileges:true

  whoami:
    image: traefik/whoami
    container_name: whoami
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`app.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
    networks:
      - proxy

networks:
  proxy:
    external: true
```
Before starting, create the certificate storage file with restricted permissions:

```bash
touch acme.json && chmod 600 acme.json
```

Traefik refuses to start if acme.json has open permissions. Mode 600 ensures only the owner can read the private keys stored inside.

Start the stack:

```bash
docker compose up -d
```

Check that both containers are running:

```bash
docker compose ps
```

Both traefik and whoami should show status Up. Now test from your local machine (not the server):

```bash
curl https://app.example.com
```
The response contains the whoami output with request headers. The X-Forwarded-For header in the response tells you Traefik is proxying traffic and terminating TLS.
What the labels do:
- `traefik.enable=true` opts this container in (since we set `exposedbydefault=false`)
- `traefik.http.routers.whoami.rule=Host(...)` matches requests by hostname
- `traefik.http.routers.whoami.tls.certresolver=letsencrypt` tells Traefik to obtain a certificate for this domain
To add another service, add it to any Compose file on the same proxy network with the right labels. Traefik picks it up automatically.
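As a sketch, a second service in its own Compose file might look like this (the api image, router name, and domain are placeholders, not part of the stack above):

```yaml
services:
  api:
    image: my-api:latest            # hypothetical image; use your own
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`api.example.com`)"
      - "traefik.http.routers.api.entrypoints=websecure"
      - "traefik.http.routers.api.tls.certresolver=letsencrypt"
    networks:
      - proxy

networks:
  proxy:
    external: true
```

Because the service joins the shared proxy network and sets traefik.enable=true, Traefik starts routing to it within seconds of `docker compose up -d`.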
Is it safe to mount the Docker socket in Traefik?
Mounting /var/run/docker.sock gives Traefik full access to the Docker API. If an attacker compromises Traefik, they can create containers, read environment variables (including secrets), and escalate to root on the host. The :ro flag only prevents writes at the filesystem level. It does not restrict Docker API calls.
For production, use a Docker socket proxy. This sits between Traefik and the Docker daemon, filtering API calls to allow only read operations on container metadata.
Add this to your docker-compose.yml:

```yaml
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy:0.4
    container_name: socket-proxy
    restart: unless-stopped
    environment:
      CONTAINERS: 1
      NETWORKS: 1
      SERVICES: 0
      TASKS: 0
      POST: 0
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - socket-proxy
    security_opt:
      - no-new-privileges:true

  traefik:
    image: traefik:v3.6
    container_name: traefik
    restart: unless-stopped
    depends_on:
      - socket-proxy
    command:
      - "--providers.docker.endpoint=tcp://socket-proxy:2375"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.network=proxy"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
      - "--certificatesresolvers.letsencrypt.acme.email=you@example.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/acme.json"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
      - "--log.level=WARN"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./acme.json:/acme.json
    networks:
      - proxy
      - socket-proxy
    security_opt:
      - no-new-privileges:true

networks:
  proxy:
    external: true
  socket-proxy:
    driver: bridge
    internal: true
```
Notice Traefik no longer mounts the Docker socket directly. The socket-proxy network is internal: true, meaning it has no outbound internet access. The socket proxy only allows GET requests to the containers and networks endpoints.
How do you set up Caddy as a Docker reverse proxy with automatic HTTPS?
Caddy handles HTTPS automatically with zero configuration beyond the domain name. Point a domain at your server, put it in the Caddyfile, and Caddy obtains and renews certificates from Let's Encrypt. No resolver config, no ACME settings. It is the shortest path to HTTPS for a Docker reverse proxy.
Create the project directory:

```bash
mkdir -p ~/caddy-proxy && cd ~/caddy-proxy
```

Create the shared proxy network (skip if you already created it for Traefik):

```bash
docker network create proxy
```
Create Caddyfile:

```caddyfile
app.example.com {
    reverse_proxy whoami:80
    encode gzip

    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "DENY"
        Referrer-Policy "strict-origin-when-cross-origin"
    }
}
```
That's the entire proxy configuration. Caddy reads the domain name, requests a certificate, and proxies to the whoami container on port 80. No certificate resolver, no ACME email required (though you can set one globally if you want expiry notices), and no storage paths to manage.
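If you do want Let's Encrypt to have a contact address, a global options block at the top of the Caddyfile sets it once for every site:

```caddyfile
{
    email you@example.com
}

app.example.com {
    reverse_proxy whoami:80
}
```

The global block must come before any site blocks; everything else stays the same.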
Create docker-compose.yml:

```yaml
services:
  caddy:
    image: caddy:2.11
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - proxy
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE

  whoami:
    image: traefik/whoami
    container_name: whoami
    restart: unless-stopped
    networks:
      - proxy

networks:
  proxy:
    external: true

volumes:
  caddy_data:
  caddy_config:
```
The 443:443/udp port enables HTTP/3 (QUIC), which Caddy supports out of the box. The cap_drop: ALL with cap_add: NET_BIND_SERVICE drops all Linux capabilities except the one needed to bind ports below 1024.
Start the stack:

```bash
docker compose up -d
```

Check container status:

```bash
docker compose ps
```

Both containers should show Up. Test from your local machine with verbose output:

```bash
curl -v https://app.example.com
```
Look for HTTP/2 200 in the output. You should also see the security headers from the Caddyfile (Strict-Transport-Security, X-Content-Type-Options, etc.).
To add another service, add a new block in the Caddyfile with the domain and reverse_proxy directive, then reload:

```bash
docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile
```
No container restart needed. Caddy does not need the Docker socket. It does not auto-discover containers. You manage routing in the Caddyfile.
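For instance, a second site is just one more block in the Caddyfile (the api container name, port, and domain here are placeholders):

```caddyfile
api.example.com {
    reverse_proxy api:8080
    encode gzip
}
```

As long as the api container is on the same proxy network, Caddy can resolve it by container name after a reload.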
How do you set up Nginx as a Docker reverse proxy with Let's Encrypt?
Nginx gives you full control over every proxy directive, header, buffer size, and cache rule. The tradeoff is manual configuration. Nginx does not obtain TLS certificates on its own. You pair it with Certbot, which handles ACME challenges and certificate renewal.
Create the project directory:

```bash
mkdir -p ~/nginx-proxy && cd ~/nginx-proxy
```

Create the shared proxy network:

```bash
docker network create proxy
```
Create the config directory, then the Nginx configuration at nginx/conf.d/app.conf:

```bash
mkdir -p nginx/conf.d
```

```nginx
server {
    listen 80;
    server_name app.example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    http2 on;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers off;
    server_tokens off;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "DENY" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    location / {
        proxy_pass http://whoami:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
server_tokens off; hides the Nginx version from response headers. Version disclosure helps attackers target known vulnerabilities.
Create docker-compose.yml:
```yaml
services:
  nginx:
    image: nginx:1.28
    container_name: nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - certbot_webroot:/var/www/certbot:ro
      - certbot_certs:/etc/letsencrypt:ro
    networks:
      - proxy
    depends_on:
      - whoami

  certbot:
    image: certbot/certbot
    container_name: certbot
    restart: unless-stopped
    volumes:
      - certbot_webroot:/var/www/certbot
      - certbot_certs:/etc/letsencrypt
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

  whoami:
    image: traefik/whoami
    container_name: whoami
    restart: unless-stopped
    networks:
      - proxy

networks:
  proxy:
    external: true

volumes:
  certbot_webroot:
  certbot_certs:
```
Nginx requires the certificate files to exist before it starts. The config above references /etc/letsencrypt/live/app.example.com/fullchain.pem, which doesn't exist yet. For the initial certificate, temporarily replace app.conf with an HTTP-only version:
```nginx
server {
    listen 80;
    server_name app.example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```
Start Nginx and the backend:

```bash
docker compose up -d nginx whoami
```

Request the initial certificate:

```bash
docker compose run --rm certbot certonly \
  --webroot \
  --webroot-path=/var/www/certbot \
  -d app.example.com \
  --email you@example.com \
  --agree-tos \
  --no-eff-email
```
Once the certificate is obtained, restore the full app.conf (the version with the SSL server block shown above), then bring up the full stack:

```bash
docker compose up -d
```

Check all containers are running:

```bash
docker compose ps
```

Test from your local machine:

```bash
curl -v https://app.example.com
```
The server: response header should show nginx without a version number, which tells you server_tokens off is active.
To add another service, create a new .conf file in nginx/conf.d/, then reload:

```bash
docker compose exec nginx nginx -s reload
```
For certificate renewal, the Certbot container runs certbot renew every 12 hours. After renewal, reload Nginx to pick up the new certificates. Automate this with a cron job or a script that checks certificate modification times. For a deeper look at Nginx reverse proxy config, see How to Configure Nginx as a Reverse Proxy.
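One way to automate the reload, assuming the stack lives at /home/deploy/nginx-proxy (the path, schedule, and cron file name are all assumptions to adapt), is a root cron entry on the host:

```
# /etc/cron.d/nginx-cert-reload -- sketch; path and schedule are assumptions
# Reload Nginx daily so certificates renewed by the Certbot container take effect
# -T disables pseudo-TTY allocation, which cron does not provide
0 4 * * * root docker compose -f /home/deploy/nginx-proxy/docker-compose.yml exec -T nginx nginx -s reload
```

A daily reload is a blunt but harmless instrument: nginx -s reload is zero downtime, and certificates renew roughly 30 days before expiry, so a one-day lag is safe.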
How do Traefik, Caddy, and Nginx compare for Docker reverse proxying?
Traefik wins on auto-discovery. Caddy wins on simplicity. Nginx wins on control. The table below breaks down the tradeoffs that matter when running Docker containers on a VPS.
| Feature | Traefik v3 | Caddy 2.11 | Nginx 1.28 |
|---|---|---|---|
| Auto-discovery | Yes (Docker labels) | No (manual Caddyfile) | No (manual conf files) |
| TLS automation | Built-in ACME | Built-in ACME | Requires Certbot sidecar |
| Config method | Docker labels + static YAML/CLI | Caddyfile or JSON API | nginx.conf files |
| Config reload | Automatic on container events | `caddy reload` (zero downtime) | `nginx -s reload` (zero downtime) |
| Docker socket required | Yes (or socket proxy) | No | No |
| HTTP/3 (QUIC) | Yes (opt-in per entrypoint) | Yes (default) | Via third-party module |
| Middleware/plugins | Built-in (rate limit, auth, headers) | Built-in + Go plugins | Via config directives |
| Community/docs | Large, active, good docs | Smaller, excellent docs | Largest, extensive docs |
| Learning curve | Medium (labels + static config) | Low (Caddyfile is intuitive) | High (many directives) |
Which reverse proxy uses the least memory?
Idle memory usage matters on a VPS where every megabyte counts. These numbers come from docker stats --no-stream on a Virtua Cloud 4 vCPU / 8 GB RAM VPS running Ubuntu 24.04. Each proxy ran idle with no traffic before measurement.
| Proxy | Idle RAM | Image Size |
|---|---|---|
| Traefik v3.6 | ~17 MB | ~242 MB |
| Caddy 2.11 | ~14 MB | ~88 MB |
| Nginx 1.28 | ~5 MB | ~240 MB |
| Nginx + Certbot | ~5 MB + ~25 MB | ~240 MB + ~298 MB |
Nginx uses the least memory by far. Caddy sits in the middle. Traefik's higher memory comes from maintaining the Docker provider state and routing table in memory. All three use the default (Debian/Alpine-based) images. Alpine variants would reduce image sizes at the cost of potential compatibility issues with certain extensions.
Under light load (100 concurrent requests via wrk), all three handle the traffic without meaningful CPU or memory increase on this VPS size. The differences only matter at scale or on the smallest VPS plans.
How do you choose the right reverse proxy for your Docker setup?
The right choice depends on how many services you run, how often they change, and what you already know.
Choose Traefik when:
- You run many containers that change frequently (adding/removing services weekly)
- You want zero-touch routing: deploy a container with labels and it's live
- You use Docker Swarm or need service discovery across multiple nodes
- You accept the Docker socket exposure (with a socket proxy for production)
Choose Caddy when:
- You run a few services that rarely change
- You want the simplest path to automatic HTTPS
- You don't want to mount the Docker socket
- You value a small image size and low memory footprint
- You want HTTP/3 support without extra configuration
Choose Nginx when:
- You already know Nginx configuration
- You need fine-grained control over proxy behavior (buffers, caching, custom headers per location)
- You want the lowest possible memory usage
- Your infrastructure team has existing Nginx tooling and monitoring
- You don't mind managing Certbot separately
Decision flowchart:
- Do you run more than 5 Docker services that change regularly? Yes -> Traefik
- Do you need fine-grained proxy tuning or already use Nginx? Yes -> Nginx
- Do you want the fewest moving parts and fastest setup? Yes -> Caddy
For most indie hackers deploying a side project or two, Caddy is the best starting point. For DevOps teams managing a fleet of containers, Traefik's auto-discovery pays for itself. For teams already running Nginx elsewhere, sticking with Nginx keeps your stack consistent (see Docker Networking on a VPS: Bridge, Host, and Macvlan Explained).
Security hardening for all three proxies
Whichever proxy you pick, apply these baseline security practices.
Security headers. All three examples above include HSTS, X-Content-Type-Options, X-Frame-Options, and Referrer-Policy. For Traefik, add them as middleware labels:
```yaml
labels:
  - "traefik.http.middlewares.security-headers.headers.stsSeconds=31536000"
  - "traefik.http.middlewares.security-headers.headers.stsIncludeSubdomains=true"
  - "traefik.http.middlewares.security-headers.headers.contentTypeNosniff=true"
  - "traefik.http.middlewares.security-headers.headers.frameDeny=true"
  - "traefik.http.routers.whoami.middlewares=security-headers"
```
Rate limiting. Traefik has built-in rate limiting middleware. Caddy has a rate_limit directive available as a plugin. Nginx uses limit_req_zone in its config. Rate limiting protects your backend from brute-force attacks and abuse.
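To illustrate the Nginx side, here is a sketch using limit_req (the zone name perip, the 10 r/s rate, and the burst of 20 are arbitrary illustrative values, and the server block is simplified to HTTP only):

```nginx
# In the http context (files in conf.d/ are included there by default)
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    server_name app.example.com;

    location / {
        # Allow bursts of up to 20 extra requests, reject the excess immediately
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://whoami:80;
    }
}
```

Rejected requests get a 503 by default; tune the rate and burst to your backend's real capacity rather than copying these numbers.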
Docker network isolation. Every example uses an external proxy network. Backend services should not be on the default bridge network. Only containers that need to be proxied join the proxy network. Database containers and internal services stay on separate, internal networks (see Docker Security Hardening: Rootless Mode, Seccomp, AppArmor on a VPS).
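A sketch of that layout, with a hypothetical app and database (image names and the backend network name are placeholders):

```yaml
services:
  app:
    image: my-app:latest        # hypothetical application image
    restart: unless-stopped
    networks:
      - proxy                   # reachable by the reverse proxy
      - backend                 # can reach the database

  db:
    image: postgres:17
    restart: unless-stopped
    networks:
      - backend                 # never joins the proxy network

networks:
  proxy:
    external: true
  backend:
    internal: true              # no route to the outside world
```

With internal: true, containers on the backend network can talk to each other but have no outbound internet access and cannot be reached through the proxy.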
Firewall. Only ports 80 and 443 should be publicly accessible. Docker manipulates iptables directly, which can bypass UFW rules. See Fix Docker Bypassing UFW: 4 Tested Solutions for Your VPS for the fix.
Logs. Check proxy logs when something goes wrong:
```bash
# Traefik
docker logs traefik -f

# Caddy
docker logs caddy -f

# Nginx
docker logs nginx -f
```
For Traefik, set --log.level=DEBUG temporarily to diagnose routing or certificate issues. For Caddy, set the debug global option in the Caddyfile. For Nginx, check error.log inside the container at /var/log/nginx/error.log.
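For Caddy, that debug option lives in the global block at the top of the Caddyfile:

```caddyfile
{
    debug
}
```

Reload Caddy after adding it, and remove it again once you have diagnosed the issue, since debug logging is verbose.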
Something went wrong?
| Symptom | Likely cause | Fix |
|---|---|---|
| Certificate not issued | DNS A record not pointing to VPS IP | Verify with `dig app.example.com` |
| Traefik 404 on all routes | Container not on the proxy network | Check `docker network inspect proxy` |
| Caddy "permission denied" on port 80 | Missing NET_BIND_SERVICE capability | Add `cap_add: NET_BIND_SERVICE` |
| Nginx "no such file" for certificate | Certbot hasn't run yet | Run `certbot certonly` first |
| ERR_CONNECTION_REFUSED | Firewall blocking 80/443 | Check `ufw status` or `iptables -L` |
| Traefik acme.json permission error | File permissions too open | Run `chmod 600 acme.json` |
| Proxy works on server, fails externally | Testing on localhost only | Test with curl from your local machine |
For production hardening beyond reverse proxying, see Docker Compose Resource Limits, Healthchecks, and Restart Policies for resource limits and health checks on your Compose stacks.
Copyright 2026 Virtua.Cloud. All rights reserved. This content is original work by the Virtua.Cloud team. Reproduction, republication, or redistribution without written permission is prohibited.
Ready to try it yourself?
Deploy your own server in seconds. Linux, Windows, or FreeBSD.
See VPS Plans