Docker Update Strategy: Zero-Downtime Container Updates on a VPS
Four escalating methods to update Docker containers on a VPS, from simple pull-and-replace to zero-downtime blue-green deployments with Traefik. Covers image pinning, rollback procedures, Diun notifications, and docker-rollout.
Updating Docker containers on a VPS does not have to mean downtime. The right approach depends on what you're running and how much interruption you can tolerate. A personal blog can handle a few seconds of downtime during docker compose up -d. A SaaS product serving paying customers cannot.
This guide covers four methods, from simplest to most resilient. Each builds on the previous one. Start with what fits your situation and graduate to the next level when you need it.
Prerequisites: A VPS running Debian 12 or Ubuntu 24.04 with Docker Engine 27+ and Docker Compose v2 installed. All commands use the docker compose plugin syntax (not the deprecated docker-compose v1 binary).
How do I pin Docker images to a specific version?
Pin your images to a specific minor or patch version in your compose file. The latest tag is a moving target that can pull breaking changes without warning. Pinning gives you control over when updates happen and makes rollback possible by keeping the previous image locally.
Different tag strategies carry different risks:
| Tag format | Example | Risk level | Update behavior |
|---|---|---|---|
| `latest` | `nginx:latest` | High | Any version, any time. You cannot tell what changed. |
| Major only | `nginx:1` | Medium-high | Could jump from 1.25 to 1.27. Minor versions may change behavior. |
| Minor | `nginx:1.27` | Low | Gets patch updates (1.27.0 to 1.27.3). Safe for most workloads. |
| Patch | `nginx:1.27.3` | Very low | Exact version. No surprise updates. You update manually. |
| Digest | `nginx:1.27.3@sha256:6f12...` | Lowest | Byte-identical image every time. Immune to tag mutation. |
For most production services, pin to minor version (image: postgres:16.6). This balances security patches with stability. For services where reproducibility matters (CI, regulated environments), pin to the full digest.
```yaml
services:
  app:
    image: myapp:2.4.1
    # Not: image: myapp:latest
  db:
    image: postgres:16.6
```
Record your current image digests before updating. You will need them for rollback:
```bash
docker image inspect --format='{{index .RepoDigests 0}}' $(docker compose images app -q)
myapp@sha256:a1b2c3d4e5f6...
```
How do I set up health checks in Docker Compose?
Health checks tell Docker whether your container is actually working, not just running. Every zero-downtime pattern depends on them. Without a health check, Docker has no way to know if the new container is ready before removing the old one.
Add a healthcheck block to each service in your compose file. The test command runs inside the container at the specified interval. Docker marks the container as healthy only after the test passes.
```yaml
services:
  app:
    image: myapp:2.4.1
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 15s
      timeout: 5s
      retries: 3
      start_period: 30s
```
What each field does:
- `test`: The command to run. `CMD` runs it directly. Use `CMD-SHELL` if you need shell features like pipes.
- `interval`: Time between checks. 15s is reasonable for web services.
- `timeout`: How long to wait for the command to finish before considering it failed.
- `retries`: Number of consecutive failures before Docker marks the container unhealthy.
- `start_period`: Grace period after container start. Health checks during this window do not count toward the failure threshold. Set it long enough for your app to boot.
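`CMD-SHELL` is also the way out for minimal images that ship BusyBox `wget` but no `curl`. A sketch (the `/health` path mirrors the example above; adjust to your app):

```yaml
healthcheck:
  # --spider makes the request without downloading the response body
  test: ["CMD-SHELL", "wget -q --spider http://localhost:8080/health || exit 1"]
  interval: 15s
  timeout: 5s
  retries: 3
  start_period: 30s
```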
For services that do not have curl installed, use whatever built-in check the service offers:
```yaml
  db:
    image: postgres:16.6
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myuser -d mydb"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 20s

  cache:
    image: redis:7.4
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3
```
After starting your services, check that health checks are passing:
```bash
docker compose ps
NAME   IMAGE           STATUS                   PORTS
app    myapp:2.4.1     Up 2 minutes (healthy)   0.0.0.0:8080->8080/tcp
db     postgres:16.6   Up 2 minutes (healthy)   5432/tcp
```
The (healthy) status means your health check is configured and passing. If you see (health: starting), the container is still in its start_period. If (unhealthy), check the health check logs:
```bash
docker inspect --format='{{json .State.Health}}' $(docker compose ps -q app) | python3 -m json.tool
```
How do I update a Docker container on my VPS?
Run docker compose pull to fetch the new image, then docker compose up -d to replace the container. Docker Compose stops the old container, removes it, and starts a new one from the updated image. This causes a brief interruption (typically 2-10 seconds) while the new container starts and passes its health check.
Step-by-step: the simple update
Before updating anything, back up your volumes. A broken update with corrupted data is far worse than a few seconds of downtime.
Read the changelog for the new version. Check for breaking changes, deprecated config options, and required migration steps. This takes five minutes and saves hours of debugging.
```bash
# Pull the new image
docker compose pull app

# Check what changed
docker compose up -d --dry-run
```
The --dry-run flag (Docker Compose v2.20+) shows what Compose will do without actually doing it. You will see which containers get recreated:
```text
DRY RUN MODE - service "app" - Pull
DRY RUN MODE - Container app-1 - Recreate
DRY RUN MODE - Container app-1 - Started
```
Apply the update:
```bash
docker compose up -d app
[+] Running 1/1
 ✔ Container app-1  Started  0.8s
```
Check that the new container is healthy:
```bash
docker compose ps app
NAME   IMAGE         STATUS                    PORTS
app    myapp:2.5.0   Up 15 seconds (healthy)   0.0.0.0:8080->8080/tcp
```
Then test from outside the server to make sure the service is reachable:
```bash
curl -s -o /dev/null -w "%{http_code}" https://app.example.com/health
200
```
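The whole pull / preview / apply / verify sequence can be wrapped in a small helper function (a sketch; `app` is the service name from the examples above, substitute your own):

```shell
# update_service: pull, preview, apply, and verify one Compose service
# (sketch; expects a service name defined in your compose file)
update_service() {
  svc="$1"
  docker compose pull "$svc" || return 1
  docker compose up -d --dry-run "$svc"   # preview recreation (Compose v2.20+)
  docker compose up -d "$svc" || return 1
  docker compose ps "$svc"                # confirm the (healthy) status
}
```

Run it as `update_service app` after saving state and reading the changelog.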
When should you update your Docker containers?
Not all updates carry the same urgency. A one-size-fits-all update schedule leads to either unnecessary risk or missed security patches.
- Security patches (CVEs): Apply immediately. Subscribe to your images' security advisories. A known CVE in a public-facing container gets exploited within hours of disclosure, not days.
- Patch versions (e.g., 2.4.1 to 2.4.2): Schedule weekly or biweekly. These are bug fixes. Read the changelog, update, verify.
- Minor versions (e.g., 2.4 to 2.5): Schedule monthly. Test in a staging environment first if you have one. Review the changelog for behavior changes.
- Major versions (e.g., 2.x to 3.x): Plan and test. Major versions break things. Read the migration guide. Test on a separate VPS or local environment before touching production.
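The cadence above assumes you can tell what kind of jump a new tag represents. A tiny helper to classify it (illustrative; handles plain `MAJOR.MINOR.PATCH` tags only, not suffixes like `-alpine`):

```shell
# classify_update: name the kind of version jump between two semver tags
# (illustrative sketch; no pre-release or build-metadata handling)
classify_update() {
  old="$1"; new="$2"
  old_major=${old%%.*}; new_major=${new%%.*}
  old_rest=${old#*.};   new_rest=${new#*.}
  old_minor=${old_rest%%.*}; new_minor=${new_rest%%.*}
  if   [ "$old_major" != "$new_major" ]; then echo "major"
  elif [ "$old_minor" != "$new_minor" ]; then echo "minor"
  elif [ "$old" != "$new" ];             then echo "patch"
  else echo "same"
  fi
}
```

For example, `classify_update 2.4.1 2.5.0` prints `minor`, telling you this is a "schedule monthly, read the changelog" update rather than a routine patch.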
How do I roll back a Docker container to a previous image?
Docker Compose has no built-in rollback command. To revert: edit your compose file to pin the previous image tag or digest, then run docker compose up -d. The container restarts with the old image. This works only if you kept the old image locally (do not run docker image prune right after updating).
Step-by-step rollback
Assume you updated myapp from 2.4.1 to 2.5.0 and the new version is broken.
- Check that the old image is still available locally:
  ```bash
  docker images myapp
  REPOSITORY   TAG     IMAGE ID       CREATED       SIZE
  myapp        2.5.0   abc123def456   2 hours ago   185MB
  myapp        2.4.1   789fed654cba   2 weeks ago   182MB
  ```
- Edit your compose file to pin the previous version:
  ```yaml
  services:
    app:
      image: myapp:2.4.1
  ```
- Roll back:
  ```bash
  docker compose up -d app
  [+] Running 1/1
   ✔ Container app-1  Started  0.7s
  ```
- Check the rollback worked:
  ```bash
  docker compose ps app
  NAME   IMAGE         STATUS                    PORTS
  app    myapp:2.4.1   Up 10 seconds (healthy)   0.0.0.0:8080->8080/tcp
  ```
If you already pruned the old image, Docker will pull it again from the registry (assuming the tag still exists). For maximum safety, note the full digest (sha256:...) before updating. Digests are immutable. Tags can be overwritten.
Automate it with a pre-update script
Save the current state before every update so rollback is always one command away:
```bash
#!/bin/bash
# save-state.sh - Run before every update
COMPOSE_FILE="${1:-docker-compose.yml}"
DATE=$(date +%Y%m%d-%H%M%S)
BACKUP_DIR="./rollback/${DATE}"

mkdir -p "${BACKUP_DIR}"
cp "${COMPOSE_FILE}" "${BACKUP_DIR}/"
docker compose ps --format json > "${BACKUP_DIR}/containers.json"
docker compose images --format json > "${BACKUP_DIR}/images.json"
echo "State saved to ${BACKUP_DIR}"
```
```bash
chmod 700 save-state.sh
```
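A matching restore step makes the rollback side explicit. A sketch (it assumes a backup directory created by save-state.sh above):

```shell
# restore_state: copy a saved compose file back into place; you then re-run
# `docker compose up -d` yourself (sketch; pairs with save-state.sh)
restore_state() {
  backup_dir="$1"
  compose_file="${2:-docker-compose.yml}"
  cp "${backup_dir}/$(basename "${compose_file}")" "${compose_file}" || return 1
  echo "Restored ${compose_file} from ${backup_dir}"
}
```

Usage: `restore_state ./rollback/<timestamp>` followed by `docker compose up -d`.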
Is Watchtower still maintained in 2026?
Watchtower was archived on December 17, 2025. The maintainers no longer use Docker and stopped development. The last release is v1.7.1. More importantly, Watchtower's Docker SDK uses API v1.25, but Docker Engine 29 raised the minimum API version to v1.44. Watchtower is incompatible with current Docker versions unless you manually lower the API minimum with DOCKER_MIN_API_VERSION=1.25 in your daemon config. That is a workaround, not a solution.
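For completeness, that workaround is an environment variable on the Docker daemon itself; on systemd hosts a drop-in file is the conventional place to set it (a stopgap while you migrate, not a fix):

```ini
# /etc/systemd/system/docker.service.d/override.conf
[Service]
Environment="DOCKER_MIN_API_VERSION=1.25"
```

Apply it with `systemctl daemon-reload && systemctl restart docker`.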
If you are running Watchtower today, plan to migrate. For automated update notifications without automatic restarts, use Diun. For automated updates with zero-downtime, use docker-rollout behind a reverse proxy.
How does Diun notify you about Docker image updates?
Diun (Docker Image Update Notifier) watches your Docker registries and sends notifications when new image versions are available. It does not update containers. It tells you an update exists so you can review the changelog and update on your own terms. This is the "know before you act" approach.
Add Diun to your existing compose file or create a dedicated one:
```yaml
services:
  diun:
    image: crazymax/diun:4
    command: serve
    volumes:
      - "diun-data:/data"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    environment:
      TZ: "Europe/Berlin"
      DIUN_WATCH_WORKERS: "10"
      DIUN_WATCH_SCHEDULE: "0 6 * * *"
      DIUN_PROVIDERS_DOCKER: "true"
      DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT: "true"
      DIUN_NOTIF_SLACK_WEBHOOKURL_FILE: "/run/secrets/slack_webhook"
    secrets:
      - slack_webhook
    restart: unless-stopped

secrets:
  slack_webhook:
    file: ./secrets/slack_webhook.txt

volumes:
  diun-data:
```
The Slack webhook URL goes in a secrets file, not an environment variable, because Docker secrets keeps it out of docker inspect output and process listings. Create the secrets file with restricted permissions:
```bash
mkdir -p secrets
echo "https://hooks.slack.com/services/YOUR/WEBHOOK/URL" > secrets/slack_webhook.txt
chmod 600 secrets/slack_webhook.txt
```
Key settings explained:
- `DIUN_WATCH_SCHEDULE`: Cron expression. `0 6 * * *` checks daily at 06:00. Adjust to your maintenance window.
- `DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT`: When `true`, Diun watches all running containers. Set to `false` and use labels for selective monitoring.
- Docker socket mount: Read-only (`:ro`) because Diun only reads container metadata. It never starts or stops containers.
For selective monitoring (recommended for larger stacks), set WATCHBYDEFAULT to false and add labels to containers you want to monitor:
```yaml
services:
  app:
    image: myapp:2.4.1
    labels:
      - "diun.enable=true"
      - "diun.watch_repo=true"
```
Start Diun and check the logs:
```bash
docker compose up -d diun
docker compose logs diun --tail 20
diun  | Thu, 19 Mar 2026 06:00:01 CET INF Starting Diun version=v4.31.0
diun  | Thu, 19 Mar 2026 06:00:01 CET INF Configuration loaded from 5 environment variable(s)
diun  | Thu, 19 Mar 2026 06:00:02 CET INF Cron triggered
diun  | Thu, 19 Mar 2026 06:00:03 CET INF New image found image=docker.io/myapp:2.5.0 provider=docker
```
When Diun finds a new image, it sends a Slack message with the image name, current tag, and new tag. You decide whether and when to update.
How does docker-rollout achieve zero-downtime updates?
docker-rollout is a Docker CLI plugin that performs blue-green deployments for Compose services. It starts a new container from the updated image, waits for the health check to pass, then removes the old container. Traffic never hits an unhealthy container because the reverse proxy routes to healthy containers only.
Requirements:
- A reverse proxy (Traefik, Caddy, or nginx-proxy) routing traffic to your service
- Health checks defined in your compose file
- No `container_name` directive on the service (docker-rollout manages container names)
- No direct `ports` mapping on the service (the reverse proxy handles port exposure)
Install docker-rollout
```bash
mkdir -p /usr/local/lib/docker/cli-plugins
curl -fsSL https://raw.githubusercontent.com/wowu/docker-rollout/main/docker-rollout \
  -o /usr/local/lib/docker/cli-plugins/docker-rollout
chmod +x /usr/local/lib/docker/cli-plugins/docker-rollout
```
It should report its version when installed:
```bash
docker rollout --version
docker-rollout version v0.13
```
Example: Traefik + docker-rollout
A minimal compose file for a web app behind Traefik with health checks. The app has no ports or container_name because docker-rollout needs to manage scaling.
```yaml
services:
  traefik:
    image: traefik:3.3
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    restart: unless-stopped

  app:
    image: myapp:2.4.1
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`app.example.com`)"
      - "traefik.http.routers.app.entrypoints=web"
      - "traefik.http.services.app.loadbalancer.server.port=8080"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 15s
      timeout: 5s
      retries: 3
      start_period: 30s
    restart: unless-stopped
```
Deploy with zero downtime
Pull the new image, then use docker rollout instead of docker compose up -d:
```bash
docker compose pull app
docker rollout app
==> Scaling 'app' to '2' instances
 Container myproject-app-2  Creating
 Container myproject-app-2  Created
 Container myproject-app-2  Starting
 Container myproject-app-2  Started
==> Waiting for new containers to be healthy (timeout: 60 seconds)
==> Stopping and removing old containers
```
During this process, Traefik detects the new container through the Docker socket, routes traffic to it once healthy, and stops routing to the old container before it is removed. Your users see no interruption.
If the new container fails its health check, docker-rollout aborts and the old container keeps running. No manual intervention needed.
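To see the zero-downtime claim for yourself, poll the service from a second terminal while `docker rollout` runs. A minimal sketch (`app.example.com` stands in for your own domain):

```shell
# check_health: HTTP status code for a URL (curl assumed present on the VPS)
check_health() {
  curl -s -o /dev/null -w "%{http_code}" "$1"
}

# count_failures: call check_health n times and report how many requests
# did not return 200
count_failures() {
  url="$1"; n="$2"; fails=0; i=0
  while [ "$i" -lt "$n" ]; do
    [ "$(check_health "$url")" = "200" ] || fails=$((fails + 1))
    i=$((i + 1))
  done
  echo "$fails"
}
```

Run `count_failures https://app.example.com/health 200` during the rollout; with a healthy deploy the count should stay at 0.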
What is blue-green deployment with Docker and Traefik?
Blue-green deployment runs two copies of your service (blue and green). One serves live traffic while the other sits idle. To deploy, you update the idle copy, verify it works, then switch traffic. This gives you instant rollback by switching back to the previous copy.
This is the concept behind docker-rollout, but you can implement it manually for more control. A minimal example using Traefik dynamic configuration:
```yaml
services:
  traefik:
    image: traefik:3.3
    command:
      - "--providers.file.directory=/etc/traefik/dynamic"
      - "--providers.file.watch=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - "./traefik/dynamic:/etc/traefik/dynamic:ro"
    restart: unless-stopped

  app-blue:
    image: myapp:2.4.1
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 15s
      timeout: 5s
      retries: 3
      start_period: 30s

  app-green:
    image: myapp:2.4.1
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 15s
      timeout: 5s
      retries: 3
      start_period: 30s
```
A Traefik dynamic config file controls which copy receives traffic:
```yaml
# traefik/dynamic/app.yml
http:
  routers:
    app:
      rule: "Host(`app.example.com`)"
      service: app
      entryPoints:
        - web
  services:
    app:
      loadBalancer:
        servers:
          - url: "http://app-blue:8080"
```
To deploy: update app-green's image, start it, wait for it to be healthy, then edit app.yml to point to app-green. Traefik picks up the change automatically because watch=true. To roll back, edit the file to point back to app-blue.
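The switch itself can be scripted so that a deploy or rollback is one command. A sketch assuming the app.yml layout above (`sed -i` as on GNU/Linux):

```shell
# flip: toggle the Traefik dynamic config between app-blue and app-green
# (sketch; assumes exactly one of the two names appears in the file)
flip() {
  file="${1:-traefik/dynamic/app.yml}"
  if grep -q "app-blue" "$file"; then
    sed -i 's/app-blue/app-green/' "$file"
    echo "Traffic now routed to app-green"
  else
    sed -i 's/app-green/app-blue/' "$file"
    echo "Traffic now routed to app-blue"
  fi
}
```

Because Traefik watches the directory, running `flip` switches live traffic; running it again is your instant rollback.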
This approach is more work than docker-rollout. Use it when you need explicit control over the switch, want to run smoke tests against the new version before switching traffic, or need multiple services to switch together.
Which update method should I use?
Pick the method that matches your downtime tolerance and infrastructure complexity.
| Method | Downtime | Complexity | Rollback | Reverse proxy required | Best for |
|---|---|---|---|---|---|
| `docker compose pull` + `up` | 2-10 seconds | Low | Manual (edit compose file) | No | Personal projects, internal tools |
| Diun + manual update | Same as above | Low | Same as above | No | Teams that want visibility before updating |
| docker-rollout | None | Medium | Automatic (aborts on failure) | Yes | Production services on a single VPS |
| Blue-green (manual) | None | High | Instant (switch config file) | Yes | Multi-service stacks, regulated environments |
Decision flow:
- Is 2-10 seconds of downtime acceptable? Use `docker compose pull && docker compose up -d`.
- Do you want to know about updates before applying them? Add Diun.
- Is zero downtime required? Do you have a reverse proxy? Use docker-rollout.
- Need explicit control over the traffic switch? Implement blue-green manually.
Something went wrong?
Container starts but shows (unhealthy)
Check the health check command. Run it manually inside the container:
```bash
docker compose exec app curl -f http://localhost:8080/health
```
If this fails, the issue is in your application, not Docker. Check application logs:
```bash
docker compose logs app --tail 50
```
Old image was pruned, cannot roll back
If the tag still exists on the registry, docker compose pull will fetch it. If you pinned by digest, Docker pulls the exact image regardless of tag changes:
```yaml
image: myapp:2.4.1@sha256:789fed654cba...
```
docker-rollout hangs during deployment
The health check is not passing within the timeout. Check the health check interval and retries. Increase the timeout:
```bash
docker rollout -t 120 app
```
Watchtower stopped working after Docker update
Docker Engine 29 requires API v1.44 minimum. Watchtower uses API v1.25. Migrate to Diun for notifications or docker-rollout for automated zero-downtime updates.
Diun is not detecting new images
Check the cron schedule in DIUN_WATCH_SCHEDULE. Trigger a manual scan:
```bash
docker compose exec diun diun image list
```
Check Diun logs for registry authentication errors:
```bash
docker compose logs diun --tail 30
```
Copyright 2026 Virtua.Cloud. All rights reserved. This content is original work by the Virtua.Cloud team. Reproduction, republication, or redistribution without written permission is prohibited.