Docker Log Rotation: Stop Logs from Filling Your VPS Disk
Docker's default logging configuration has no size limit: every line your containers write to stdout or stderr is stored on disk, forever. On a VPS with 25GB or 50GB of storage, a single chatty container can eat all available space in days.
This tutorial fixes that. You will configure global log rotation, set per-service limits in Docker Compose, automate disk cleanup, and set up monitoring so the problem never catches you off guard.
All commands are tested on Debian 12 and Ubuntu 24.04 with Docker Engine 28.x/29.x.
Prerequisites:
- A VPS running Debian 12 or Ubuntu 24.04 with Docker installed
- SSH access with a sudo user
- Basic familiarity with Docker and Docker Compose
Why do Docker container logs fill up your disk?
Docker's default log driver is json-file. It captures everything a container writes to stdout and stderr, then stores it as JSON in /var/lib/docker/containers/<container-id>/<container-id>-json.log. By default, there is no maximum size and no rotation. The file grows until your disk is full.
A Node.js app logging at INFO level produces roughly 50MB per day, and a reverse proxy under moderate traffic can generate 200MB or more; a container stuck in a crash or error loop can emit gigabytes per day. On a 50GB VPS, that means unmanaged logs steadily eat free space over weeks, and a single misbehaving container can exhaust it in days.
When the disk fills, everything breaks at once: containers cannot write, databases crash, SSH sessions may freeze, and new logins can fail because the system cannot write session files, leaving you few ways to fix the problem.
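For a back-of-envelope estimate of how long you have, divide free space by the combined log rate. Both numbers in this sketch are assumptions; substitute your own measurements:

```shell
# Back-of-envelope: days until logs fill the remaining disk.
# Both inputs are assumptions -- replace them with your own numbers.
free_gb=40            # free space left on the VPS
rate_mb_per_day=250   # combined log output of all containers
echo "$(( free_gb * 1024 / rate_mb_per_day )) days until the disk is full"
```

With these example values the estimate is 163 days; a container in an error loop logging gigabytes per day shrinks that to under two weeks.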
How to check current disk usage
Before changing anything, measure the damage:
df -h /var/lib/docker
This shows how much space Docker's data directory uses. Then get a Docker-specific breakdown:
docker system df
Expected output:
TYPE            TOTAL   ACTIVE   SIZE     RECLAIMABLE
Images          5       3        1.2GB    450MB (37%)
Containers      8       4        3.8GB    3.1GB (81%)
Local Volumes   3       2        500MB    120MB (24%)
Build Cache     12      0        800MB    800MB (100%)
The "Containers" row shows log file sizes. If that number is disproportionately large, logs are the problem.
For a per-container breakdown:
docker system df -v
This lists every container with its log size. Find the offenders.
How to find the largest log files directly
If docker system df confirms the issue, find exactly which logs are consuming space:
sudo find /var/lib/docker/containers/ -name "*-json.log" -exec ls -sh {} + | sort -rh | head -10
This lists the 10 largest container log files with their sizes.
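To trust a pipeline like this before aiming it at /var/lib/docker, you can try it on throwaway files. This sketch fabricates two fake log files and ranks them the same way:

```shell
# Demo of the size-ranking pipeline on disposable files.
tmp=$(mktemp -d)
head -c 3145728 /dev/zero > "$tmp/big-json.log"     # 3 MiB fake log
head -c 1048576 /dev/zero > "$tmp/small-json.log"   # 1 MiB fake log
# Same shape as the real command: list sizes, sort descending, take top N.
find "$tmp" -name "*-json.log" -exec ls -sh {} + | sort -rh | head -10
rm -rf "$tmp"
```

The larger file appears first, exactly as the biggest container logs will when you run the real command as root.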
Emergency triage: reclaim disk space now
If your disk is already full or nearly full, fix the immediate problem before configuring rotation.
Truncate a specific container's log file (without stopping the container):
sudo truncate -s 0 /var/lib/docker/containers/<container-id>/<container-id>-json.log
Replace <container-id> with the actual container ID from docker ps --no-trunc -q.
To truncate all Docker log files at once:
sudo sh -c 'truncate -s 0 /var/lib/docker/containers/*/*-json.log'
Verify the space was recovered:
df -h /var/lib/docker
This is a temporary fix. The logs will grow back. The next sections make the fix permanent.
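truncate (rather than rm) is the right tool here because Docker keeps each log file open: deleting an open file leaves its blocks allocated until the writer closes its descriptor, while truncating frees the space immediately. A demonstration on a throwaway file:

```shell
# Why truncate instead of rm: truncating an open file frees space at once.
tmp=$(mktemp -d)
log="$tmp/demo-json.log"
head -c 1048576 /dev/zero > "$log"   # simulate a 1 MiB container log
wc -c < "$log"                        # 1048576
truncate -s 0 "$log"                  # same call used on the real log files
wc -c < "$log"                        # 0 -- space reclaimed immediately
rm -rf "$tmp"
```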
How do you configure log rotation in Docker daemon.json?
The daemon.json file sets default logging options for all new containers. Docker's json-file driver supports max-size (the maximum size a log file reaches before it is rotated) and max-file (the total number of log files to keep, including the one currently being written). All values in log-opts must be strings, even numbers.
Create or edit the daemon configuration:
sudo nano /etc/docker/daemon.json
If the file does not exist, create it. If it already contains configuration (like custom registries or DNS), add the log-driver and log-opts keys alongside the existing ones.
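Because invalid JSON in daemon.json prevents Docker from starting (see Troubleshooting below), it is worth validating before you restart. A sketch of the workflow using a candidate file (the temp path is illustrative):

```shell
# Write the candidate config to a temp file and validate it before
# installing it as /etc/docker/daemon.json.
cat > /tmp/daemon-candidate.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
EOF
python3 -m json.tool /tmp/daemon-candidate.json > /dev/null \
  && echo "valid JSON -- safe to install"
```

If the validation fails, json.tool prints the offending line and column instead.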
Sizing recommendations per VPS disk
Pick values based on your VPS disk size:
| VPS Disk Size | max-size | max-file | Max Log per Container | Rationale |
|---|---|---|---|---|
| 25 GB | 5m | 3 | 15 MB | Tight disk; keep logs minimal |
| 50 GB | 10m | 5 | 50 MB | Standard VPS; balanced retention |
| 100 GB | 25m | 5 | 125 MB | Generous disk; longer retention |
| 200 GB+ | 50m | 5 | 250 MB | Large disk; extended debugging |
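The "Max Log per Container" column is simply max-size multiplied by max-file. A quick arithmetic sanity check for the 50 GB row:

```shell
# Per-container worst case = max-size x max-file (50 GB row of the table).
max_size_mb=10
max_file=5
echo "$(( max_size_mb * max_file )) MB of logs per container, worst case"
```

This prints 50 MB, matching the table; multiply by your container count to estimate total worst-case log usage.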
For a 50GB VPS (the most common choice), use this configuration:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "5"
}
}
Save the file, then restart Docker:
sudo systemctl restart docker
Verify the new defaults are active:
docker info --format '{{.LoggingDriver}}'
Expected output:
json-file
Confirm the log options apply to new containers by running one and inspecting it:
docker run -d --name log-test alpine echo "test" && docker inspect --format '{{.HostConfig.LogConfig}}' log-test && docker rm log-test
Expected output:
{json-file map[max-file:5 max-size:10m]}
What happens to running containers after changing daemon.json?
Existing containers keep their original log configuration. The new settings only apply to containers created after the restart. Running containers must be recreated to pick up the new defaults.
If you use Docker Compose:
docker compose down && docker compose up -d
For standalone containers, stop and remove them, then run docker run again with the same options. With Compose, --force-recreate does the recreation in one step:
docker compose up -d --force-recreate
Verify a specific container uses the new config:
docker inspect --format '{{.HostConfig.LogConfig}}' <container-name>
Expected output:
{json-file map[max-file:5 max-size:10m]}
How do you set log limits in Docker Compose?
Per-service log configuration in Docker Compose overrides the daemon.json defaults. This lets you give chatty services tighter limits and quiet services more headroom.
Add the logging block under any service:
services:
web:
image: nginx:alpine
logging:
driver: json-file
options:
max-size: "5m"
max-file: "3"
ports:
- "80:80"
api:
image: node:22-alpine
logging:
driver: json-file
options:
max-size: "20m"
max-file: "5"
ports:
- "3000:3000"
To avoid repeating the logging block in every service, use a YAML anchor:
x-logging: &default-logging
logging:
driver: json-file
options:
max-size: "10m"
max-file: "5"
services:
web:
image: nginx:alpine
<<: *default-logging
ports:
- "80:80"
api:
image: node:22-alpine
<<: *default-logging
ports:
- "3000:3000"
worker:
image: myapp/worker:latest
logging:
driver: json-file
options:
max-size: "50m"
max-file: "3"
The worker service overrides the anchor with its own limits. Every other service gets the shared config.
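To see what a service actually inherits from the anchor, you can render the merge. This sketch uses Python's PyYAML (assumed installed; on a Docker host, docker compose config performs the same rendering):

```shell
# Render the YAML merge key to confirm the anchor expands as expected.
# Assumes python3 with PyYAML is available.
cat > /tmp/anchor-demo.yml <<'EOF'
x-logging: &default-logging
  logging:
    driver: json-file
    options:
      max-size: "10m"
      max-file: "5"
services:
  web:
    image: nginx:alpine
    <<: *default-logging
EOF
python3 -c '
import yaml
doc = yaml.safe_load(open("/tmp/anchor-demo.yml"))
print(doc["services"]["web"]["logging"]["options"]["max-size"])
'
```

The merge key expands, so web resolves to max-size "10m" just as if the logging block had been written inline.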
Apply the configuration:
docker compose up -d --force-recreate
Check it works:
docker inspect --format '{{.HostConfig.LogConfig}}' web
What is the difference between json-file, local, and journald log drivers?
Three of Docker's log drivers store logs on the local host, each with different tradeoffs. The json-file driver is the default. The local driver is Docker's recommended replacement for disk efficiency. The journald driver integrates with systemd's journal.
| Feature | json-file | local | journald |
|---|---|---|---|
| Default rotation | None | Yes (100MB total) | Managed by journald |
| Compression | Optional (compress: "true") | Enabled by default | Managed by journald |
| docker logs support | Yes | Yes | Yes |
| Log format | JSON (human-readable) | Binary (internal) | Binary (journald) |
| External tool access | Easy (plain text files) | Not supported | Via journalctl |
| Default max-size | Unlimited | 20MB per file | Set in journald.conf |
| Default max-file | 1 (no rotation) | 5 files | N/A |
When to use each driver
json-file is the safe default. It works everywhere, supports docker logs, and the log files are plain JSON that any tool can parse. Add max-size and max-file and it works well for most VPS setups.
local is better for disk efficiency. Compression is on by default, rotation is built in (5 files of 20MB = 100MB per container), and you do not need to configure anything. The tradeoff: log files use an internal binary format. External log shippers that read files directly (like Filebeat in file mode) cannot parse them. If you only read logs through docker logs or ship them via a Docker logging plugin, switch to local.
journald is the right choice if you already use systemd's journal for all your other services and want container logs in the same place. Rotation is handled by journald's own configuration (/etc/systemd/journald.conf). You read logs with journalctl instead of docker logs (though docker logs still works).
How to switch to the local driver
Edit /etc/docker/daemon.json:
{
"log-driver": "local",
"log-opts": {
"max-size": "10m",
"max-file": "5"
}
}
Restart Docker:
sudo systemctl restart docker
Check it works:
docker info --format '{{.LoggingDriver}}'
Expected output:
local
Recreate containers to apply the new driver:
docker compose up -d --force-recreate
How to use the journald driver
Edit /etc/docker/daemon.json:
{
"log-driver": "journald"
}
Restart Docker:
sudo systemctl restart docker
Read logs for a specific container:
sudo journalctl CONTAINER_NAME=mycontainer --no-pager -n 50
Follow logs in real time:
sudo journalctl CONTAINER_NAME=mycontainer -f
Journald rotation is controlled in /etc/systemd/journald.conf. Key settings:
[Journal]
SystemMaxUse=500M
SystemMaxFileSize=50M
MaxRetentionSec=7day
After editing, restart journald:
sudo systemctl restart systemd-journald
How do you automate Docker cleanup with cron or systemd timers?
Log rotation prevents individual containers from growing unbounded. But Docker also accumulates stopped containers, unused images, dangling build cache, and orphaned networks. docker system prune cleans these up.
What does docker system prune actually delete?
By default, docker system prune removes:
- All stopped containers
- All networks not used by any running container
- All dangling images (untagged images not referenced by any container)
- All unused build cache
It does not delete:
- Running containers
- Named volumes (your database data is safe)
- Tagged images that are still referenced
- Images used by running containers
The --all flag additionally removes all unused images (not just dangling ones). The --volumes flag adds anonymous volumes to the cleanup. Use --volumes with caution: it destroys data in anonymous volumes.
Option 1: cron job
Create a weekly prune job:
sudo crontab -e
Add:
0 3 * * 0 /usr/bin/docker system prune -f >> /var/log/docker-prune.log 2>&1
This runs every Sunday at 03:00. The -f flag skips the confirmation prompt. Output goes to a log file for auditing.
Verify the crontab was saved:
sudo crontab -l
Option 2: systemd timer (recommended)
Systemd timers are more reliable than cron. They log to the journal, handle missed runs (if the server was off), and are easier to monitor.
Create the service unit:
sudo nano /etc/systemd/system/docker-prune.service
[Unit]
Description=Docker system prune
Wants=docker.service
After=docker.service
[Service]
Type=oneshot
ExecStart=/usr/bin/docker system prune -f --filter "until=168h"
The --filter "until=168h" flag only prunes objects older than 7 days. This protects recently stopped containers you might want to inspect.
Create the timer:
sudo nano /etc/systemd/system/docker-prune.timer
[Unit]
Description=Run Docker prune weekly
[Timer]
OnCalendar=Sun *-*-* 03:00:00
Persistent=true
RandomizedDelaySec=1800
[Install]
WantedBy=timers.target
Persistent=true means if the server was off during the scheduled time, the job runs at next boot. RandomizedDelaySec spreads the load if you have multiple servers.
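Before enabling the timer, you can ask systemd to validate the OnCalendar expression and show when it next fires (systemd-analyze ships with systemd on Debian and Ubuntu):

```shell
# Parse the calendar expression; prints its normalized form and next elapse.
systemd-analyze calendar "Sun *-*-* 03:00:00"
```

A typo in the expression makes this command fail immediately, which is far cheaper than discovering a timer that never fires.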
Enable and start the timer:
sudo systemctl daemon-reload
sudo systemctl enable --now docker-prune.timer
enable makes it survive reboots. --now starts it immediately.
Verify the timer is active:
sudo systemctl status docker-prune.timer
Check when it will run next:
sudo systemctl list-timers docker-prune.timer
Test it manually:
sudo systemctl start docker-prune.service
Check the result:
sudo journalctl -u docker-prune.service --no-pager -n 20
How do you clean up Docker volumes safely?
Volumes hold persistent data: databases, uploads, configuration. Be careful here.
List all volumes and their usage:
docker volume ls
Show only volumes not attached to any container:
docker volume ls -f dangling=true
Remove dangling volumes:
docker volume prune -f
On Docker Engine 23 and later, this removes only anonymous volumes not used by any container (running or stopped); pass --all to also remove unused named volumes. Named volumes attached to stopped containers count as in use and are safe either way.
Verify what remains:
docker volume ls
There is no need to run docker volume prune immediately after docker system prune --volumes: the --volumes flag already performs the same anonymous-volume cleanup, so running both is redundant.
To remove a specific volume you have identified as unnecessary:
docker volume rm <volume-name>
Always check what data a volume holds before removing it:
docker volume inspect <volume-name>
The Mountpoint field shows where the data lives on disk. You can inspect its contents:
sudo ls -la $(docker volume inspect --format '{{.Mountpoint}}' <volume-name>)
How do you monitor Docker disk usage on a VPS?
Automated monitoring prevents surprises. This section sets up a threshold alert that checks Docker disk usage and sends a warning when it crosses a limit.
Quick manual check
Run these two commands whenever you want a snapshot:
df -h /var/lib/docker
docker system df
For detailed per-container and per-image breakdown:
docker system df -v
Automated alert script
Create a monitoring script:
sudo nano /usr/local/bin/docker-disk-alert.sh
#!/bin/bash
# Alert when Docker's partition exceeds a usage threshold
THRESHOLD=80
MAILTO="admin@example.com"
USAGE=$(df -P /var/lib/docker | awk 'NR==2 {gsub(/%/,""); print $5}')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
DOCKER_DF=$(docker system df 2>&1)
DISK_DF=$(df -h /var/lib/docker 2>&1)
TOP_LOGS=$(find /var/lib/docker/containers/ -name "*-json.log" -exec ls -sh {} + 2>/dev/null | sort -rh | head -5)
BODY="Docker disk usage on $(hostname) is at ${USAGE}%.
Disk usage:
${DISK_DF}
Docker breakdown:
${DOCKER_DF}
Largest log files:
${TOP_LOGS}"
echo "$BODY" | mail -s "ALERT: Docker disk at ${USAGE}% on $(hostname)" "$MAILTO"
logger -t docker-disk-alert "Docker disk usage at ${USAGE}% - alert sent"
fi
Set permissions:
sudo chmod 750 /usr/local/bin/docker-disk-alert.sh
Verify permissions:
ls -la /usr/local/bin/docker-disk-alert.sh
Expected output:
-rwxr-x--- 1 root root 612 Mar 19 12:00 /usr/local/bin/docker-disk-alert.sh
The script requires mailutils (or mailx) for email delivery. Install it if not present:
sudo apt install -y mailutils
Test the script:
sudo /usr/local/bin/docker-disk-alert.sh
If your disk usage is below the threshold, nothing happens. To test the alert path, temporarily set THRESHOLD=1 in the script, run it, then set it back.
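To verify the awk extraction in the script behaves as intended, you can run it against a canned df line (the sample filesystem values are invented):

```shell
# Feed the awk extraction a canned two-line df output and confirm it
# yields a bare number the threshold comparison can use.
sample='Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/vda1       51474912 42000000   6800000  87% /'
usage=$(printf '%s\n' "$sample" | awk 'NR==2 {gsub(/%/,""); print $5}')
echo "$usage"   # prints 87 -- the Use% column with the percent sign stripped
```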
Schedule the alert with a systemd timer
Create the service:
sudo nano /etc/systemd/system/docker-disk-alert.service
[Unit]
Description=Check Docker disk usage
[Service]
Type=oneshot
ExecStart=/usr/local/bin/docker-disk-alert.sh
Create the timer:
sudo nano /etc/systemd/system/docker-disk-alert.timer
[Unit]
Description=Check Docker disk usage every 6 hours
[Timer]
OnCalendar=*-*-* 00/6:00:00
Persistent=true
[Install]
WantedBy=timers.target
Enable it:
sudo systemctl daemon-reload
sudo systemctl enable --now docker-disk-alert.timer
Check it works:
sudo systemctl list-timers docker-disk-alert.timer
Troubleshooting
Disk is full and Docker will not start
If Docker refuses to start because the disk is completely full:
sudo truncate -s 0 /var/lib/docker/containers/*/*-json.log
sudo systemctl start docker
Then immediately configure log rotation as described above.
daemon.json syntax error prevents Docker from starting
Docker will not start if daemon.json contains invalid JSON. Validate the file:
sudo python3 -m json.tool /etc/docker/daemon.json
If this prints the formatted JSON, the syntax is valid. If it prints an error, fix the indicated line.
Check Docker's error message:
sudo journalctl -u docker.service -n 20 --no-pager
Logs still growing after configuring rotation
Rotation only applies to new containers. Existing containers keep their original config. Recreate them:
docker compose up -d --force-recreate
Verify the new config took effect:
docker inspect --format '{{.HostConfig.LogConfig}}' <container-name>
docker system prune did not free much space
docker system prune does not touch running containers, their logs, or named volumes. If the space problem is specifically logs, truncate the log files or configure rotation. If it is volumes, use docker volume prune after verifying no needed data will be lost.
Check what is consuming space:
sudo du -sh /var/lib/docker/*
This breaks down usage by Docker subsystem: containers (logs), overlay2 (images/layers), volumes, and others.
Summary
A complete Docker log management setup on a VPS has four layers:
- Global rotation in /etc/docker/daemon.json with max-size and max-file prevents unbounded log growth for all containers.
- Per-service overrides in Docker Compose give chatty services tighter limits.
- Automated cleanup with docker system prune on a systemd timer removes dead containers, unused images, and build cache.
- Disk monitoring with an alert script catches problems before they become outages.
After setting this up, verify it works: check docker inspect on your containers, run docker system df to confirm current usage, and wait for the prune timer to fire at least once. Check journalctl -u docker-prune.service to confirm it ran.
Copyright 2026 Virtua.Cloud. All rights reserved. This content is original work by the Virtua.Cloud team. Reproduction, republication, or redistribution without written permission is prohibited.