Docker Volume Backup and Restore on a VPS
Three backup strategies for Docker volumes on a VPS: tar snapshots, database-native dumps, and automated encrypted backups with offen/docker-volume-backup. Includes cron scheduling, off-site S3 copies with rclone, and a full restore test on a fresh server.
Docker named volumes hold your production data: databases, uploads, configuration state. The containers are disposable. The volumes are not. If a disk fails or you misconfigure a migration, the volumes are what you need back.
This guide covers three backup strategies, automates them with cron, ships copies off-server with rclone, and then proves the restore works by rebuilding on a fresh VPS. If you have not tested your restore, you do not have backups.
What you will learn:
- Tar-based volume snapshots (simple, universal)
- Database-native dumps with pg_dump and mysqldump (consistent, no downtime)
- Automated encrypted backups with offen/docker-volume-backup (scheduled, S3-ready)
- Cron automation with retention policies
- Off-server copies to S3-compatible storage via rclone
- Full disaster recovery restore on a fresh VPS
Prerequisites
You need a VPS running Debian 12 or Ubuntu 24.04 with Docker and Docker Compose v2 installed. This guide assumes you have a running Compose stack with at least one named volume. If you need to set that up first, see Docker Compose for Multi-Service VPS Deployments.
Verify your setup:
docker --version
# Docker version 28.x or newer
docker compose version
# Docker Compose version v2.x or newer
Check your existing volumes:
docker volume ls
The output lists every named volume on the system. Identify which ones hold data you care about. Use docker system df -v to see how much space each volume uses. This helps estimate backup sizes and storage needs.
Create a backup directory with restricted permissions:
mkdir -p /opt/backups/docker
chmod 700 /opt/backups/docker
Only root can read this directory. Backups often contain database credentials, session tokens, or user data.
How do I back up Docker volumes on a VPS?
There are three strategies, each with different tradeoffs. Pick based on what your volumes contain and how much downtime you can tolerate.
| Method | Downtime | Data consistency | Automation | Encryption | Best for |
|---|---|---|---|---|---|
| Tar snapshot | Brief (container stopped) | Filesystem-level | Manual or cron script | No (add GPG separately) | Static files, uploads, config |
| Database dump | None | Transaction-consistent | Manual or cron script | No (add GPG separately) | PostgreSQL, MySQL, MariaDB |
| offen/docker-volume-backup | Optional (configurable) | Filesystem-level | Built-in scheduler | Built-in GPG | Any volume, hands-off operation |
How do I create a tar backup of a Docker volume?
Stop the container using the volume, run a temporary Alpine container to tar the volume contents, then restart. This takes seconds for most volumes and works with any data type.
1. Stop the container that writes to the volume:
# Replace "app" with your service name from docker-compose.yml
docker compose stop app
Stopping the writer prevents partial writes during the archive. For read-only volumes or static files, you can skip this step.
2. Create the tar archive:
docker run --rm \
-v myapp_data:/source:ro \
-v /opt/backups/docker:/backup \
alpine tar czf /backup/myapp_data-$(date +%Y%m%d-%H%M%S).tar.gz -C /source .
What this does: spins up a throwaway Alpine container, mounts your volume as read-only at /source, mounts the backup directory at /backup, and creates a gzip-compressed tar archive. The --rm flag deletes the container when it finishes. The :ro flag prevents the backup process from accidentally writing to your data.
3. Restart the container:
docker compose start app
4. Verify the archive:
ls -lh /opt/backups/docker/myapp_data-*.tar.gz
The archive appears with its file size. A 500 MB volume of mostly text and configuration typically compresses to 60-120 MB; volumes full of images, video, or other already-compressed data barely shrink.
List the archive contents to confirm the files are there:
tar tzf /opt/backups/docker/myapp_data-20260319-120000.tar.gz | head -20
The paths start with ./ (no leading directory name) because we used -C /source . in the tar command. This matters during restore.
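To see why the leading ./ matters, here is a quick local experiment you can run anywhere; a throwaway directory stands in for the mounted volume, and --strip-components is a GNU tar option:

```shell
# Throwaway directory standing in for the mounted volume
workdir=$(mktemp -d)
mkdir "$workdir/source"
echo "hello" > "$workdir/source/app.conf"

# Relative entries (./app.conf): what the backup command above produces
tar czf "$workdir/relative.tar.gz" -C "$workdir/source" .
# Archiving the directory by name instead nests everything under source/
tar czf "$workdir/nested.tar.gz" -C "$workdir" source

tar tzf "$workdir/relative.tar.gz"   # ./ and ./app.conf
tar tzf "$workdir/nested.tar.gz"     # source/ and source/app.conf

# Restoring the nested form into a volume would leave /target/source/app.conf
# instead of /target/app.conf; --strip-components=1 drops the extra directory
mkdir "$workdir/restore"
tar xzf "$workdir/nested.tar.gz" -C "$workdir/restore" --strip-components=1
ls "$workdir/restore"                # app.conf
```

If you inherit archives made the nested way, add --strip-components=1 to the restore command rather than re-archiving everything.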
How do I back up a PostgreSQL database running in Docker?
Use pg_dump inside the running container. This produces a transaction-consistent dump without stopping the database. The custom format (-Fc) compresses the output and supports selective restore.
docker compose exec -T postgres pg_dump \
-U "$POSTGRES_USER" \
-Fc \
--no-owner \
--no-acl \
mydb > /opt/backups/docker/mydb-$(date +%Y%m%d-%H%M%S).dump
What this does: exec -T runs the command inside the running postgres container without allocating a TTY (required for piping output). -Fc selects custom format, which is compressed and supports pg_restore. --no-owner and --no-acl make the dump portable across different database users.
The $POSTGRES_USER variable should come from your environment file, not hardcoded. If your Compose stack uses an env file:
source /opt/myapp/.env
docker compose exec -T postgres pg_dump \
-U "$POSTGRES_USER" \
-Fc \
--no-owner \
--no-acl \
"$POSTGRES_DB" > /opt/backups/docker/"$POSTGRES_DB"-$(date +%Y%m%d-%H%M%S).dump
Verify the dump by piping it back through the container's pg_restore:
docker compose exec -T postgres pg_restore --list < /opt/backups/docker/mydb-20260319-120000.dump | head -10
This prints the table of contents without restoring anything. If the file is corrupt, pg_restore will error out. We use docker compose exec -T because pg_restore lives inside the container, not on the host (unless you install postgresql-client separately).
How do I back up a MySQL database running in Docker?
Use mysqldump with --single-transaction for InnoDB tables. This gives you a consistent snapshot without locking the database.
docker compose exec -T mysql mysqldump \
-u root \
-p"$MYSQL_ROOT_PASSWORD" \
--single-transaction \
--routines \
--triggers \
mydb > /opt/backups/docker/mydb-$(date +%Y%m%d-%H%M%S).sql
The -p flag has no space before the password. --single-transaction uses a consistent read for InnoDB tables. --routines and --triggers include stored procedures and triggers that mysqldump skips by default.
Verify the dump is not empty and ends with the completion marker:
tail -5 /opt/backups/docker/mydb-20260319-120000.sql
The output ends with -- Dump completed on YYYY-MM-DD HH:MM:SS. If the file is truncated or empty, the dump failed.
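That check can live in a small function you call from your backup script. The sketch below is self-contained: the sample dump file is synthetic, created only to demonstrate the check, and check_dump is a hypothetical helper name:

```shell
# Truncation check for a SQL dump: the file must be non-empty and must
# end with the "Dump completed" marker that mysqldump writes last.
check_dump() {
    [ -s "$1" ] || { echo "FAIL: $1 is empty"; return 1; }
    tail -1 "$1" | grep -q '^-- Dump completed' \
        || { echo "FAIL: $1 is truncated"; return 1; }
    echo "OK: $1 looks complete"
}

# Synthetic dump file, standing in for a real mysqldump output
dump=$(mktemp)
printf -- '-- MySQL dump\nCREATE TABLE t (id INT);\n-- Dump completed on 2026-03-19 12:00:00\n' > "$dump"
check_dump "$dump"
```

Because check_dump returns non-zero on failure, `check_dump "$file" || exit 1` in a `set -e` script aborts the run before a bad dump gets rotated into your retention window.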
How do I automate Docker volume backups with offen/docker-volume-backup?
offen/docker-volume-backup runs as a sidecar container in your Compose stack. It backs up mounted volumes on a schedule, optionally encrypts the archives with GPG, and can upload directly to S3-compatible storage. It can also stop containers during backup to ensure consistency.
Add the backup service to your docker-compose.yml:
services:
# ... your existing services ...
backup:
image: offen/docker-volume-backup:v2.47.2
restart: unless-stopped
env_file:
- ./backup.env
volumes:
- myapp_data:/backup/myapp_data:ro
- myapp_db:/backup/myapp_db:ro
- /opt/backups/docker:/archive
- /var/run/docker.sock:/var/run/docker.sock
labels:
- docker-volume-backup.stop-during-backup=false
What this does: mounts every volume you want backed up under /backup/ as read-only, mounts /archive for local backup storage, and mounts the Docker socket so the tool can stop and restart containers when configured to do so. The image tag v2.47.2 pins the version. Do not use latest in production.
Security note: mounting the Docker socket gives the backup container full control over Docker on the host. This is required for the stop-during-backup feature. If you do not need that feature, omit the socket mount entirely; the tool can still archive every volume you mount under /backup. Appending :ro to the socket mount is not a meaningful restriction: a read-only bind mount does not limit which API calls a connected client can issue.
Create the environment file with restricted permissions:
touch /opt/myapp/backup.env
chmod 600 /opt/myapp/backup.env
# /opt/myapp/backup.env
BACKUP_CRON_EXPRESSION=0 3 * * *
BACKUP_RETENTION_DAYS=7
BACKUP_COMPRESSION=gz
BACKUP_FILENAME=backup-%Y%m%dT%H%M%S.tar.gz
# GPG encryption (generate a strong passphrase)
GPG_PASSPHRASE=your-generated-passphrase-here
# S3-compatible storage (optional, see rclone section for alternative)
# AWS_S3_BUCKET_NAME=my-backups
# AWS_S3_PATH=myapp
# AWS_ENDPOINT=s3.eu-central-1.amazonaws.com
# AWS_ACCESS_KEY_ID=
# AWS_SECRET_ACCESS_KEY=
Generate the GPG passphrase:
openssl rand -base64 32
Store this passphrase somewhere safe outside the server. If you lose it, the encrypted backups are unrecoverable.
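When the time comes to restore, you decrypt the .gpg archive with that same passphrase. A local round trip shows the commands involved, assuming gpg is installed on the host; the passphrase and file here are throwaway stand-ins:

```shell
# Symmetric GPG round trip with a throwaway passphrase and file.
# offen/docker-volume-backup encrypts archives this way when
# GPG_PASSPHRASE is set.
passphrase="throwaway-demo-passphrase"
plain=$(mktemp)
echo "backup payload" > "$plain"

# Encrypt (the backup container normally does this step for you)
gpg --batch --yes --pinentry-mode loopback --passphrase "$passphrase" \
    --symmetric --cipher-algo AES256 -o "$plain.gpg" "$plain"

# Decrypt: the command you will actually need during disaster recovery
gpg --batch --yes --pinentry-mode loopback --passphrase "$passphrase" \
    -o "$plain.out" --decrypt "$plain.gpg"

cmp "$plain" "$plain.out" && echo "round trip OK"
```

Run the decrypt half against a real archive occasionally; an encrypted backup you cannot decrypt is indistinguishable from no backup.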
If you want the backup tool to stop specific containers during backup for filesystem consistency, add a label to those services:
services:
app:
# ... your config ...
labels:
- docker-volume-backup.stop-during-backup=true
Start the backup service:
docker compose up -d backup
Verify it is running:
docker compose logs backup
A log line confirms the cron schedule. Wait for the first scheduled run, or trigger a manual backup:
docker compose exec backup backup
Check the archive appeared:
ls -lh /opt/backups/docker/
How do I schedule Docker backups with cron?
For tar-based and database dump strategies, a shell script with cron handles scheduling and retention. The offen tool has its own scheduler, so skip this section if you use only that.
Create the backup script:
touch /opt/backups/docker-backup.sh
chmod 700 /opt/backups/docker-backup.sh
#!/usr/bin/env bash
# /opt/backups/docker-backup.sh
# Backs up Docker volumes and databases, removes old archives.
set -euo pipefail
BACKUP_DIR="/opt/backups/docker"
RETENTION_DAYS=7
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
COMPOSE_DIR="/opt/myapp"
# Load database credentials from env file
source "${COMPOSE_DIR}/.env"
cd "$COMPOSE_DIR"
# --- Tar backup of application data volume ---
docker compose stop app
docker run --rm \
-v myapp_data:/source:ro \
-v "${BACKUP_DIR}":/backup \
alpine tar czf "/backup/myapp_data-${TIMESTAMP}.tar.gz" -C /source .
docker compose start app
# --- PostgreSQL dump (no downtime) ---
docker compose exec -T postgres pg_dump \
-U "$POSTGRES_USER" \
-Fc --no-owner --no-acl \
"$POSTGRES_DB" > "${BACKUP_DIR}/${POSTGRES_DB}-${TIMESTAMP}.dump"
# --- Retention: delete backups older than N days ---
find "$BACKUP_DIR" -type f -name "*.tar.gz" -mtime +${RETENTION_DAYS} -delete
find "$BACKUP_DIR" -type f -name "*.dump" -mtime +${RETENTION_DAYS} -delete
echo "[$(date -Iseconds)] Backup completed successfully"
Verify the script runs without errors:
/opt/backups/docker-backup.sh
Check the output files:
ls -lh /opt/backups/docker/
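The retention step depends on find's -mtime semantics, which are easy to misread: -mtime +7 matches files whose age, truncated to whole 24-hour periods, is greater than 7, so only files at least 8 days old are deleted. A quick experiment with backdated files (GNU touch -d) confirms this:

```shell
# find -mtime +N matches files whose age, in whole 24-hour periods,
# is greater than N -- i.e. files at least N+1 days old.
dir=$(mktemp -d)
touch -d "2 days ago" "$dir/fresh.tar.gz"
touch -d "10 days ago" "$dir/stale.tar.gz"

find "$dir" -type f -name '*.tar.gz' -mtime +7 -print    # only stale.tar.gz
find "$dir" -type f -name '*.tar.gz' -mtime +7 -delete
ls "$dir"                                                # fresh.tar.gz
```

This off-by-one is harmless for retention (you keep slightly more than N days), but it matters if you ever tighten the window and expect same-day deletion.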
Add a cron entry that runs at 03:00 daily and logs output:
crontab -e
0 3 * * * /opt/backups/docker-backup.sh >> /var/log/docker-backup.log 2>&1
The 2>&1 redirects stderr to the same log file, so failures are captured. Check the log after the first run:
cat /var/log/docker-backup.log
If the script fails, cron silently swallows the error unless you redirect output. To get email alerts on failure, add this wrapper:
0 3 * * * /opt/backups/docker-backup.sh >> /var/log/docker-backup.log 2>&1 || echo "Docker backup failed on $(hostname)" | mail -s "BACKUP FAILED" you@example.com
This requires mailutils or a similar package. Adjust the recipient address.
How do I copy Docker backups to S3-compatible storage with rclone?
Local backups protect against application failures. They do not protect against disk failures or a compromised server. You need off-server copies. rclone works with any S3-compatible storage: AWS S3, Backblaze B2, Wasabi, MinIO, OVH Object Storage, Scaleway, and others.
Install rclone:
apt update && apt install -y rclone
Configure an S3-compatible remote:
rclone config
Follow the interactive prompts:
- Press n for a new remote
- Name it s3backup
- Choose s3 (Amazon S3 Compliant Storage Providers)
- Select your provider (or "Any other S3 compatible provider")
- Enter your access key and secret key
- Set the region and endpoint URL for your provider
- Leave other options at defaults
Verify the remote works:
rclone lsd s3backup:
This lists your buckets. If it fails, your credentials or endpoint are wrong.
Create a bucket for backups (if your provider supports it via rclone):
rclone mkdir s3backup:my-docker-backups
Sync your local backup directory to the bucket:
rclone sync /opt/backups/docker s3backup:my-docker-backups/$(hostname)/ \
--transfers 4 \
--checkers 8 \
--log-file /var/log/rclone-backup.log \
--log-level INFO
What this does: sync makes the remote match the local directory. Files deleted locally (by retention) get deleted remotely too. The $(hostname) prefix separates backups if you have multiple servers.
Verify the upload:
rclone ls s3backup:my-docker-backups/$(hostname)/
The backup files appear with sizes matching the local copies.
Add rclone sync to the backup script or as a separate cron entry that runs after the backup:
30 3 * * * rclone sync /opt/backups/docker s3backup:my-docker-backups/$(hostname)/ --transfers 4 --log-file /var/log/rclone-backup.log --log-level INFO
This runs at 03:30, giving the 03:00 backup job time to finish.
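A fixed 30-minute offset is a guess; if the backup overruns, the sync uploads half-written archives. A more robust pattern is to serialize the two jobs with flock (part of util-linux on Debian and Ubuntu), so the sync never starts while a backup still holds the lock. A minimal sketch with echo placeholders and an example lock path:

```shell
# flock -n acquires the lock or fails immediately, so an overlapping
# run skips this cycle instead of touching half-written archives.
flock -n /tmp/docker-backup.lock -c '
    echo "backup step here"
    echo "sync step here"
' || echo "previous run still in progress, skipping"
```

In cron you would wrap the real commands the same way, for example: `0 3 * * * flock -n /var/lock/docker-backup.lock -c '/opt/backups/docker-backup.sh && rclone sync /opt/backups/docker s3backup:my-docker-backups/$(hostname)/'`, which also guarantees the sync only runs after the backup script succeeds.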
Protect the rclone config: it contains your S3 credentials.
chmod 600 ~/.config/rclone/rclone.conf
ls -la ~/.config/rclone/rclone.conf
You should see -rw------- permissions. Only root can read this file.
Should I stop containers before backing up Docker volumes?
This depends on what the volume contains. Getting this wrong is the most common cause of corrupted backups.
Databases (PostgreSQL, MySQL, MongoDB): Never tar a running database volume. The files on disk represent an in-progress transaction state. A tar of these files is like photocopying a book while someone is rewriting chapters. The result is internally inconsistent. Use pg_dump, mysqldump, or mongodump instead. These tools produce a transaction-consistent snapshot while the database keeps running.
Application data (uploads, static files, config): Tar is safe if the application can tolerate a brief stop. If the app writes continuously and you cannot stop it, the tar may contain partially written files. For most web apps, a 2-second stop during a 3 AM backup is acceptable.
Redis, key-value stores: Redis writes RDB snapshots to disk periodically. Trigger a BGSAVE before tarring the volume, then wait for it to finish. This gives you a consistent snapshot without stopping Redis.
docker compose exec redis redis-cli LASTSAVE
docker compose exec redis redis-cli BGSAVE
# Re-run LASTSAVE until the timestamp advances past the first value;
# that confirms the snapshot is on disk
docker compose exec redis redis-cli LASTSAVE
The safe default: if you are unsure, stop the container, back up, restart. Brief downtime beats corrupted backups.
How do I restore Docker volumes on a new VPS?
This is the procedure that proves your backups work. Install Docker on a fresh server, transfer the backup files, recreate volumes, restore data, and verify the application runs.
1. Install Docker on the fresh VPS
apt update && apt install -y ca-certificates curl
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update && apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Check it works:
docker --version && docker compose version
2. Transfer backup files to the new server
From your local machine or the old server:
rsync -avz --progress /opt/backups/docker/ root@NEW_SERVER_IP:/opt/backups/docker/
Or download from S3:
# On the new server, install and configure rclone first
apt install -y rclone
# Re-run rclone config with the same credentials
rclone copy s3backup:my-docker-backups/OLD_HOSTNAME/ /opt/backups/docker/
Verify the files arrived:
ls -lh /opt/backups/docker/
3. Copy your Compose files and env files
rsync -avz /opt/myapp/ root@NEW_SERVER_IP:/opt/myapp/
Or restore them from your Git repository. Your docker-compose.yml and .env should be version-controlled. The .env file should be in .gitignore and backed up separately.
4. Restore the tar-based volume
# Create the volume (Docker Compose will also do this on first `up`,
# but creating it explicitly lets us restore data before starting services)
docker volume create myapp_data
# Restore from archive
docker run --rm \
-v myapp_data:/target \
-v /opt/backups/docker:/backup:ro \
alpine sh -c "cd /target && tar xzf /backup/myapp_data-20260319-030000.tar.gz"
What this does: creates the named volume, then runs a temporary container that extracts the archive into it. The backup directory is mounted read-only to prevent accidents.
Verify the restored data:
docker run --rm -v myapp_data:/data:ro alpine ls -la /data/
You should see the same files that were in the original volume.
5. Restore the PostgreSQL database
Start only the database container:
cd /opt/myapp
docker compose up -d postgres
Wait for it to be ready:
docker compose logs -f postgres
# Wait until you see "database system is ready to accept connections"
Restore the dump. Source the env file first so $POSTGRES_USER and $POSTGRES_DB are set in your shell:
source .env
docker compose exec -T postgres pg_restore \
-U "$POSTGRES_USER" \
-d "$POSTGRES_DB" \
--clean \
--if-exists \
--no-owner \
--no-acl \
< /opt/backups/docker/mydb-20260319-030000.dump
What this does: --clean drops existing objects before recreating them. --if-exists prevents errors if the objects do not exist yet. This makes the restore idempotent.
Verify the data:
docker compose exec postgres psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "\dt"
You should see your tables listed. Run a quick row count on a known table:
docker compose exec postgres psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SELECT count(*) FROM your_table;"
6. Start all services and verify
docker compose up -d
Check all containers are running:
docker compose ps
Every service should show Up or running. If any container is restarting, check its logs:
docker compose logs --tail 50 service_name
Test the application from outside the server. From your local machine:
curl -I http://NEW_SERVER_IP:PORT
You should get a valid HTTP response. If the app has a health check endpoint, hit that:
curl http://NEW_SERVER_IP:PORT/health
For more on health checks, see Docker Compose Resource Limits, Healthchecks, and Restart Policies.
How do I verify that a Docker volume backup is valid?
A backup you have never tested is a liability. Run these checks regularly, not just during disaster recovery.
Check archive integrity:
# For tar.gz files
gzip -t /opt/backups/docker/myapp_data-20260319-030000.tar.gz && echo "OK" || echo "CORRUPT"
Check archive contents:
tar tzf /opt/backups/docker/myapp_data-20260319-030000.tar.gz | wc -l
Compare the file count with a known good backup. A sudden drop in file count indicates a problem.
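That comparison is easy to script. The sketch below builds two synthetic archives so it is self-contained; the 20% threshold is an arbitrary example to tune against your own data:

```shell
# Build two synthetic archives, then flag a sharp drop in entry count.
dir=$(mktemp -d)
mkdir "$dir/data"
touch "$dir/data/f1" "$dir/data/f2" "$dir/data/f3" "$dir/data/f4"
tar czf "$dir/old.tar.gz" -C "$dir/data" .
rm "$dir/data/f3" "$dir/data/f4"
tar czf "$dir/new.tar.gz" -C "$dir/data" .

old=$(tar tzf "$dir/old.tar.gz" | wc -l)
new=$(tar tzf "$dir/new.tar.gz" | wc -l)
# Warn if the new archive has less than 80% of the old entry count
if [ $((new * 100)) -lt $((old * 80)) ]; then
    echo "WARNING: entry count dropped from $old to $new"
fi
```

Pointed at your real backup directory, the same comparison between the two newest archives catches a backup that silently started archiving an empty or wrong path.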
Test restore to a throwaway volume:
docker volume create test_restore
docker run --rm \
-v test_restore:/target \
-v /opt/backups/docker:/backup:ro \
alpine sh -c "cd /target && tar xzf /backup/myapp_data-20260319-030000.tar.gz"
# Inspect the restored data
docker run --rm -v test_restore:/data:ro alpine ls -la /data/
# Clean up
docker volume rm test_restore
Verify a database dump:
docker compose exec -T postgres pg_restore --list < /opt/backups/docker/mydb-20260319-030000.dump | wc -l
If this returns a number of objects (tables, indexes, sequences), the dump is readable. If it errors, the file is corrupt.
Generate checksums for long-term storage:
sha256sum /opt/backups/docker/*.tar.gz /opt/backups/docker/*.dump > /opt/backups/docker/checksums-$(date +%Y%m%d).sha256
Upload the checksum file alongside the backups. Before restoring, verify:
sha256sum -c /opt/backups/docker/checksums-20260319.sha256
Troubleshooting
"Permission denied" when creating tar archive:
The temporary container runs as root by default, so this usually means the backup directory does not exist or has wrong permissions. Run ls -la /opt/backups/ and verify the docker subdirectory exists with 700 permissions.
pg_dump/pg_restore hangs:
You probably forgot the -T flag in docker compose exec. Without -T, exec tries to allocate a TTY, which blocks when piping output. Use docker compose exec -T.
Backup files are 0 bytes:
The container wrote to a different path than expected. Double-check the volume name in docker volume ls matches what you used in the -v flag. Named volumes are case-sensitive.
rclone sync times out:
Large initial syncs may exceed default timeouts. Add --timeout 30m and --retries 3 to the rclone command.
offen/docker-volume-backup not running on schedule:
Check the BACKUP_CRON_EXPRESSION syntax. The tool uses standard 5-field cron syntax. Run docker compose logs backup and look for parse errors.
Restored database has wrong permissions:
You used a dump without --no-owner. The dump tries to set ownership to the original user, which may not exist on the new server. Re-dump with --no-owner --no-acl or run REASSIGN OWNED BY old_user TO new_user; in psql.
Where to go next
- Docker in Production on a VPS: What Breaks and How to Fix It for the full Docker production setup
- Docker Compose Resource Limits, Healthchecks, and Restart Policies for health checks that confirm services are alive after restore
- Docker Commands Cheatsheet for Docker CLI basics
Copyright 2026 Virtua.Cloud. All rights reserved. This content is original work by the Virtua.Cloud team. Reproduction, republication, or redistribution without written permission is prohibited.