Backup and Update n8n in Production (Docker Compose + PostgreSQL)
A day-two operations guide for self-hosted n8n: automated PostgreSQL backups, encryption key protection, off-server copies with rclone, disaster recovery from a fresh VPS, safe Docker Compose updates, rollback, and the 1.x to 2.x migration path.
You got n8n running on your VPS with Docker Compose and PostgreSQL. That was day one. Day two is keeping it alive: automated backups, tested restores, and safe updates.
This guide covers what you need to protect a production n8n instance. We will build a backup script, automate it with cron, copy backups off-server, walk through a disaster recovery on a fresh VPS, and update n8n safely with rollback support. We also cover the 1.x to 2.x migration path.
This guide assumes you followed our n8n Docker Compose installation guide to set up n8n. Your stack runs Docker Compose with PostgreSQL, a non-root deploy user, and a reverse proxy.
What data does n8n store that needs backing up?
n8n stores five components that you must back up together. Missing any one of them can make a restore partial or impossible. The PostgreSQL database holds your workflows, credentials (encrypted), execution history, and user accounts. The encryption key decrypts those credentials. The Docker volume stores custom nodes and binary data. Your .env and docker-compose.yml files define the runtime configuration.
| Component | What it contains | Risk if lost | Backup method |
|---|---|---|---|
| PostgreSQL database | Workflows, encrypted credentials, execution history, users | All data gone | pg_dump (custom format) |
| N8N_ENCRYPTION_KEY | AES-256 key for credential decryption | All saved credentials become permanently unrecoverable | Copy from .env or container config |
| Docker volume (.n8n) | Custom nodes, binary execution data | Custom nodes and file uploads lost | Alpine tar container |
| .env file | Database passwords, encryption key, domain config | Must recreate manually | File copy |
| docker-compose.yml | Service definitions, volume mappings, image tags | Must recreate manually | File copy |
How do you back up the n8n PostgreSQL database with pg_dump?
Use pg_dump with --format=custom to create a compressed, restorable dump of the n8n database. Custom format supports selective restore and parallel processing. Run the dump through the running PostgreSQL container.
docker exec n8n-postgres \
pg_dump -U n8n -d n8n --format=custom \
> /home/deploy/n8n-backups/n8n-db-$(date +%Y%m%d-%H%M%S).dump
What this does: docker exec runs pg_dump inside the PostgreSQL container. -U n8n is the database user. -d n8n is the database name. --format=custom produces a compressed binary dump that pg_restore can selectively restore. The output is redirected to a timestamped file on the host.
Verify the dump is valid:
cat /home/deploy/n8n-backups/n8n-db-*.dump | docker exec -i n8n-postgres pg_restore --list | head -20
Since pg_restore lives inside the PostgreSQL container, pipe the dump through docker exec -i. You should see a list of database objects (tables, sequences, data). If you see an error or empty output, the dump failed.
Back up the Docker volume
The n8n data volume contains custom nodes and binary execution data. Back it up with a temporary Alpine container:
docker run --rm \
-v n8n_n8n_data:/source:ro \
-v /home/deploy/n8n-backups:/backup \
alpine tar czf "/backup/n8n-data-$(date +%Y%m%d-%H%M%S).tar.gz" -C /source .
The volume name n8n_n8n_data is composed of the Docker Compose project name (n8n, from the directory) plus the volume name (n8n_data, from docker-compose.yml). Run docker volume ls if yours differs.
What this does: Mounts the n8n data volume as read-only (:ro) into a disposable Alpine container. The container creates a compressed tarball of the volume contents and writes it to the backup directory. The container is removed after completion (--rm).
Verify the archive:
tar tzf /home/deploy/n8n-backups/n8n-data-*.tar.gz | head -10
You should see file paths from the .n8n directory.
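The volume-name rule mentioned above can be sketched in plain bash. The project and volume names here match this guide's setup; yours may differ:

```shell
# Sketch: how Docker Compose derives the on-disk volume name.
# The project name defaults to the compose file's directory name unless
# overridden with COMPOSE_PROJECT_NAME or the -p flag.
COMPOSE_PROJECT="n8n"      # from the directory /home/deploy/n8n
VOLUME_KEY="n8n_data"      # the key under volumes: in docker-compose.yml
FULL_NAME="${COMPOSE_PROJECT}_${VOLUME_KEY}"
echo "$FULL_NAME"
```

If the printed name does not match what `docker volume ls` shows, use the listed name in the backup commands.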
Back up config files
BACKUP_DIR="/home/deploy/n8n-backups"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
cp /home/deploy/n8n/.env "$BACKUP_DIR/env-$TIMESTAMP.bak"
cp /home/deploy/n8n/docker-compose.yml "$BACKUP_DIR/docker-compose-$TIMESTAMP.yml.bak"
chmod 600 "$BACKUP_DIR/env-$TIMESTAMP.bak"
The .env file contains your N8N_ENCRYPTION_KEY and database password. Restrict permissions to 600 so only the backup user can read it.
Why is the n8n encryption key the most important backup?
n8n encrypts every saved credential (API keys, OAuth tokens, database passwords) with AES-256 using the N8N_ENCRYPTION_KEY. If you lose this key, every credential in your database becomes permanently unrecoverable. There is no reset mechanism. No backdoor. You would need to manually re-enter every credential in every workflow.
The key is either set explicitly in your .env file or auto-generated by n8n on first start and stored inside the container at /home/node/.n8n/config.
Find your encryption key:
If you set it in .env:
grep N8N_ENCRYPTION_KEY /home/deploy/n8n/.env
If n8n auto-generated it (you never set it explicitly):
docker exec n8n cat /home/node/.n8n/config
Look for the encryptionKey field in the JSON output.
Store the key safely:
- Copy the key value to a password manager (Bitwarden, KeePass, 1Password).
- Ensure it exists in your .env file as N8N_ENCRYPTION_KEY=<your-key>.
- Your backup script already copies .env, but store the key separately too. If your server burns down and your backups are corrupted, you can still recreate credentials if you have the key.
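To see why the key is irreplaceable, here is a small openssl demonstration of symmetric encryption. n8n handles AES-256 internally; this is only an illustration of the principle, not n8n's actual scheme:

```shell
# Sketch: ciphertext without the original key is unrecoverable.
tmp=$(mktemp -d)
echo "my-api-token" > "$tmp/credential.txt"
openssl enc -aes-256-cbc -pbkdf2 -salt -k "the-original-key" \
  -in "$tmp/credential.txt" -out "$tmp/credential.enc"

# Correct key: the plaintext comes back
good=$(openssl enc -d -aes-256-cbc -pbkdf2 -k "the-original-key" -in "$tmp/credential.enc")
echo "with key:    $good"

# Wrong key: decryption fails; the data is gone
if ! openssl enc -d -aes-256-cbc -pbkdf2 -k "wrong-key" \
     -in "$tmp/credential.enc" -out /dev/null 2>/dev/null; then
  echo "without key: unrecoverable"
fi
rm -rf "$tmp"
```

The same applies to every credential row in your database: without N8N_ENCRYPTION_KEY, the ciphertext is all you have.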
How do you automate n8n backups with cron?
Combine the database dump, volume backup, and config copy into a single script. Add retention cleanup and a health check ping to catch failures.
Create the backup script:
nano /home/deploy/n8n/backup-n8n.sh
#!/usr/bin/env bash
set -euo pipefail
# --- Configuration ---
BACKUP_DIR="/home/deploy/n8n-backups"
N8N_DIR="/home/deploy/n8n"
COMPOSE_PROJECT="n8n"
DB_CONTAINER="n8n-postgres" # Matches container_name in docker-compose.yml
DB_USER="n8n"
DB_NAME="n8n"
VOLUME_NAME="${COMPOSE_PROJECT}_n8n_data" # docker compose prefixes project name
RETENTION_DAYS=7
RETENTION_WEEKS=4
HEALTHCHECK_URL="" # Set to your healthcheck.io or uptime-kuma URL
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
DAY_OF_WEEK=$(date +%u)
mkdir -p "$BACKUP_DIR/daily" "$BACKUP_DIR/weekly"
# --- PostgreSQL dump ---
echo "[$(date)] Starting PostgreSQL backup..."
docker exec "$DB_CONTAINER" \
pg_dump -U "$DB_USER" -d "$DB_NAME" --format=custom \
> "$BACKUP_DIR/daily/n8n-db-$TIMESTAMP.dump"
# Verify dump is not empty
if [ ! -s "$BACKUP_DIR/daily/n8n-db-$TIMESTAMP.dump" ]; then
echo "[$(date)] ERROR: Database dump is empty!" >&2
exit 1
fi
# --- Docker volume backup ---
echo "[$(date)] Starting volume backup..."
docker run --rm \
-v "${VOLUME_NAME}:/source:ro" \
-v "$BACKUP_DIR/daily:/backup" \
alpine tar czf "/backup/n8n-data-$TIMESTAMP.tar.gz" -C /source .
# --- Config files ---
echo "[$(date)] Backing up config files..."
cp "$N8N_DIR/.env" "$BACKUP_DIR/daily/env-$TIMESTAMP.bak"
cp "$N8N_DIR/docker-compose.yml" "$BACKUP_DIR/daily/docker-compose-$TIMESTAMP.yml.bak"
chmod 600 "$BACKUP_DIR/daily/env-$TIMESTAMP.bak"
# --- Weekly copy (Sundays) ---
if [ "$DAY_OF_WEEK" -eq 7 ]; then
echo "[$(date)] Creating weekly backup copy..."
cp "$BACKUP_DIR/daily/n8n-db-$TIMESTAMP.dump" "$BACKUP_DIR/weekly/"
cp "$BACKUP_DIR/daily/n8n-data-$TIMESTAMP.tar.gz" "$BACKUP_DIR/weekly/"
cp "$BACKUP_DIR/daily/env-$TIMESTAMP.bak" "$BACKUP_DIR/weekly/"
fi
# --- Retention cleanup ---
echo "[$(date)] Cleaning old backups..."
find "$BACKUP_DIR/daily" -type f -mtime +$RETENTION_DAYS -delete
find "$BACKUP_DIR/weekly" -type f -mtime +$((RETENTION_WEEKS * 7)) -delete
# --- Health check ping ---
if [ -n "$HEALTHCHECK_URL" ]; then
curl -fsS --retry 3 "$HEALTHCHECK_URL" > /dev/null
fi
echo "[$(date)] Backup complete."
Set permissions and test:
chmod 750 /home/deploy/n8n/backup-n8n.sh
/home/deploy/n8n/backup-n8n.sh
Verify the backup files were created:
ls -lh /home/deploy/n8n-backups/daily/
Sharp eyes: check the file sizes. A typical n8n database dump is 500 KB to 50 MB depending on execution history. If the dump is 0 bytes, something went wrong.
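That size check can be automated. A sketch, demonstrated here on scratch files; in production, point the glob at /home/deploy/n8n-backups/daily:

```shell
# Sketch: flag empty dumps automatically (filenames are simulated for the demo)
tmp=$(mktemp -d)
: > "$tmp/n8n-db-20260101-030000.dump"                   # simulate a failed 0-byte dump
printf 'PGDMP...' > "$tmp/n8n-db-20260102-030000.dump"   # simulate a healthy one
bad=0
for f in "$tmp"/n8n-db-*.dump; do
  if [ ! -s "$f" ]; then       # -s: true if the file exists and is non-empty
    echo "WARNING: $(basename "$f") is empty"
    bad=$((bad + 1))
  fi
done
echo "$bad empty dump(s) found"
rm -rf "$tmp"
```

Running this from cron after the backup gives you a second line of defense beyond the script's own empty-dump check.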
Schedule with cron
crontab -e
Add this line to run backups daily at 03:00:
0 3 * * * /home/deploy/n8n/backup-n8n.sh >> /home/deploy/n8n-backups/backup.log 2>&1
What this does: Runs the backup script at 3 AM every day. Output and errors are appended to backup.log so you can troubleshoot failures.
Backup failure alerting
The HEALTHCHECK_URL variable in the script supports services like Healthchecks.io or a self-hosted Uptime Kuma instance. These services expect a ping at a regular interval. If the ping stops (because the script failed or the cron didn't run), you get an alert.
Set up a check with a 24-hour period and a 1-hour grace period. If the backup script doesn't ping by 04:00, you get notified.
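Some monitors also accept an explicit failure signal. A sketch of choosing the ping target from the script's exit code; the /fail suffix is Healthchecks.io's convention, and the UUID in the URL is a placeholder:

```shell
# Sketch: compute the ping URL from the backup script's exit code.
ping_target() {
  local base="$1" rc="$2"
  if [ "$rc" -eq 0 ]; then
    printf '%s\n' "$base"          # success: ping the base URL
  else
    printf '%s\n' "$base/fail"     # failure: ping the /fail endpoint
  fi
}

# In cron you would wrap the script along these lines:
#   /home/deploy/n8n/backup-n8n.sh; curl -fsS "$(ping_target "$HC_URL" $?)" > /dev/null
ping_target "https://hc-ping.com/your-uuid" 0
ping_target "https://hc-ping.com/your-uuid" 1
```

With this wrapping you are alerted immediately on a failed run, not just when the next expected ping never arrives.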
How do you copy n8n backups off-server with rclone?
Backups on the same server as n8n are not real backups. If the disk fails, you lose both. Use rclone to copy backups to S3-compatible object storage (Wasabi, Backblaze B2, MinIO, or any S3 provider).
Install rclone:
sudo apt install rclone
Configure a remote. This example uses any S3-compatible storage:
rclone config
Follow the interactive prompts to create a remote named n8n-backup. Choose "Amazon S3 Compliant Storage Providers" and enter your endpoint, access key, and secret key.
Verify the remote works:
rclone lsd n8n-backup:
You should see your bucket listed.
Add the sync to your backup script. Insert this before the health check ping line:
# --- Off-server copy ---
echo "[$(date)] Syncing to remote storage..."
rclone sync "$BACKUP_DIR" n8n-backup:your-bucket-name/n8n \
--transfers 4 \
--min-age 1m \
--log-level ERROR
--min-age 1m avoids uploading files that are still being written. --transfers 4 runs four parallel uploads. Note that rclone sync makes the destination mirror the source, so when local retention cleanup deletes old backups, the next sync deletes them from the bucket too. Use rclone copy instead if you want the remote to retain files longer than the local retention window.
How do you verify that n8n backups are valid?
A backup you have never tested is a backup you do not have. Run these checks periodically.
Database dump integrity:
cat /home/deploy/n8n-backups/daily/n8n-db-*.dump | docker exec -i n8n-postgres pg_restore --list > /dev/null 2>&1 && echo "OK" || echo "CORRUPT"
Volume archive integrity:
tar tzf /home/deploy/n8n-backups/daily/n8n-data-*.tar.gz > /dev/null 2>&1 && echo "OK" || echo "CORRUPT"
File checksums for remote verification:
Generate checksums after backup:
sha256sum /home/deploy/n8n-backups/daily/* > /home/deploy/n8n-backups/daily/checksums.sha256
After downloading from remote storage, verify:
sha256sum -c checksums.sha256
Every line should say OK. Any mismatch means the file was corrupted during transfer.
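The full generate-then-verify round-trip looks like this, demonstrated in a scratch directory:

```shell
# Sketch: checksum round-trip on a simulated backup file
tmp=$(mktemp -d)
echo "backup payload" > "$tmp/n8n-db-test.dump"

# Generate the checksum file next to the backup (as the backup host does)
( cd "$tmp" && sha256sum n8n-db-test.dump > checksums.sha256 )

# Verify against it (as you would after downloading from remote storage)
result=$(cd "$tmp" && sha256sum -c checksums.sha256)
echo "$result"
rm -rf "$tmp"
```

Checksums are generated relative to the backup directory, so always run `sha256sum -c` from inside the directory containing the downloaded files.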
How do you restore n8n from backup on a fresh VPS?
This is the disaster recovery procedure. You lost your server. You have a fresh VPS and your backup files (from remote storage or local copy). Here is how to get n8n running again.
1. Provision a new VPS and install Docker
Set up a VPS with Docker and Docker Compose. Follow our n8n Docker Compose installation guide through the Docker installation step, then come back here.
2. Create the deploy user and project directory
adduser --disabled-password --gecos "" deploy
mkdir -p /home/deploy/n8n /home/deploy/n8n-backups
chown deploy:deploy /home/deploy/n8n /home/deploy/n8n-backups
3. Download your backups
From your remote storage:
su - deploy
rclone copy n8n-backup:your-bucket-name/n8n/daily /home/deploy/n8n-backups/daily --progress
Or from another server via scp:
scp user@old-server:/home/deploy/n8n-backups/daily/* /home/deploy/n8n-backups/daily/
4. Restore config files
cp /home/deploy/n8n-backups/daily/env-*.bak /home/deploy/n8n/.env
cp /home/deploy/n8n-backups/daily/docker-compose-*.yml.bak /home/deploy/n8n/docker-compose.yml
chmod 600 /home/deploy/n8n/.env
If you downloaded more than one day's backups, each glob matches multiple files and cp will fail; substitute the exact timestamped filenames of the backup you want to restore.
Verify the encryption key is present:
grep N8N_ENCRYPTION_KEY /home/deploy/n8n/.env
Sharp eyes: this must return a line with your key. If it is empty or missing, find the key from your password manager and add it manually. Without this key, your credentials are gone.
5. Start PostgreSQL only
cd /home/deploy/n8n
docker compose up -d postgres
Wait for PostgreSQL to initialize:
docker compose logs -f postgres
Look for database system is ready to accept connections.
6. Restore the database
docker cp /home/deploy/n8n-backups/daily/n8n-db-*.dump n8n-postgres:/tmp/n8n.dump
docker exec n8n-postgres pg_restore -U n8n -d n8n --clean --if-exists /tmp/n8n.dump
What this does: Copies the dump file into the container, then pg_restore loads it. --clean drops existing objects before restoring. --if-exists prevents errors if objects do not exist yet. If more than one dump file is present, replace the glob with the exact filename; docker cp cannot copy multiple sources to a single destination file.
You may see some warnings about objects not existing. That is normal on a fresh database. Errors about data or constraints are not normal.
7. Restore the Docker volume
Docker Compose already created the n8n_n8n_data volume when you started PostgreSQL in the previous step. Restore into it (if multiple archives were downloaded, reference the exact filename instead of the glob):
docker run --rm \
-v n8n_n8n_data:/target \
-v /home/deploy/n8n-backups/daily:/backup:ro \
alpine sh -c "tar xzf /backup/n8n-data-*.tar.gz -C /target"
8. Start the full stack
docker compose up -d
9. Verify the restore
docker compose ps
All containers should show running status.
Check n8n logs:
docker compose logs -f n8n
Look for Editor is now accessible via: http://localhost:5678. No encryption key errors.
Open the n8n web UI and verify:
- Workflows exist and match what you had before.
- Credentials work. Open any credential and confirm it shows the stored values (API keys, passwords). If you see empty fields or decryption errors, your encryption key is wrong.
- Run a test workflow. Execute a simple workflow to confirm database connectivity and execution engine work.
Test from outside the server:
curl -s -o /dev/null -w "%{http_code}\n" https://your-n8n-domain.com/healthz
A 200 response confirms n8n is reachable and running.
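A single curl right after startup can race n8n's boot. A sketch of a retry loop; wait_for is a hypothetical helper, demonstrated here with stub commands in place of the real health check:

```shell
# Sketch: retry a command until it succeeds or attempts run out.
# Real usage would be:
#   wait_for 30 curl -fsS https://your-n8n-domain.com/healthz
wait_for() {
  local attempts="$1"; shift
  local i
  for i in $(seq 1 "$attempts"); do
    if "$@" > /dev/null 2>&1; then
      return 0               # command succeeded
    fi
    sleep 1                  # brief pause between attempts
  done
  return 1                   # gave up
}

wait_for 3 true  && echo "healthy"
wait_for 2 false || echo "gave up after retries"
```

Thirty attempts with a short sleep covers the usual window for database migrations to finish before the editor comes up.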
How do you safely update n8n with Docker Compose?
Updating n8n follows a four-step process: back up, check release notes, pull the new image, and verify. Never skip the backup step.
1. Run a backup
/home/deploy/n8n/backup-n8n.sh
This is mandatory. If the update breaks something, you need a restore point.
2. Check release notes for breaking changes
Before pulling the new version, read the n8n release notes. Look for:
- Database migrations (these run automatically on startup but can take time)
- Deprecated nodes that your workflows use
- Changed environment variables
- Minimum PostgreSQL version requirements
3. Note your current version
docker compose exec n8n n8n --version
Write this down. You need it for rollback.
4. Pull and restart
If your docker-compose.yml uses the n8nio/n8n:latest or n8nio/n8n:stable tag:
cd /home/deploy/n8n
docker compose pull
docker compose down
docker compose up -d
If you pin to a specific version (recommended for production), edit docker-compose.yml first:
services:
n8n:
image: n8nio/n8n:2.12.3 # Change to the target version
Then:
docker compose up -d
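The tag bump itself can be scripted. A sketch using sed, run here on a scratch copy of the compose file; the version numbers are the examples from this guide:

```shell
# Sketch: bump a pinned n8n image tag in docker-compose.yml.
tmp=$(mktemp -d)
cat > "$tmp/docker-compose.yml" <<'EOF'
services:
  n8n:
    image: n8nio/n8n:2.11.2
EOF

# Record the current tag first: you need it for rollback
old=$(grep -o 'n8nio/n8n:[0-9][0-9.]*' "$tmp/docker-compose.yml")
echo "current tag: $old"

# Pin the new version in place
sed -i 's|n8nio/n8n:[0-9][0-9.]*|n8nio/n8n:2.12.3|' "$tmp/docker-compose.yml"
new=$(grep -o 'n8nio/n8n:[0-9][0-9.]*' "$tmp/docker-compose.yml")
echo "pinned tag:  $new"
rm -rf "$tmp"
```

Pair this with your pre-update backup and the recorded old tag, and a rollback is a one-line sed away.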
5. Verify the update
docker compose exec n8n n8n --version
Confirm the version matches what you expected.
Check logs for migration errors:
docker compose logs --tail 50 n8n
Look for Migrations completed and Editor is now accessible via: http://localhost:5678.
Test the API health endpoint:
curl -s -o /dev/null -w "%{http_code}\n" https://your-n8n-domain.com/healthz
Run a test workflow through the UI to confirm execution works.
How do you roll back a failed n8n update?
If the update breaks workflows or the UI is unreachable, roll back to the previous version.
1. Stop the broken instance
cd /home/deploy/n8n
docker compose down
2. Set the previous image tag
Edit docker-compose.yml and change the image to your previous version:
services:
n8n:
image: n8nio/n8n:2.11.2 # Your previous working version
3. Restore the database if migrations ran
If n8n ran database migrations during the failed update, the schema may have changed. Restore from your pre-update backup:
docker compose up -d postgres
docker cp /home/deploy/n8n-backups/daily/n8n-db-*.dump n8n-postgres:/tmp/n8n.dump
docker exec n8n-postgres pg_restore -U n8n -d n8n --clean --if-exists /tmp/n8n.dump
4. Start the old version
docker compose up -d
Verify:
docker compose exec n8n n8n --version
docker compose logs --tail 20 n8n
This is why we back up before every update. Without the pre-update dump, you cannot safely downgrade after a migration.
How do you migrate n8n from version 1.x to 2.x?
n8n 2.0 shipped in December 2025 with breaking changes. If you are still running 1.x, migrate soon. Version 1.x received security fixes only for three months after the 2.0 release, and that window has now closed.
Key breaking changes in n8n 2.0
| Change | 1.x behavior | 2.x behavior | Action required |
|---|---|---|---|
| MySQL/MariaDB support | Supported | Dropped entirely | Migrate to PostgreSQL before upgrading |
| Task runners | Optional | Enabled by default, separate Docker image | Use n8nio/runners image for external mode |
| Code node env vars | Accessible | Blocked by default | Set N8N_BLOCK_ENV_ACCESS_IN_NODE=false if needed |
| Start node | Functional | Removed | Replace with specific trigger nodes |
| Save vs. Publish | Save = deploy | Save and Publish are separate actions | Update team workflows |
| Python Code node | Pyodide-based | Native Python via task runners | Use external mode task runners |
| ExecuteCommand and LocalFileTrigger nodes | Enabled | Disabled by default | Enable explicitly if needed |
| --tunnel CLI option | Available | Removed | Use a reverse proxy instead |
| N8N_CONFIG_FILES | Supported | Removed | Use environment variables directly |
Step 1: Run the migration report
The migration report tool is available in n8n 1.121.0 and later. It identifies workflow-level and instance-level issues before you upgrade.
In the n8n UI, go to Settings > Migration Report. This is only visible to global admins.
The report has two tabs:
- Workflow Issues: Lists workflows using deprecated nodes, removed features, or changed behaviors.
- Instance Issues: Flags environment variables and configuration that need updating.
Fix every issue the report identifies before proceeding.
Step 2: Update to the latest 1.x first
Before jumping to 2.x, update to the latest 1.x release. This ensures all intermediate database migrations run:
docker compose pull # With image set to n8nio/n8n:1-latest or latest 1.x tag
docker compose down && docker compose up -d
Step 3: Back up everything
/home/deploy/n8n/backup-n8n.sh
This is your rollback point. If 2.x breaks your setup, you can restore this backup and stay on 1.x.
Step 4: Update to 2.x
Edit docker-compose.yml:
services:
n8n:
image: n8nio/n8n:stable # Or a specific 2.x version like 2.12.3
docker compose pull
docker compose down
docker compose up -d
Step 5: Verify the migration
docker compose logs --tail 100 n8n
Look for completed migrations and no errors.
Test workflows that the migration report flagged. Open credentials to confirm decryption works. Run a few workflows end-to-end.
If anything is broken, roll back using the procedure in the previous section with your 1.x backup and the 1.x image tag.
Troubleshooting
pg_dump: error: connection to server failed
The PostgreSQL container is not running. Start it with docker compose up -d postgres and wait for it to be ready before running backups.
Credentials show as empty after restore
Your N8N_ENCRYPTION_KEY does not match the one used when credentials were saved. Check your .env file. Compare it with the key in your password manager.
docker exec fails with "no such container"
Container names depend on your project directory name and compose file. Run docker ps to find the actual container name. Adjust the DB_CONTAINER variable in the backup script.
Backup script runs but healthcheck never pings
Check that HEALTHCHECK_URL is set in the script. Test the URL manually: curl -fsS "your-healthcheck-url". Firewall rules may be blocking outbound HTTPS.
n8n fails to start after update with migration error
Check logs with docker compose logs n8n. If the error mentions a specific migration, search the n8n community forum for that error. Roll back if needed.
rclone sync fails with authentication error
rclone config reconnect only applies to OAuth-based remotes. For S3-style remotes, re-run rclone config and update the access key and secret. Signature authentication also fails if the server clock drifts, so check the time with timedatectl.
Where logs live:
# n8n application logs
docker compose logs -f n8n
# PostgreSQL logs
docker compose logs -f postgres
# Backup script log
tail -f /home/deploy/n8n-backups/backup.log
# Cron execution log
grep CRON /var/log/syslog | tail -20
Copyright 2026 Virtua.Cloud. All rights reserved. This content is original work by the Virtua.Cloud team. Reproduction, republication, or redistribution without written permission is prohibited.