Offsite Backup and Replication with Plakar
Replicate plakar snapshots to a remote server over HTTPS or to S3-compatible object storage. Automate backups and offsite sync with plakar's built-in scheduler running as a systemd service on Debian 12 or Ubuntu 24.04.
Why do you need offsite backups?
Local backups protect against accidental deletion and file corruption. They do not protect against disk failure, ransomware, or a compromised server. If your backups sit on the same machine as the data, a single event wipes both. The 3-2-1 backup rule says: three copies, two different media, one offsite. This guide covers the offsite part.
This article picks up where Back Up Your Linux VPS with Plakar left off. You should already have plakar installed on a Debian 12 or Ubuntu 24.04 VPS, a local Kloset store at /var/backups/plakar, and at least one snapshot. If not, start with that guide first.
Two offsite options are built into plakar. Path A replicates snapshots to a second server running plakar server over HTTPS. Path B pushes snapshots to S3-compatible object storage. Both use plakar sync and keep the Kloset encryption intact end to end. Pick one path or both. The automation section at the end applies to either.
How do you replicate plakar snapshots to a remote server?
Run plakar server on a second VPS to expose a Kloset store over HTTP. Put Caddy in front for automatic TLS. From the source machine, build the HTTP integration plugin, add the remote as a named store, and push snapshots with plakar sync to. Data stays encrypted at rest in the Kloset store and in transit over HTTPS. This approach gives you full control over both endpoints.
You need two machines for this section. VM1 is the production server with your existing plakar backups. VM2 is the remote backup server in a different datacenter or provider. VM2 needs a domain name pointing to its IP address for TLS certificates. The examples use backup.example.com as the domain.
All commands in this guide run as root. If you use a sudo user, prepend sudo to each command.
How do you set up plakar server on VM2?
Install plakar on VM2 using the same steps from the pillar article:
apt update
apt install -y curl gnupg2
curl -fsSL https://plakar.io/dist/keys/community-v1.1.0.gpg | gpg --dearmor -o /usr/share/keyrings/plakar.gpg
echo "deb [signed-by=/usr/share/keyrings/plakar.gpg] https://plakar.io/dist/repos/deb/ stable main" | tee /etc/apt/sources.list.d/plakar.list
apt update
apt install -y plakar
plakar version
plakar/v1.0.6
Create a Kloset store on VM2. Generate a strong passphrase and save it in a password manager. This is a different store from VM1, so it gets its own passphrase:
mkdir -p /var/backups/plakar
openssl rand -base64 32
Save the output. You will need this passphrase on both VM2 and VM1. Create the store:
plakar at /var/backups/plakar create
repository passphrase:
Enter the passphrase you just generated. Set restricted permissions on the store directory:
chmod 700 /var/backups/plakar
Store the passphrase in a file so plakar server can start without interactive prompts:
mkdir -p /etc/plakar
cat > /etc/plakar/passphrase <<'EOF'
YOUR_GENERATED_PASSPHRASE_HERE
EOF
chmod 600 /etc/plakar/passphrase
chown root:root /etc/plakar/passphrase
ls -la /etc/plakar/passphrase
-rw------- 1 root root 45 Mar 20 14:00 /etc/plakar/passphrase
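The 45-byte file size in that listing is expected: 32 random bytes always encode to 44 base64 characters, plus the trailing newline. A quick sanity check:

```shell
# 32 random bytes -> 44 base64 characters + 1 newline = 45 bytes
openssl rand -base64 32 | wc -c
# prints: 45
```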
Register the store by name:
plakar store add backups \
    location=/var/backups/plakar \
    passphrase_cmd="cat /etc/plakar/passphrase"
Start plakar server to expose the store over HTTP. Bind it to localhost only. Caddy will handle external connections:
plakar at @backups server -listen 127.0.0.1:9876
The server runs in the foreground and logs requests to stdout. Leave this terminal open. You will set up a systemd service for it later.
By default, plakar server disables delete operations. This prevents remote clients from removing snapshots. For sync to push data, write access is sufficient. Keep deletes disabled unless you have a specific reason to allow them.
How do you put Caddy in front of plakar server for TLS?
Caddy provides automatic HTTPS with Let's Encrypt certificates. It terminates TLS and proxies requests to plakar server on localhost. External clients connect on port 443, and Caddy forwards to port 9876.
Before installing Caddy, make sure your DNS A record for backup.example.com points to VM2's IP address. Caddy needs this to obtain a TLS certificate via the ACME HTTP-01 challenge. Port 80 must also be open for the challenge.
Install Caddy on VM2:
apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-stable.list
apt update
apt install -y caddy
Replace the default Caddyfile with a reverse proxy configuration. Replace backup.example.com with your actual domain:
cat > /etc/caddy/Caddyfile <<'EOF'
backup.example.com {
    reverse_proxy 127.0.0.1:9876
}
EOF
Reload Caddy to apply the configuration:
systemctl reload caddy
systemctl status caddy
● caddy.service - Caddy
Loaded: loaded (/lib/systemd/system/caddy.service; enabled; preset: enabled)
Active: active (running)
Caddy obtains the TLS certificate automatically on the first request. Test from VM1 or any external machine:
curl -I https://backup.example.com
A 200 or 404 response means Caddy is proxying to the plakar server successfully. A TLS error means the certificate is not ready yet.
How do you secure the plakar server with firewall rules?
Open ports 80 (ACME challenge) and 443 (HTTPS). Port 9876 stays bound to 127.0.0.1, so it is not reachable from outside even without an explicit deny rule. For a detailed firewall setup, see How to Set Up a Linux VPS Firewall with UFW and nftables.
With UFW:
ufw allow 80/tcp
ufw allow 443/tcp
ufw status
With nftables, add to your ruleset:
nft add rule inet filter input tcp dport 80 accept
nft add rule inet filter input tcp dport 443 accept
After the certificate is issued, you can close port 80 if you configure Caddy to use the TLS-ALPN-01 challenge instead. The default HTTP-01 challenge requires port 80 to remain open for renewals.
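If you do want to close port 80, a Caddyfile along these lines should disable the HTTP-01 challenge so Caddy uses TLS-ALPN-01 over port 443 only. The `disable_http_challenge` option belongs to Caddy's `acme` issuer configuration; verify it against your Caddy version before actually closing port 80:

```
backup.example.com {
    tls {
        issuer acme {
            disable_http_challenge
        }
    }
    reverse_proxy 127.0.0.1:9876
}
```

Reload Caddy after changing the Caddyfile, and confirm certificate renewal still works before relying on it.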
How do you run plakar server as a systemd service on VM2?
Create a systemd unit so plakar server starts on boot and restarts on failure:
cat > /etc/systemd/system/plakar-server.service <<EOF
[Unit]
Description=Plakar Backup Server
After=network.target
[Service]
Type=simple
ExecStart=/usr/bin/plakar at @backups server -listen 127.0.0.1:9876
Restart=on-failure
RestartSec=10
User=root
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
Stop the foreground server if it is still running (Ctrl+C in that terminal), then enable the systemd service. enable --now makes it survive reboots and starts it immediately:
systemctl daemon-reload
systemctl enable --now plakar-server.service
systemctl status plakar-server.service
● plakar-server.service - Plakar Backup Server
Loaded: loaded (/etc/systemd/system/plakar-server.service; enabled; preset: enabled)
Active: active (running)
Monitor the server logs:
journalctl -u plakar-server.service -f
How do you install the HTTP integration on VM1?
Back on VM1, install the HTTP integration so plakar can connect to remote stores. The HTTP backend is a plugin that ships separately from the core package. Build it from source using the recipe published on the plakar plugin server. This requires Go and Make:
apt install -y golang-go make
Download the recipe file and build the plugin:
curl -fsSL https://plugins.plakar.io/kloset/recipe/v1.0.0/http.yaml -o /tmp/http.yaml
plakar pkg build /tmp/http.yaml
The build compiles the HTTP integration from its source repository and produces a .ptar archive in the current directory. Install the built plugin:
plakar pkg add ./http_v1.0.5_linux_amd64.ptar
The exact filename depends on the version and architecture. Check what pkg build produced with ls *.ptar. List installed plugins to confirm:
plakar pkg list
http@v1.0.5
How do you sync snapshots from VM1 to the remote server?
Register the remote server as a named store. Replace backup.example.com with your domain. The passphrase_cmd must return the passphrase of VM2's store, not VM1's. Copy VM2's passphrase to a separate file on VM1:
cat > /etc/plakar/remote-passphrase <<'EOF'
VM2_STORE_PASSPHRASE_HERE
EOF
chmod 600 /etc/plakar/remote-passphrase
chown root:root /etc/plakar/remote-passphrase
Add the remote store:
plakar store add remote \
    https://backup.example.com \
    passphrase_cmd="cat /etc/plakar/remote-passphrase"
Push all snapshots from your local store to the remote:
plakar at @mybackups sync to @remote
info: sync: synchronization from fs:///var/backups/plakar to https://backup.example.com completed: 3 snapshots synchronized
Plakar reads from @mybackups using its passphrase and writes to @remote using VM2's passphrase. Only missing snapshots and data blocks are transferred. Subsequent syncs skip data that already exists on the remote.
Push a single snapshot by ID:
plakar at @mybackups sync a5bcf13b to @remote
Sync only recent snapshots from the last 7 days:
plakar at @mybackups sync -since 7d to @remote
How do you verify replication on the remote server?
On VM2, list the snapshots to confirm they arrived:
plakar at @backups ls
2026-03-20T10:05:12Z a5bcf13b 1.4 MiB 0s /etc
2026-03-20T10:06:01Z 5fc17459 0 B 0s /home
2026-03-20T10:06:15Z 7ed22fb8 24 B 0s /var/www
The snapshot IDs and timestamps match what exists on VM1. Restore a single file from a synced snapshot to check the data is intact:
plakar at @backups cat a5bcf13b:/etc/hostname
The hostname from VM1 prints to stdout. For a full restore test:
mkdir -p /tmp/restore-test
plakar at @backups restore -to /tmp/restore-test a5bcf13b
info: a5bcf13b: OK /etc
info: restore: restoration of a5bcf13b:/etc at /tmp/restore-test completed successfully
ls /tmp/restore-test/etc/
rm -rf /tmp/restore-test
How do you sync plakar backups to S3-compatible storage?
Add an S3-compatible bucket as a named store, then push snapshots with plakar sync. This works with any provider that speaks the S3 protocol. You bring your own endpoint, bucket, and credentials. No provider-specific setup is required on the plakar side.
How do you install the S3 integration?
The S3 backend is a plugin, same as HTTP. Build it from source:
curl -fsSL https://plugins.plakar.io/kloset/recipe/v1.0.0/s3.yaml -o /tmp/s3.yaml
plakar pkg build /tmp/s3.yaml
plakar pkg add ./s3_v1.0.7_linux_amd64.ptar
If Go and Make are not installed yet (they are if you already did the HTTP plugin), install them first: apt install -y golang-go make.
Both plugins should now be installed:
plakar pkg list
http@v1.0.5
s3@v1.0.7
How do you add an S3 store?
Create a bucket at your S3 provider first. The bucket must already exist. Plakar does not create buckets.
Add the S3 store with plakar store add. Replace the endpoint, bucket name, and credentials with your own:
plakar store add s3 \
    s3://s3.eu-west-1.example.com/my-plakar-backups \
    access_key=YOUR_ACCESS_KEY \
    secret_access_key=YOUR_SECRET_KEY \
    use_tls=true \
    passphrase_cmd="cat /etc/plakar/passphrase"
The s3:// location format is s3://endpoint/bucket-name. Set use_tls=true for HTTPS connections to the S3 endpoint. The passphrase_cmd returns the passphrase for encrypting data in this new Kloset store.
Initialize a Kloset store inside the bucket:
plakar at @s3 create
Plakar prompts for a passphrase. Enter the same passphrase that passphrase_cmd returns. This creates the Kloset structure (metadata, indices) in the bucket. The bucket itself stores the encrypted data objects.
How do you push snapshots to S3?
Push all snapshots from your local store to S3:
plakar at @mybackups sync to @s3
info: sync: synchronization from fs:///var/backups/plakar to s3://s3.eu-west-1.example.com/my-plakar-backups completed: 3 snapshots synchronized
Push a single snapshot:
plakar at @mybackups sync a5bcf13b to @s3
List snapshots on the S3 store to confirm:
plakar at @s3 ls
2026-03-20T10:05:12Z a5bcf13b 1.4 MiB 0s /etc
2026-03-20T10:06:01Z 5fc17459 0 B 0s /home
2026-03-20T10:06:15Z 7ed22fb8 24 B 0s /var/www
Restore a file from the S3 store:
plakar at @s3 cat a5bcf13b:/etc/hostname
For a full directory restore from S3:
mkdir -p /tmp/restore-test
plakar at @s3 restore -to /tmp/restore-test a5bcf13b:/etc/ssh
ls /tmp/restore-test/etc/ssh/
rm -rf /tmp/restore-test
What S3 permissions does plakar need?
Your S3 credentials need these minimum permissions on the bucket:
- s3:GetObject and s3:ListBucket for reading snapshots and metadata
- s3:PutObject for writing new snapshots
- s3:DeleteObject for lock cleanup during sync operations
Create a dedicated IAM user or service account with a policy scoped to the backup bucket only. Do not reuse credentials that have broader access.
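As an illustration, an AWS-style policy scoped to the example bucket could look like the following. The bucket name and ARN format are assumptions from the examples above; other S3-compatible providers use their own policy syntax:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBackupBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-plakar-backups"
    },
    {
      "Sid": "ReadWriteBackupObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-plakar-backups/*"
    }
  ]
}
```

Note that s3:ListBucket applies to the bucket ARN itself, while the object-level actions apply to `bucket/*`.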
If your provider supports bucket versioning and object lock, enable both. Versioning protects against accidental overwrites. Object lock (in compliance or governance mode) prevents deletion of backup data for a configurable retention period. This is your last line of defense against ransomware that compromises the source server and its S3 credentials.
How do you automate plakar backups and sync?
Set up the plakar scheduler for local backups, then chain offsite sync with a systemd timer. The goal: backups run on a schedule, then snapshots sync to the remote destination automatically. No manual intervention after the initial setup.
How do you configure stores for unattended operation?
If you followed Back Up Your Linux VPS with Plakar, you already have a named store mybackups with a passphrase_cmd. You also need the offsite store (remote or s3) configured with passphrase_cmd as shown in the previous sections. Both stores must work without interactive prompts for automation to function.
Test that both stores are accessible:
plakar at @mybackups ls
plakar at @remote ls
If either command prompts for a passphrase, the passphrase_cmd is not configured correctly. Go back and fix it before continuing.
How do you write a scheduler.yaml for plakar?
The plakar scheduler runs backup tasks at defined intervals. It handles filesystem backups natively. Create a scheduler configuration:
cat > /etc/plakar/scheduler.yaml <<'EOF'
agent:
  tasks:
    - name: backup etc
      repository: "@mybackups"
      backup:
        path: /etc
        interval: 24h
        check: true
    - name: backup home
      repository: "@mybackups"
      backup:
        path: /home
        interval: 24h
    - name: backup www
      repository: "@mybackups"
      backup:
        path: /var/www
        interval: 24h
EOF
chmod 600 /etc/plakar/scheduler.yaml
Each task specifies a repository, a path, and an interval. The check: true option on the first task runs an integrity verification after each backup. Add it to all tasks if you prefer safety over speed.
The scheduler does not natively support sync as a task type. Use a wrapper script and a systemd timer for the sync step.
How do you sync automatically after backups?
Create a wrapper script that syncs all snapshots to your offsite destination:
cat > /etc/plakar/sync-offsite.sh <<'SCRIPT'
#!/bin/bash
set -euo pipefail
# Change to "s3" if using S3 instead of a remote server
OFFSITE_STORE="remote"
echo "$(date -Iseconds) Starting offsite sync to @${OFFSITE_STORE}"
plakar at @mybackups sync to @${OFFSITE_STORE}
echo "$(date -Iseconds) Offsite sync complete"
SCRIPT
chmod 700 /etc/plakar/sync-offsite.sh
Test the script manually:
/etc/plakar/sync-offsite.sh
2026-03-20T14:00:00+00:00 Starting offsite sync to @remote
info: sync: synchronization from fs:///var/backups/plakar to https://backup.example.com completed: 0 snapshots synchronized
2026-03-20T14:00:01+00:00 Offsite sync complete
If you use both a remote server and S3, add a second sync line to the script:
plakar at @mybackups sync to @remote
plakar at @mybackups sync to @s3
How do you run the plakar scheduler as a systemd service?
Create a systemd unit for the scheduler on VM1. The -foreground flag keeps the scheduler in the foreground so systemd can track the process:
cat > /etc/systemd/system/plakar-scheduler.service <<EOF
[Unit]
Description=Plakar Backup Scheduler
After=network.target
[Service]
Type=simple
ExecStart=/usr/bin/plakar scheduler start -foreground -tasks /etc/plakar/scheduler.yaml
ExecStop=/usr/bin/plakar scheduler stop
Restart=on-failure
RestartSec=30
User=root
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
Create a systemd timer for the offsite sync script. Schedule it to run daily after backups have time to complete. If your scheduler tasks use a 24h interval starting at midnight, set the sync to run a few hours later:
cat > /etc/systemd/system/plakar-sync.service <<EOF
[Unit]
Description=Plakar Offsite Sync
After=network.target
[Service]
Type=oneshot
ExecStart=/etc/plakar/sync-offsite.sh
User=root
StandardOutput=journal
StandardError=journal
EOF
cat > /etc/systemd/system/plakar-sync.timer <<EOF
[Unit]
Description=Run Plakar Offsite Sync Daily
[Timer]
OnCalendar=*-*-* 03:30:00
Persistent=true
[Install]
WantedBy=timers.target
EOF
Persistent=true means if the server was off when the timer should have fired, systemd runs the sync immediately on the next boot.
Enable everything:
systemctl daemon-reload
systemctl enable --now plakar-scheduler.service
systemctl enable --now plakar-sync.timer
info: Plakar scheduler up
The scheduler is running:
systemctl status plakar-scheduler.service
● plakar-scheduler.service - Plakar Backup Scheduler
Loaded: loaded (/etc/systemd/system/plakar-scheduler.service; enabled; preset: enabled)
Active: active (running)
The sync timer is scheduled:
systemctl list-timers plakar-sync.timer
NEXT LEFT LAST PASSED UNIT ACTIVATES
Fri 2026-03-21 03:30:00 UTC 15h left n/a n/a plakar-sync.timer plakar-sync.service
Monitor scheduler and sync logs:
journalctl -u plakar-scheduler.service --since today
journalctl -u plakar-sync.service --since today
If you also run the database backup script from Back Up Your Linux VPS with Plakar, chain the sync timer to run after database backups complete. Add After=plakar-db-backup.service to the [Unit] section of plakar-sync.service, and adjust OnCalendar in the timer to run after the database timer fires.
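Rather than editing the unit file in place, a systemd drop-in keeps the ordering change separate. Assuming the database backup unit is named plakar-db-backup.service as in the pillar article (adjust if yours differs), a drop-in like this should work:

```ini
# /etc/systemd/system/plakar-sync.service.d/after-db.conf
# Create interactively with: systemctl edit plakar-sync.service
[Unit]
After=plakar-db-backup.service
```

Run systemctl daemon-reload afterward (systemctl edit does this for you) so the drop-in takes effect.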
Something went wrong?
"backend 'http' does not exist" or "backend 's3' does not exist". The integration plugin is not installed. Build it from source using plakar pkg build with the recipe from the plugin server as shown in the HTTP or S3 installation sections above. Run plakar pkg list to see what is installed.
"connection refused" when syncing to remote. Check that plakar server is running on VM2: systemctl status plakar-server.service. Check that Caddy is running: systemctl status caddy. Confirm your DNS A record points to VM2's IP. Test from VM1:
curl -I https://backup.example.com
A connection refused error at this stage means either Caddy is down, DNS is wrong, or port 443 is blocked by a firewall.
"TLS handshake error" from Caddy. Caddy could not get a certificate from Let's Encrypt. Common causes: DNS not propagated yet (wait a few minutes and retry), port 80 blocked (Caddy needs it for the ACME HTTP-01 challenge), or rate limits hit. Check Caddy logs:
journalctl -u caddy --since "10 minutes ago"
"passphrase" prompt during sync. The passphrase_cmd for one of the stores is not configured or returning an empty string. Check both passphrase files:
cat /etc/plakar/passphrase
cat /etc/plakar/remote-passphrase
Both files must contain the correct passphrase for their respective stores. Check file permissions are 600 and owned by root.
"access denied" on S3 sync. Double-check your access key and secret key. Confirm the IAM policy grants s3:GetObject, s3:PutObject, s3:ListBucket, and s3:DeleteObject on the bucket. Some providers require the bucket region in the endpoint URL. Double-check the s3:// URL format.
Scheduler exits immediately. Without the -foreground flag, plakar scheduler start daemonizes and the systemd unit sees it as exited. Make sure your ExecStart line includes -foreground:
ExecStart=/usr/bin/plakar scheduler start -foreground -tasks /etc/plakar/scheduler.yaml
Reload and restart: systemctl daemon-reload && systemctl restart plakar-scheduler.service.
Sync takes a long time on the first run. The initial sync transfers all existing snapshots and their data blocks. Subsequent syncs are incremental and only push new data. If you have many large snapshots, the first sync can take hours depending on upload bandwidth. Run it in a tmux or screen session to avoid interruption.
Snapshots missing on the remote side. Compare snapshot lists:
plakar at @mybackups ls
plakar at @remote ls
If counts differ, run sync again. Sync is idempotent. Existing snapshots are skipped. If a sync was interrupted, re-running it resumes from where it left off.
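To see exactly which snapshot IDs are missing on the remote, comm(1) can diff the two listings. The sketch below uses fixed sample IDs so the logic is easy to follow; in practice you would feed it the ID column from each `plakar ... ls` (the column position is an assumption about the output format):

```shell
#!/bin/sh
# Sample snapshot IDs standing in for:
#   plakar at @mybackups ls | awk '{print $2}' | sort
#   plakar at @remote ls | awk '{print $2}' | sort
printf '%s\n' 5fc17459 7ed22fb8 a5bcf13b | sort > /tmp/local-ids.txt
printf '%s\n' 5fc17459 a5bcf13b | sort > /tmp/remote-ids.txt

# comm -23 prints lines unique to the first file:
# snapshots present locally but not yet on the remote.
comm -23 /tmp/local-ids.txt /tmp/remote-ids.txt
# prints: 7ed22fb8
```

Any IDs this prints are candidates for a targeted `plakar ... sync <id> to @remote`.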
Scheduler not running after reboot. Check the service is enabled:
systemctl is-enabled plakar-scheduler.service
The output should say enabled. If it says disabled, run systemctl enable plakar-scheduler.service.
Sync timer not firing. Check if the timer is active:
systemctl is-active plakar-sync.timer
If it says inactive, enable it: systemctl enable --now plakar-sync.timer. Check systemctl list-timers to see when the next run is scheduled.
For Docker workloads, consider Docker Volume Backup and Restore on a VPS alongside this setup to back up Docker volumes before plakar snapshots them.
Copyright 2026 Virtua.Cloud. All rights reserved. This content is original work by the Virtua.Cloud team. Reproduction, republication, or redistribution without written permission is prohibited.