Self-Host Uptime Kuma and Beszel on a VPS with Docker Compose
Deploy Uptime Kuma for external uptime monitoring and Beszel for lightweight server metrics on a single VPS. Docker Compose setup with notifications, alerts, status pages, and security hardening.
Your app is deployed. Users are signing up. But you have no idea if it goes down at 3 AM, or if the disk fills up while you sleep.
This guide deploys two monitoring tools on a single VPS using Docker Compose:
- Uptime Kuma (84k+ GitHub stars) monitors external availability: HTTP endpoints, TCP ports, DNS records, SSL certificates.
- Beszel (~20k stars) monitors internal server health: CPU, RAM, disk, network, and per-container Docker stats.
Together they use about 150-180 MB of RAM. A Prometheus + Grafana + node_exporter stack doing the same job needs 800+ MB.
Prerequisites: a VPS running Debian 12 or Ubuntu 24.04 with Docker and Docker Compose installed, a reverse proxy (Caddy or Nginx) handling TLS, and a domain name with DNS pointed at your server. This guide uses Caddy for the reverse proxy examples. See also: Docker in Production on a VPS: What Breaks and How to Fix It.
What is the difference between Uptime Kuma and Beszel?
Uptime Kuma monitors external availability. It tells you whether your websites, APIs, and services are reachable from outside your server. Beszel monitors internal server health: CPU usage, RAM, disk space, network bandwidth, and per-container Docker stats. A web server can report low CPU and plenty of free memory while being completely unreachable due to a misconfigured firewall or an expired TLS certificate. You need both tools.
| Feature | Uptime Kuma | Beszel |
|---|---|---|
| What it monitors | HTTP, TCP, DNS, ping, SSL expiry, push/heartbeat | CPU, RAM, disk, network, temperature, Docker containers |
| Architecture | Single container, web UI | Hub + agent (one agent per monitored server) |
| Database | SQLite (default) or MariaDB | PocketBase (embedded SQLite) |
| Notification channels | 90+ (email, Telegram, Discord, Slack, webhooks, etc.) | Email (SMTP via PocketBase) |
| Status pages | Yes, public-facing with custom domain | No |
| RAM usage | ~80-120 MB | Hub: ~10-50 MB, Agent: ~25 MB |
| GitHub stars | 84k+ | ~20k |
Neither tool replaces the other. Uptime Kuma catches external failures. Beszel catches resource exhaustion before it causes external failures.
How much RAM does the monitoring stack use?
Uptime Kuma v2.x uses approximately 80-120 MB of RAM depending on monitor count. Beszel hub adds 10-50 MB and each agent uses about 25 MB. The combined stack runs comfortably on a 1 GB VPS, using roughly 150-180 MB total. For comparison, Prometheus + Grafana + node_exporter together need 800+ MB just at idle.
| Stack | RAM at idle | Setup time | Best for |
|---|---|---|---|
| Uptime Kuma + Beszel | ~150-180 MB | 30 minutes | Small to medium self-hosted setups |
| Prometheus + Grafana + node_exporter | ~800 MB+ | 2-4 hours | Large-scale infrastructure with custom queries |
| Netdata | ~300-400 MB | 15 minutes | Real-time metrics, single server |
How do I install Uptime Kuma with Docker Compose?
Uptime Kuma runs as a single container serving its web UI on port 3001. The Compose file below pins the image to the `2` major-version tag, binds the port to localhost only, sets resource limits, and adds a health check.
Create the project directory:
```shell
mkdir -p /opt/uptime-kuma && cd /opt/uptime-kuma
```
Create the Compose file:
```yaml
# /opt/uptime-kuma/compose.yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:2
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "127.0.0.1:3001:3001"
    volumes:
      - ./data:/app/data
    environment:
      - TZ=Europe/Berlin
    deploy:
      resources:
        limits:
          memory: 256m
          cpus: "0.5"
    healthcheck:
      test: ["CMD", "node", "/app/extra/healthcheck.js"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s
```
The `127.0.0.1:3001:3001` binding ensures the container only listens on localhost. Without the `127.0.0.1` prefix, Docker publishes the port on all interfaces, bypassing your firewall. See also: Fix Docker Bypassing UFW: 4 Tested Solutions for Your VPS.
Start the container:
```shell
docker compose up -d
```

```
[+] Running 1/1
 ✔ Container uptime-kuma  Started
```

```shell
docker compose ps
```

```
NAME          IMAGE                    COMMAND                  SERVICE       CREATED          STATUS                   PORTS
uptime-kuma   louislam/uptime-kuma:2   "/usr/bin/dumb-init …"   uptime-kuma   10 seconds ago   Up 9 seconds (healthy)   127.0.0.1:3001->3001/tcp
```
The (healthy) status means the built-in health check passed. If you see (starting), wait 15 seconds for the start_period to complete.
Reverse proxy with Caddy
Add an entry to your Caddyfile:
```
# /etc/caddy/Caddyfile (append)
status.example.com {
    reverse_proxy localhost:3001
}
```
Reload Caddy:
```shell
systemctl reload caddy
```
Caddy automatically obtains a TLS certificate from Let's Encrypt. Open https://status.example.com in your browser. Uptime Kuma prompts you to create an admin account on first access.
Enable 2FA immediately
After creating your admin account, go to Settings > Security > Two-Factor Authentication and enable it. Uptime Kuma's dashboard gives read access to your entire infrastructure topology. Anyone who compromises the login sees every monitored endpoint. Set up 2FA before adding any monitors.
How do I set up Beszel to monitor my VPS?
Beszel uses a hub-agent architecture. The hub is the web dashboard that stores data and displays metrics. The agent runs on each server you want to monitor and streams metrics back to the hub. When both run on the same VPS, they communicate over a Unix socket instead of the network.
Create the project directory:
```shell
mkdir -p /opt/beszel && cd /opt/beszel
```
Create the Compose file:
```yaml
# /opt/beszel/compose.yaml
services:
  beszel-hub:
    image: henrygd/beszel:0.18
    container_name: beszel-hub
    restart: unless-stopped
    ports:
      - "127.0.0.1:8090:8090"
    environment:
      - APP_URL=https://beszel.example.com
    volumes:
      - ./beszel_data:/beszel_data
      - ./beszel_socket:/beszel_socket
    deploy:
      resources:
        limits:
          memory: 128m
          cpus: "0.25"

  beszel-agent:
    image: henrygd/beszel-agent:0.18
    container_name: beszel-agent
    restart: unless-stopped
    network_mode: host
    volumes:
      - ./beszel_agent_data:/var/lib/beszel-agent
      - ./beszel_socket:/beszel_socket
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - LISTEN=/beszel_socket/beszel.sock
      - KEY=${BESZEL_KEY}
    deploy:
      resources:
        limits:
          memory: 64m
          cpus: "0.15"
```
About this configuration:
- The hub image is pinned to `0.18`, which includes the CVE-2026-27734 fix (v0.18.4). Pinning to a minor version prevents unexpected breaking changes while still receiving patch updates.
- The agent uses `network_mode: host` so it can read the host's network interface stats. This is required for accurate bandwidth monitoring.
- The agent and hub share a `beszel_socket` volume for Unix socket communication. This avoids exposing port 45876 on the network when both run on the same server.
- The Docker socket is mounted read-only (`:ro`). More on this in the security section below.
Docker socket security
Mounting `/var/run/docker.sock` gives the agent access to the Docker API. Even with `:ro`, this is effectively root-equivalent access because the Docker API can create privileged containers, read environment variables from any container, and access volumes. See also: Docker Security Hardening: Rootless Mode, Seccomp, AppArmor on a VPS.
Beszel needs the socket to collect per-container CPU, memory, and network stats. If you do not need container monitoring, remove the Docker socket mount entirely.
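If you do want container stats but are uneasy about the raw socket, a common mitigation is to put a filtering proxy between the agent and the Docker API. The sketch below uses the community `tecnativa/docker-socket-proxy` image and assumes the agent honors the standard `DOCKER_HOST` variable; treat it as a starting point, not a drop-in config:

```yaml
# Sketch: only read-only container endpoints are exposed to the agent.
services:
  docker-socket-proxy:
    image: tecnativa/docker-socket-proxy
    restart: unless-stopped
    environment:
      - CONTAINERS=1   # allow read access to /containers/* endpoints
      - POST=0         # deny every write operation
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  # The agent would then talk to the proxy over TCP instead of the raw socket:
  #   environment:
  #     - DOCKER_HOST=tcp://docker-socket-proxy:2375
```

This way a compromised agent can list containers and read stats, but cannot create or start anything.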
CVE-2026-27734 (fixed in v0.18.4) demonstrated this risk: an authenticated Beszel user could traverse the Docker API via unsanitized container IDs, reaching arbitrary endpoints like `/version` or `/containers/json`. The fix sanitizes all user input before constructing Docker API URLs. Make sure you run v0.18.4 or later. The `0.18` tag in the Compose file above resolves to v0.18.4 (latest patch as of March 2026).
Generate the agent key
Start the hub first:
```shell
cd /opt/beszel
docker compose up -d beszel-hub
```
Open the hub at https://beszel.example.com (after configuring your reverse proxy, next section). Create an admin account. Go to Add System in the dashboard. The hub displays a public key. Copy it.
Create a .env file for the agent:
```
# /opt/beszel/.env
BESZEL_KEY="ssh-ed25519 AAAA... (paste the key from the hub UI)"
```

Then restrict its permissions:

```shell
chmod 600 /opt/beszel/.env
```
Now start the agent:
```shell
docker compose up -d beszel-agent
```
Back in the hub UI, add the system using the socket path `/beszel_socket/beszel.sock`. Within seconds, CPU, RAM, disk, and Docker container metrics appear.
Reverse proxy for Beszel
Add to your Caddyfile:
```
# /etc/caddy/Caddyfile (append)
beszel.example.com {
    reverse_proxy localhost:8090
}
```

```shell
systemctl reload caddy
```
How do I configure notifications in Uptime Kuma?
Uptime Kuma supports over 90 notification channels. The three most common for self-hosters are email (SMTP), Telegram, and Discord.
SMTP email notifications
Go to Settings > Notifications > Setup Notification. Select Email (SMTP) as the type.
| Field | Value |
|---|---|
| Hostname | Your SMTP server (e.g., smtp.example.com) |
| Port | 587 (STARTTLS) or 465 (implicit TLS) |
| Security | STARTTLS or TLS |
| Username | Your SMTP username |
| Password | Your SMTP password |
| From Email | monitoring@example.com |
| To Email | you@example.com |
Click Test to send a test notification before saving. Uptime Kuma sends the test immediately. If it fails, check your SMTP credentials and firewall rules (port 587 outbound must be open).
Telegram bot notifications
- Message @BotFather on Telegram and create a new bot with `/newbot`.
- Copy the bot token (format: `123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11`).
- Start a chat with your bot and send any message.
- Get your chat ID: open `https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates` in your browser. The `chat.id` field in the response is your chat ID.
- In Uptime Kuma, add a Telegram notification. Paste the bot token and chat ID. Click Test.
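Before wiring this into Uptime Kuma, you can sanity-check the token and chat ID directly against the Bot API. The helper below is a hypothetical convenience wrapper; `sendMessage`, `chat_id`, and `text` are real Bot API names:

```shell
# send_tg TOKEN CHAT_ID TEXT -- send a message via the Telegram Bot API
send_tg() {
  curl -s "https://api.telegram.org/bot$1/sendMessage" \
    -d "chat_id=$2" \
    --data-urlencode "text=$3"
}

# usage: send_tg "123456:ABC-DEF..." "987654321" "test message"
```

A working token/chat ID pair returns a JSON response starting with `{"ok":true`.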
Discord webhook notifications
- In your Discord server, go to Server Settings > Integrations > Webhooks > New Webhook.
- Name it (e.g., "Uptime Kuma"), choose a channel, and copy the webhook URL.
- In Uptime Kuma, add a Discord notification and paste the webhook URL. Click Test.
Set the default notification to apply to all new monitors by checking Default Enabled. This way every monitor you create inherits the notification channel without manual configuration.
What Uptime Kuma monitor types should I set up?
Uptime Kuma supports many monitor types. Here are the four most useful for a self-hosted stack:
- HTTP(s) - Checks a URL, optionally matches a keyword in the response body. Use for web apps, APIs, and dashboards. Set the keyword to a string that only appears when the app is healthy (e.g., your app name in the HTML title).
- TCP - Connects to a host:port. Use for databases (PostgreSQL on 5432), mail servers (SMTP on 587), or any service without an HTTP endpoint.
- DNS - Resolves a hostname and checks the result matches an expected IP. Catches DNS hijacking or misconfigured records.
- Push / Heartbeat - Uptime Kuma generates a URL. Your cron job or backup script calls it on success. If the URL is not called within the interval, Uptime Kuma fires an alert. This is the only way to monitor scripts that have no listening port.
Push monitor example for cron jobs
Create a Push monitor in Uptime Kuma. Set the heartbeat interval to your cron frequency plus a grace period. Copy the push URL.
Add the curl call to the end of your backup script:
```shell
#!/bin/bash
# /opt/scripts/backup.sh
pg_dump mydb | gzip > /backups/mydb-$(date +%F).sql.gz

# Signal success to Uptime Kuma
curl -fsS -o /dev/null "https://status.example.com/api/push/abc123?status=up&msg=backup-ok"
```
If the script fails before reaching the curl line, or if cron does not run, Uptime Kuma marks the monitor as down after the interval expires.
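You can also report failures explicitly instead of waiting for the interval to expire: the push endpoint accepts `status=down` as well as `status=up`. The `notify` helper and the ERR-trap wiring below are a sketch, reusing the push URL from the example above:

```shell
# notify STATUS MSG -- ping the push URL; never fail the surrounding script
PUSH_URL="https://status.example.com/api/push/abc123"
notify() {
  curl -fsS -o /dev/null "${PUSH_URL}?status=$1&msg=$2" || true
}

# In the backup script, pair it with an ERR trap:
#   set -e
#   trap 'notify down backup-failed' ERR
#   pg_dump mydb | gzip > /backups/mydb-$(date +%F).sql.gz
#   notify up backup-ok
```

With the trap in place, a failing `pg_dump` triggers an immediate "down" alert instead of a silent timeout.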
How do I set up alerts in Beszel for CPU and disk usage?
Beszel alerts notify you when server metrics cross a threshold. Click the bell icon next to any system in the dashboard to configure alerts.
Recommended thresholds for a small VPS (2-4 vCPU, 4-8 GB RAM):
| Metric | Warning | Critical | Why |
|---|---|---|---|
| CPU | > 70% for 5 min | > 90% for 2 min | Sustained high CPU means runaway processes or undersized instance |
| RAM | > 80% for 5 min | > 90% for 2 min | Linux starts heavy swapping above 85%, killing performance |
| Disk | > 80% | > 90% | Docker images, logs, and databases grow silently. At 100% services crash |
| Bandwidth | > 80% of plan limit | > 95% | Prevents overage charges or throttling |
These thresholds are intentionally lower than enterprise defaults. On a small VPS you have less headroom. A spike from 70% to 100% CPU takes seconds, not minutes.
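As a belt-and-braces complement to Beszel's alerts, a tiny cron-able disk check costs nothing and keeps working even if the hub is down. This is a generic sketch (not part of Beszel), assuming POSIX `df` output:

```shell
# disk_pct MOUNT -- print the used percentage of a mount point
disk_pct() {
  df -P "$1" | awk 'NR == 2 { gsub(/%/, ""); print $5 }'
}

# alert_if_full MOUNT THRESHOLD -- print a warning when usage crosses the threshold
alert_if_full() {
  pct=$(disk_pct "$1")
  if [ "$pct" -ge "$2" ]; then
    echo "WARNING: $1 at ${pct}% (threshold ${2}%)"
  fi
}

alert_if_full / 80   # cron this and pipe the output to mail or a webhook
```

Because the script only prints when the threshold is crossed, cron's default mail-on-output behavior turns it into a zero-dependency alert channel.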
Configure SMTP for Beszel alerts
Beszel uses PocketBase as its backend. SMTP is configured through the PocketBase admin panel:
- Go to `https://beszel.example.com/_/` (the PocketBase admin URL; note the underscore).
- Log in with the admin credentials you created during setup.
- Go to Settings > Mail settings.
- Enable Use SMTP mail server.
- Enter your SMTP host, port, username, and password.
- Set the sender address.
- Click Save and Send test email.
How do I create a public status page with Uptime Kuma?
Uptime Kuma can serve public status pages showing your services' availability. These are useful for communicating uptime to users without exposing your monitoring dashboard.
- Go to Status Pages in the left sidebar.
- Click New Status Page. Choose a name and slug (e.g., `status`).
- Add groups (e.g., "Web Services", "APIs", "Infrastructure").
- Drag monitors into each group.
- Publish the page. It is accessible at `https://status.example.com/status/<slug>`.
Custom domain for the status page
If you want https://status.example.com to serve the status page directly, set the status page as the default in Uptime Kuma settings. The root path then shows the public page while the dashboard remains at /dashboard.
Status pages do not require authentication. Do not put monitors in a status page group if revealing the endpoint's existence is a security concern.
Incident management
When a service goes down, Uptime Kuma automatically shows it as degraded on the status page. You can also create manual incidents:
- Go to Status Pages, select your page, click Create Incident.
- Write a title and description (e.g., "Database maintenance, estimated 15 minutes").
- Set the style to info, warning, danger, or primary.
- Publish. The incident banner appears at the top of the public status page.
Resolve the incident when done. Uptime Kuma keeps a history of past incidents so your users can see your operational track record.
How do I monitor my monitoring stack from outside?
If your VPS goes down, both Uptime Kuma and Beszel go down with it. You learn about the outage the same time your users do. The fix: an external watchdog that monitors your Uptime Kuma instance from a different location.
Option 1: UptimeRobot (free tier)
- Create a free account at UptimeRobot.
- Add a new monitor: type HTTP(s), URL `https://status.example.com/api/status-page/heartbeat/<slug>`.
- Set the check interval to 5 minutes.
- Configure email or Telegram notifications.

The `/api/status-page/heartbeat/<slug>` endpoint returns a JSON payload with the status. UptimeRobot checks it and alerts you if your Uptime Kuma instance becomes unreachable.
Option 2: Healthchecks.io (free tier)
Healthchecks.io works with the push model. Create a check, copy the ping URL, and add a cron job on your VPS:
```
# /etc/cron.d/monitoring-heartbeat
*/5 * * * * root curl -fsS --retry 3 -o /dev/null https://hc-ping.com/your-uuid-here
```
If the cron ping stops arriving (because your server is down), Healthchecks.io sends you an alert. This covers the scenario where your entire VPS becomes unreachable.
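Healthchecks.io also accepts an explicit failure signal by appending `/fail` to the ping URL. A wrapper along these lines (a sketch, using the same placeholder UUID) reports both outcomes of any job you run through it:

```shell
# run_monitored CMD [ARGS...] -- ping success or failure depending on exit status
HC_URL="https://hc-ping.com/your-uuid-here"
run_monitored() {
  if "$@"; then
    curl -fsS --retry 3 -o /dev/null "$HC_URL"
  else
    curl -fsS --retry 3 -o /dev/null "$HC_URL/fail"
  fi
}

# usage: run_monitored /opt/scripts/backup.sh
```

This distinguishes "job ran and failed" (instant alert) from "job never ran" (alert after the grace period).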
Option 3: monitor from a second VPS
If you run multiple servers, install Uptime Kuma on a different VPS and have each instance monitor the other. This is the most reliable approach because you control both endpoints and there is no dependency on a third-party free tier.
Security hardening
Firewall rules
If you run the Beszel agent in standalone mode on a remote server (not using the Unix socket method), it listens on port 45876. Only the hub needs to reach this port:
```shell
ufw allow from <hub-ip-address> to any port 45876 proto tcp comment "Beszel agent"
ufw status numbered
```

```
Status: active

     To             Action      From
     --             ------      ----
[ 1] 22/tcp         ALLOW IN    Anywhere
[ 2] 80/tcp         ALLOW IN    Anywhere
[ 3] 443/tcp        ALLOW IN    Anywhere
[ 4] 45876/tcp      ALLOW IN    <hub-ip-address>
```
Do not open port 45876 to the world. The agent exposes system metrics without authentication on that port. It relies on the hub's SSH key for verification, but network-level restriction adds defense in depth.
For the single-VPS setup in this guide, port 45876 is not needed at all because the hub and agent communicate over a Unix socket.
Uptime Kuma: disable API access if unused
If you only access Uptime Kuma through its web UI, do not create API keys, and revoke any existing ones under Settings > Security > API Key. Fewer exposed endpoints, fewer things to patch.
Version hiding
Uptime Kuma and Beszel both expose version information in their web UIs by default. Your reverse proxy should not add to this: Caddy omits version details from its Server header out of the box. If you use Nginx instead, disable the version banner:

```nginx
server_tokens off;
```
Version disclosure helps attackers target known vulnerabilities. Keep it minimal.
How do I back up Uptime Kuma and Beszel data?
Both tools use SQLite-based databases. SQLite files cannot be safely copied while the application writes to them. Use the proper backup methods.
Uptime Kuma backup
Uptime Kuma stores everything in `/app/data` (mapped to `./data` in the Compose file). The built-in backup exports a JSON file:
- Go to Settings > Backup.
- Click Export. Save the JSON file off-server.
For automated backups, stop the container briefly or use SQLite's online backup:
```shell
sqlite3 /opt/uptime-kuma/data/kuma.db ".backup '/opt/backups/kuma-$(date +%F).db'"
```
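To automate this, a small wrapper can pair the online backup with simple retention. The function below is a sketch using the paths from this guide; the 14-day retention window is an arbitrary choice:

```shell
# backup_kuma DB_PATH BACKUP_DIR -- SQLite online backup plus 14-day retention
backup_kuma() {
  mkdir -p "$2"
  sqlite3 "$1" ".backup '$2/kuma-$(date +%F).db'"
  find "$2" -name 'kuma-*.db' -type f -mtime +14 -delete
}

# usage (e.g. from a nightly cron script):
#   backup_kuma /opt/uptime-kuma/data/kuma.db /opt/backups
```

Pair it with an rsync or rclone step so the backups actually leave the server.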
Beszel backup
Beszel uses PocketBase. Back up the data directory:
```shell
sqlite3 /opt/beszel/beszel_data/data.db ".backup '/opt/backups/beszel-$(date +%F).db'"
```
Store backups off-server. A monitoring stack that loses its history when the disk dies is not monitoring anything. See also: Docker Volume Backup and Restore on a VPS.
How do I update Uptime Kuma and Beszel safely?
Pin to the minor version, not `latest`. This prevents breaking changes from landing without your knowledge.
```shell
# Update Uptime Kuma
cd /opt/uptime-kuma
docker compose pull
docker compose up -d
```

```
[+] Pulling 1/1
 ✔ uptime-kuma Pulled
[+] Running 1/1
 ✔ Container uptime-kuma Started
```

```shell
docker compose ps
```
Check the STATUS column shows (healthy). If the new version causes issues, pin the previous version in compose.yaml and recreate:
```shell
# In compose.yaml, change the image tag to the previous version:
#   image: louislam/uptime-kuma:2.2.1
docker compose up -d
```
The same process applies to Beszel. Always back up before updating.
Image pinning strategy
The `louislam/uptime-kuma:2` tag tracks the latest 2.x release. This is convenient, but it means `docker compose pull` can jump from 2.2.1 to 2.3.0 without warning. For production, pin to a specific minor version:

```yaml
image: louislam/uptime-kuma:2.2
```
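For fully reproducible deploys you can go one step further and pin by digest. The digest below is a placeholder; substitute the value that `docker image inspect --format '{{index .RepoDigests 0}}' louislam/uptime-kuma:2` prints for the image you have verified:

```yaml
# Digest pinning: pulls always resolve to exactly this image
image: louislam/uptime-kuma:2.2@sha256:<digest>
```

The trade-off: digest pins never receive patch updates, so you must bump them deliberately on each release.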
Check release notes before pulling. Uptime Kuma publishes releases on GitHub. Beszel does the same at their releases page.
Subscribe to both repositories' release notifications (Watch > Custom > Releases on GitHub) so you know when security patches drop.
See also: Docker Compose Resource Limits, Healthchecks, and Restart Policies.
Troubleshooting
Uptime Kuma shows (unhealthy) in `docker compose ps`:

```shell
docker compose logs uptime-kuma --tail 50
```

Common causes: corrupted SQLite database (restore from backup), port conflict (another service on 3001), or insufficient memory (increase the resource limit).
Beszel agent not connecting to hub:

```shell
docker compose logs beszel-agent --tail 50
```

Check that the `KEY` in `.env` matches the key shown in the hub's Add System dialog. If using Unix sockets, verify the shared volume mount path matches on both services.
Beszel shows no Docker container stats:
The Docker socket mount is missing or the Docker socket path is wrong. Check:
```shell
ls -la /var/run/docker.sock
```

```
srw-rw---- 1 root docker 0 Mar 20 10:00 /var/run/docker.sock
```

The socket must exist and the container must have read access. The `:ro` mount in the Compose file handles this.
Notifications not arriving:
For SMTP: check that port 587 (or 465) outbound is not blocked by your hosting provider. Some providers block outbound SMTP by default. Test with:
```shell
nc -zv smtp.example.com 587
```

```
Connection to smtp.example.com 587 port [tcp/submission] succeeded!
```
For Telegram: verify the bot token and chat ID. The chat ID must be a number, not the bot username.
High memory usage:
Monitor count matters. Uptime Kuma v2.x uses about 100 MB at idle. Each HTTP monitor adds connection state. If you exceed 100 monitors on a 256 MB memory limit, increase the limit or split across instances.
Check actual usage:
```shell
docker stats --no-stream uptime-kuma beszel-hub beszel-agent
```

```
CONTAINER ID   NAME           CPU %   MEM USAGE / LIMIT   MEM %    NET I/O       BLOCK I/O    PIDS
abc123         uptime-kuma    0.15%   99MiB / 256MiB      38.67%   1.2kB / 2kB   0B / 0B      8
def456         beszel-hub     0.08%   10MiB / 128MiB      7.81%    1kB / 1.5kB   0B / 745kB   8
ghi789         beszel-agent   0.05%   22MiB / 64MiB       34.38%   0B / 0B       0B / 0B      5
```
Ready to try it yourself?
Deploy your monitoring stack on a Virtua Cloud VPS. →