WireGuard and Tailscale VPN on a Linux VPS
Set up WireGuard from scratch or deploy Tailscale for managed VPN access on Ubuntu 24.04 and Debian 12, with DNS leak prevention, PreSharedKey hardening, and a neutral comparison including Headscale.
This guide walks through two approaches to VPN access on a Linux VPS: WireGuard (manual, full control) and Tailscale (managed, batteries included). Both use the WireGuard protocol underneath. The difference is who manages the keys, routing, and coordination.
Pick the section that matches your situation, or read all three parts to make an informed choice.
Prerequisites:
- A VPS running Ubuntu 24.04 or Debian 12 (works on a Virtua Cloud VPS)
- SSH access with a non-root sudo user (SSH Hardening on a Linux VPS: Complete sshd_config Security Guide)
- A firewall configured and active (How to Set Up a Linux VPS Firewall with UFW and nftables)
- A local machine (Linux, macOS, or Windows) as the VPN client
Why should you VPN into your Linux VPS?
Every service you expose on a public IP is a target. SSH brute-force bots find new servers within minutes. Database ports, admin panels, and API endpoints get scanned constantly. A VPN lets you keep management interfaces off the public internet entirely.
The principle is straightforward: bind sensitive services to the VPN interface address instead of 0.0.0.0. Only devices holding valid VPN keys can reach them. The public internet never sees these ports exist.
Common use cases:
- Admin access without public SSH. Bind sshd to the WireGuard IP. No more port 22 exposed to the world. Fail2ban becomes optional when there is nothing to brute-force.
- Tunneling to private AI inference APIs. Running Ollama or a private LLM endpoint on your VPS? Put it behind the VPN. No API gateway needed, no token auth to manage for internal access.
- Multi-cloud mesh networking. Connect VPS nodes across providers (Virtua, Hetzner, OVH) into a private network. Services communicate over encrypted tunnels without public endpoints.
- Encrypting traffic on untrusted networks. Route all traffic through the VPS when working from coffee shops or hotel WiFi. Your ISP and the local network operator see only encrypted WireGuard packets.
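The first use case above (admin access without public SSH) comes down to a single directive once the tunnel from the next section is up. A sketch, assuming the server's WireGuard address is 10.66.66.1 and using a hypothetical drop-in file name:

```
# /etc/ssh/sshd_config.d/wireguard-only.conf (hypothetical drop-in path)
# Bind sshd to the WireGuard address only; port 22 vanishes from the public IP.
ListenAddress 10.66.66.1
```

Apply it with sudo systemctl restart ssh, but only while you are connected over the VPN: if the tunnel is down when sshd stops listening on the public IP, you are locked out.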
The threat model matters. If you only access your VPS from a fixed office IP, firewall rules may suffice. But if you work from multiple locations, use multiple devices, or manage a team with different access levels, a VPN is the cleaner solution.
How do you set up WireGuard on Ubuntu 24.04 or Debian 12?
WireGuard is a VPN protocol built into the Linux kernel since version 5.6. It uses Curve25519 for key exchange, ChaCha20 for encryption, and has roughly 4,000 lines of code (compared to OpenVPN's 100,000+). Install it with apt, generate keypairs, write a config file, and start the tunnel.
Install WireGuard on the server:
sudo apt update && sudo apt install wireguard wireguard-tools -y
The wireguard package provides the kernel module. The wireguard-tools package provides wg and wg-quick userspace utilities.
How do you generate WireGuard keys and configure the server?
WireGuard uses asymmetric keypairs (one per peer) plus an optional preshared key for post-quantum defense. Generate a server keypair, a client keypair, and a preshared key shared between them. The preshared key adds a layer of symmetric encryption on top of WireGuard's Curve25519 exchange. If a future quantum computer breaks the asymmetric key exchange, the symmetric preshared key still protects the tunnel.
On the server, generate keys with a restrictive umask so files are created with 600 permissions:
umask 077
wg genkey | sudo tee /etc/wireguard/server_private.key | wg pubkey | sudo tee /etc/wireguard/server_public.key
wg genpsk | sudo tee /etc/wireguard/psk.key
ls -la /etc/wireguard/
-rw------- 1 root root 45 Mar 19 10:01 psk.key
-rw------- 1 root root 45 Mar 19 10:01 server_private.key
-rw------- 1 root root 45 Mar 19 10:01 server_public.key
All three files are 600 (owner read/write only). The private key and preshared key must never leave the server.
On your local machine (the client), generate its own keypair:
umask 077
wg genkey | tee client_private.key | wg pubkey > client_public.key
You need the client's public key on the server, and the server's public key on the client. The preshared key goes on both sides. Never transfer private keys.
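A transposed character during copy-paste produces a plausible-looking but wrong key, and the only symptom is a handshake that never completes. A hypothetical sanity-check helper (WireGuard keys are 32 bytes, which base64-encode to exactly 44 characters ending in =):

```shell
#!/bin/sh
# check_wg_key: succeed if the argument is shaped like a WireGuard key:
# 44 base64 characters, ending in '=' (32 bytes -> 43 chars + 1 padding char).
check_wg_key() {
  case "$1" in
    *[!A-Za-z0-9+/=]*) return 1 ;;  # contains a character outside base64
  esac
  [ "${#1}" -eq 44 ] && [ "${1%=}" != "$1" ]
}

check_wg_key "aB3dEfGhIjKlMnOpQrStUvWxYz1234567890abcdefg=" && echo valid
```

This only checks the shape, not whether the key matches the peer; the real test is still a successful handshake in wg show.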
Enable IP forwarding
The server needs to route traffic for VPN clients. Enable IPv4 and IPv6 forwarding:
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-wireguard.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-wireguard.conf
sudo sysctl -p /etc/sysctl.d/99-wireguard.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
Using a file in /etc/sysctl.d/ is preferred over editing /etc/sysctl.conf directly. It survives package updates and makes it clear which settings belong to which service.
Server configuration file
Identify your server's public-facing network interface:
ip route show default
default via 203.0.113.1 dev eth0 proto static metric 100
In this example, the interface is eth0. Yours might be ens3, ens18, or something else. Use whatever appears after dev.
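If you script your setup, the name after dev can be extracted rather than read by eye. A sketch run against the example line above; in practice you would pipe ip route show default into the same awk:

```shell
# Extract the interface name that follows "dev" in a default-route line.
route_line='default via 203.0.113.1 dev eth0 proto static metric 100'
wan_if=$(printf '%s\n' "$route_line" |
  awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1)}')
echo "$wan_if"   # eth0
```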
Create the server config:
sudo nano /etc/wireguard/wg0.conf
[Interface]
Address = 10.66.66.1/24, fd42:42:42::1/64
ListenPort = 51820
PrivateKey = <contents of /etc/wireguard/server_private.key>
PostUp = nft add table ip wireguard; nft add chain ip wireguard forward { type filter hook forward priority 0 \; policy accept \; }; nft add rule ip wireguard forward iifname "wg0" accept; nft add table ip nat; nft add chain ip nat postrouting { type nat hook postrouting priority 100 \; }; nft add rule ip nat postrouting oifname "eth0" masquerade
PostDown = nft delete table ip wireguard; nft delete table ip nat
[Peer]
PublicKey = <contents of client_public.key>
PresharedKey = <contents of /etc/wireguard/psk.key>
AllowedIPs = 10.66.66.2/32, fd42:42:42::2/128
Replace eth0 in the PostUp/PostDown lines with your actual interface name.
What each section does:
- Address: The VPN IP assigned to this server. The /24 and /64 define the VPN subnet size.
- ListenPort: The UDP port WireGuard listens on. 51820 is conventional.
- PostUp/PostDown: nftables rules that enable NAT masquerading. When a VPN client sends traffic to the internet, the server rewrites the source IP to its own public IP. This is what makes full-tunnel VPN work.
- AllowedIPs in the [Peer] section: Which VPN IPs this client can use. /32 means exactly one IP. This prevents clients from spoofing other VPN addresses.
If you use ufw instead of raw nftables, replace PostUp/PostDown with:
PostUp = ufw route allow in on wg0 out on eth0; iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE
PostDown = ufw route delete allow in on wg0 out on eth0; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
Lock down the config file. It contains your private key:
sudo chmod 600 /etc/wireguard/wg0.conf
Open the firewall and start the tunnel
Allow WireGuard's UDP port through the firewall:
sudo ufw allow 51820/udp
Or with nftables directly:
sudo nft add rule inet filter input udp dport 51820 accept
Start the tunnel. The enable directive makes it survive reboots. The --now flag starts it immediately:
sudo systemctl enable --now wg-quick@wg0
sudo systemctl status wg-quick@wg0
● wg-quick@wg0.service - WireGuard via wg-quick(8) for wg0
Loaded: loaded (/usr/lib/systemd/system/wg-quick@.service; enabled; preset: enabled)
Active: active (exited) since ...
The status shows active (exited) because wg-quick sets up the interface and exits. The kernel module handles the tunnel from that point. Check the running tunnel:
sudo wg show
interface: wg0
public key: aB3dEfGhIjKlMnOpQrStUvWxYz1234567890abc=
private key: (hidden)
listening port: 51820
peer: xY9zAbCdEfGhIjKlMnOpQrStUvWxYz1234567890=
preshared key: (hidden)
allowed ips: 10.66.66.2/32, fd42:42:42::2/128
The (hidden) output means WireGuard is protecting sensitive key material. If you see latest handshake for a peer, that peer has connected successfully.
Client configuration
On your local machine, create the WireGuard config:
sudo nano /etc/wireguard/wg0.conf
[Interface]
Address = 10.66.66.2/24, fd42:42:42::2/64
PrivateKey = <contents of client_private.key>
DNS = 10.66.66.1
[Peer]
PublicKey = <contents of server_public.key>
PresharedKey = <contents of psk.key>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = YOUR_SERVER_PUBLIC_IP:51820
PersistentKeepalive = 25
Key settings explained:
- AllowedIPs = 0.0.0.0/0, ::/0 routes all traffic through the VPN (full tunnel). For split tunneling where only VPN subnet traffic goes through the tunnel, use AllowedIPs = 10.66.66.0/24, fd42:42:42::/64 instead.
- Endpoint is the server's public IP and port. The client needs this to initiate the connection. The server does not need the client's endpoint because it learns it from incoming packets.
- PersistentKeepalive = 25 sends a keepalive packet every 25 seconds. This keeps NAT mappings alive so the server can reach the client. Without it, the server loses the return path once the NAT mapping expires (typically after 30-120 seconds of inactivity).
- DNS = 10.66.66.1 points DNS queries to the server's VPN address. This prevents DNS leaks (explained in the next section).
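For reference, a split-tunnel version of the client config only changes AllowedIPs, and the DNS line can be dropped since non-VPN lookups no longer need to traverse the tunnel:

```
[Interface]
Address = 10.66.66.2/24, fd42:42:42::2/64
PrivateKey = <contents of client_private.key>

[Peer]
PublicKey = <contents of server_public.key>
PresharedKey = <contents of psk.key>
# Route only the VPN subnets; everything else uses the normal default route
AllowedIPs = 10.66.66.0/24, fd42:42:42::/64
Endpoint = YOUR_SERVER_PUBLIC_IP:51820
PersistentKeepalive = 25
```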
On macOS and Windows, use the WireGuard app. Import the config file or paste it in.
Connect from the client:
sudo wg-quick up wg0
ping 10.66.66.1
PING 10.66.66.1 (10.66.66.1) 56(84) bytes of data.
64 bytes from 10.66.66.1: icmp_seq=1 ttl=64 time=4.23 ms
64 bytes from 10.66.66.1: icmp_seq=2 ttl=64 time=3.98 ms
From your local machine, confirm your public IP changed (if using full tunnel):
curl -s https://ifconfig.me
This should return the VPS's public IP, not your home IP.
Adding more clients
For each additional client, generate a new keypair and a new preshared key (one psk per peer pair). Add a [Peer] block to the server's wg0.conf:
[Peer]
PublicKey = <new client public key>
PresharedKey = <new preshared key>
AllowedIPs = 10.66.66.3/32, fd42:42:42::3/128
Increment the IP address for each client. After editing, reload without dropping existing connections:
sudo wg syncconf wg0 <(sudo wg-quick strip wg0)
This applies the new peer config without restarting the interface. Existing connections stay up.
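The per-client bookkeeping can be scripted. A hypothetical helper that prints the [Peer] stanza for client number N, with the key material deliberately left as placeholders to fill in from wg genkey and wg genpsk:

```shell
# Print a [Peer] block for client number N (the server is .1, so clients start at 2).
# Key fields are placeholders: generate real values with wg genkey / wg genpsk.
peer_block() {
  n="$1"
  printf '[Peer]\n'
  printf 'PublicKey = <client %s public key>\n' "$n"
  printf 'PresharedKey = <client %s preshared key>\n' "$n"
  printf 'AllowedIPs = 10.66.66.%s/32, fd42:42:42::%s/128\n' "$n" "$n"
}

peer_block 3
```

Append the output to /etc/wireguard/wg0.conf, paste in the real keys, then run the wg syncconf command above.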
MTU tuning
WireGuard adds per-packet overhead: 8 bytes of UDP header and 32 bytes of WireGuard framing, plus the outer IP header, which is 20 bytes for IPv4 (60 bytes total) or 40 bytes for IPv6 (80 bytes total). If your VPS has a standard 1500-byte MTU, set the WireGuard MTU to 1420 in the [Interface] section, which leaves room for either case:
MTU = 1420
Set this on both server and client. Mismatched MTUs cause packet fragmentation and throughput drops that are hard to diagnose. If you run WireGuard over a link that already has reduced MTU (PPPoE, VXLAN), subtract accordingly.
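The arithmetic behind these numbers, as a quick sanity check:

```shell
# WireGuard MTU = link MTU - outer IP header - UDP header (8) - WireGuard header (32)
link_mtu=1500
ipv4_mtu=$((link_mtu - 20 - 8 - 32))   # outer IPv4 header is 20 bytes
ipv6_mtu=$((link_mtu - 40 - 8 - 32))   # outer IPv6 header is 40 bytes
echo "$ipv4_mtu $ipv6_mtu"   # 1440 1420
```

1420 is the conservative default because it fits even when the outer packet travels over IPv6.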
How do you prevent DNS leaks with WireGuard?
DNS leaks happen when your system sends DNS queries outside the VPN tunnel, exposing which domains you visit to your ISP or local network. This defeats the privacy benefit of routing traffic through the VPN. The fix: run a local DNS resolver on the VPN server and point the client's DNS to it through the tunnel.
Install Unbound on the server:
sudo apt install unbound -y
Create a config file that makes Unbound listen on the WireGuard interface:
sudo nano /etc/unbound/unbound.conf.d/wireguard.conf
server:
interface: 10.66.66.1
interface: fd42:42:42::1
interface: 127.0.0.1
access-control: 10.66.66.0/24 allow
access-control: fd42:42:42::/64 allow
access-control: 127.0.0.0/8 allow
do-ip6: yes
hide-identity: yes
hide-version: yes
harden-glue: yes
harden-dnssec-stripped: yes
use-caps-for-id: yes
prefetch: yes
The hide-identity and hide-version directives stop Unbound from answering id.server/hostname.bind and version.server/version.bind queries, which would otherwise disclose the server's hostname and software version. Version disclosure helps attackers target known vulnerabilities. The harden-glue and harden-dnssec-stripped options reject out-of-zone glue records and require DNSSEC data where a trust anchor exists, which hardens the cache against poisoning. The use-caps-for-id option adds 0x20 encoding (randomized query-name capitalization) to DNS queries, a lightweight defense against spoofed responses.
On Ubuntu 24.04, systemd-resolved listens on port 53 by default and conflicts with Unbound. Disable it:
sudo systemctl disable --now systemd-resolved
sudo rm /etc/resolv.conf
echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf
Now start Unbound:
sudo systemctl enable --now unbound
sudo systemctl status unbound
● unbound.service - Unbound DNS server
Loaded: loaded (/usr/lib/systemd/system/unbound.service; enabled; preset: enabled)
Active: active (running) since ...
Test that Unbound resolves from the WireGuard interface:
dig @10.66.66.1 example.com +short
You should get one or more IP addresses back. The exact IPs depend on the domain's current DNS records.
The client config already sets DNS = 10.66.66.1. When the WireGuard tunnel is active, all DNS queries go through the encrypted tunnel to Unbound. No queries leak to your ISP.
To confirm from the client side, connect to the VPN and check:
resolvectl status wg0
The DNS server should show 10.66.66.1 only. You can also use dnsleaktest.com.
| Test | Before VPN | After VPN (with Unbound) |
|---|---|---|
| DNS server shown | ISP resolver (e.g. 192.168.1.1) | 10.66.66.1 (VPS Unbound) |
| IP seen by test site | Home/office IP | VPS public IP |
| DNS queries visible to ISP | Yes | No |
A deeper look at DNS security, including DNSSEC validation and DNS-over-HTTPS, deserves its own guide and is beyond the scope of this one.
Kill switch: what happens when the tunnel drops?
If the WireGuard tunnel drops, traffic flows unencrypted over your default route. Your real IP and DNS queries are exposed. A kill switch prevents this by blocking all non-VPN traffic at the firewall level.
On the client, add nftables rules that only allow traffic through the WireGuard interface and the encrypted connection to the server endpoint:
sudo nft add table inet killswitch
sudo nft add chain inet killswitch output { type filter hook output priority 0 \; policy drop \; }
sudo nft add rule inet killswitch output oifname "wg0" accept
sudo nft add rule inet killswitch output ip daddr YOUR_SERVER_PUBLIC_IP udp dport 51820 accept
sudo nft add rule inet killswitch output oifname "lo" accept
With this in place, if wg0 goes down, all outbound traffic is dropped. No DNS leaks, no cleartext packets. The only allowed traffic is the encrypted WireGuard handshake to the server.
Remove the kill switch to restore normal routing:
sudo nft delete table inet killswitch
For a persistent kill switch that activates automatically, save these rules to a file and load them with a systemd service that starts before wg-quick@wg0.
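A sketch of that persistent setup, using hypothetical file paths: the rules go in an nftables file (substitute YOUR_SERVER_PUBLIC_IP as in the commands above):

```
# /etc/wireguard/killswitch.nft (hypothetical path)
table inet killswitch {
    chain output {
        type filter hook output priority 0; policy drop;
        oifname "wg0" accept
        ip daddr YOUR_SERVER_PUBLIC_IP udp dport 51820 accept
        oifname "lo" accept
        # On some networks you may also need: udp dport { 67, 68 } accept (DHCP)
    }
}
```

And a oneshot unit loads them before the tunnel comes up:

```
# /etc/systemd/system/wg-killswitch.service (hypothetical)
[Unit]
Description=WireGuard kill switch
Before=wg-quick@wg0.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/nft -f /etc/wireguard/killswitch.nft
ExecStop=/usr/sbin/nft delete table inet killswitch

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now wg-killswitch. Because of Before=wg-quick@wg0.service, the drop policy is in place before any packet can take the default route.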
How do you set up Tailscale on a Linux VPS?
Tailscale is a mesh VPN service built on WireGuard. It handles key distribution, NAT traversal, and peer discovery through a coordination server. You install the client, authenticate with your identity provider, and your devices can reach each other. No manual key exchange, no port forwarding, no firewall rules to open.
The trade-off: Tailscale's coordination server is a third-party service. It sees your device metadata (IPs, hostnames, which devices are online) but never your traffic, which flows peer-to-peer over WireGuard.
How do you install Tailscale on Ubuntu 24.04 or Debian 12?
Add the official Tailscale repository and install the package. This avoids curl | sh and lets you verify updates through apt.
For Ubuntu 24.04 (Noble):
sudo mkdir -p --mode=0755 /usr/share/keyrings
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/noble.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/noble.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
sudo apt-get update && sudo apt-get install tailscale -y
For Debian 12 (Bookworm):
sudo mkdir -p --mode=0755 /usr/share/keyrings
curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
sudo apt-get update && sudo apt-get install tailscale -y
Start Tailscale and authenticate:
sudo tailscale up
This prints an authentication URL. Open it in your browser and sign in with your identity provider (Google, Microsoft, GitHub, etc.). Once authenticated, the VPS joins your tailnet.
sudo systemctl status tailscaled
● tailscaled.service - Tailscale node agent
Loaded: loaded (/usr/lib/systemd/system/tailscaled.service; enabled; preset: enabled)
Active: active (running) since ...
Check the assigned Tailscale IP:
tailscale ip -4
100.64.0.1
Every device on your tailnet gets a stable 100.x.x.x (CGNAT range) address. This IP persists across reboots and reconnections. Use it to reach your VPS from any other device on the same tailnet.
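The CGNAT range is 100.64.0.0/10: first octet 100, second octet 64 through 127. A small sketch to recognize such addresses, useful when filtering logs or firewall rules (this checks the range shape only, not tailnet membership):

```shell
# Succeed if $1 falls inside 100.64.0.0/10, the CGNAT range Tailscale draws from.
is_cgnat() {
  o1=${1%%.*}                # first octet
  rest=${1#*.}; o2=${rest%%.*}  # second octet
  [ "$o1" -eq 100 ] && [ "$o2" -ge 64 ] && [ "$o2" -le 127 ]
}

is_cgnat 100.64.0.1 && echo yes    # yes
is_cgnat 100.200.0.1 || echo no    # no
```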
Check connectivity to your other devices:
tailscale status
100.64.0.1 vps-frankfurt youruser@ linux -
100.64.0.2 laptop youruser@ macOS active; direct 203.0.113.50:41641
The direct designation means traffic flows peer-to-peer over WireGuard. If it says relay, traffic is going through a DERP relay server, which adds latency. DERP relaying happens when both peers are behind restrictive NATs that prevent direct connections.
Disable key expiry for servers
Tailscale keys expire after 180 days by default. When keys expire, the device drops offline until someone re-authenticates it. For a VPS that must stay connected, you have two options.
Option 1: Disable expiry in the admin console. Go to the Machines page, find your VPS, click the menu, and select "Disable key expiry."
Option 2 (recommended): Use a tagged auth key. Tagged devices have key expiry automatically disabled. Generate a reusable, tagged auth key in the admin console under Settings > Keys, then join with:
sudo tailscale up --auth-key=tskey-auth-XXXXX --advertise-tags=tag:server
Tags also work with ACLs (covered below), making them the better choice for server infrastructure.
How do you configure a Tailscale exit node on your VPS?
An exit node routes all internet traffic from your devices through the VPS. Your outbound traffic appears to originate from the VPS's IP address. This works like a traditional VPN: encrypt everything, exit from your VPS location.
Enable IP forwarding on the VPS:
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
Advertise the VPS as an exit node:
sudo tailscale set --advertise-exit-node
Approve the exit node in the admin console. Find the VPS, click the menu, go to "Edit route settings," and enable "Use as exit node."
From your client device, start using the exit node:
sudo tailscale set --exit-node=<vps-tailscale-ip>
To keep local LAN access (printers, NAS, other local devices) while routing internet traffic through the exit node:
sudo tailscale set --exit-node=<vps-tailscale-ip> --exit-node-allow-lan-access=true
To stop routing through the exit node:
sudo tailscale set --exit-node=
Confirm the exit node is working by checking your public IP from the client:
curl -s https://ifconfig.me
This should return the VPS's public IP.
Subnet routing
Subnet routing exposes a private network behind the VPS to your tailnet. This is useful when your VPS can reach a private database subnet or internal services that do not run Tailscale themselves.
sudo tailscale set --advertise-routes=192.168.1.0/24
Approve the route in the admin console the same way you approve exit nodes. Once approved, all tailnet devices can reach 192.168.1.0/24 through the VPS as a gateway, without installing Tailscale on every host in that subnet.
You can advertise multiple routes by comma-separating them:
sudo tailscale set --advertise-routes=192.168.1.0/24,10.0.0.0/16
For heavily used subnet routers with many concurrent flows, Tailscale recommends kernel mode (Linux only) over userspace mode. On Linux, kernel mode is the default unless tailscaled was started with --tun=userspace-networking; check the FLAGS line in /etc/default/tailscaled.
How do you set up Tailscale ACLs for VPS access?
By default, all devices on a tailnet can reach each other on all ports. For production, restrict this with ACLs in the tailnet policy file.
A minimal policy that limits VPS access to an admin group:
{
"groups": {
"group:admins": ["alice@example.com", "bob@example.com"]
},
"tagOwners": {
"tag:server": ["group:admins"]
},
"acls": [
{
"action": "accept",
"src": ["group:admins"],
"dst": ["tag:server:*"]
}
]
}
This says: only group:admins members can access devices tagged tag:server. All other connections are denied by default. Edit this in the admin console under Access Controls.
Tag the VPS when joining the tailnet:
sudo tailscale up --advertise-tags=tag:server
A more granular policy might restrict by port:
{
"groups": {
"group:admins": ["alice@example.com"],
"group:developers": ["bob@example.com", "charlie@example.com"]
},
"tagOwners": {
"tag:server": ["group:admins"]
},
"acls": [
{
"action": "accept",
"src": ["group:admins"],
"dst": ["tag:server:*"]
},
{
"action": "accept",
"src": ["group:developers"],
"dst": ["tag:server:22,80,443"]
}
]
}
Developers get SSH (22) and web (80, 443) access. Admins get full access. Nobody else can reach the server. Tailscale enforces these rules at the client level, so traffic is blocked before it even reaches the server.
MagicDNS
Tailscale includes MagicDNS, which gives every device a hostname resolvable within the tailnet. Instead of remembering 100.64.0.1, you SSH to vps-frankfurt. Enable it in the admin console under DNS settings. No Unbound or manual DNS configuration needed.
What is Headscale and when should you use it?
Headscale is an open-source, self-hosted implementation of the Tailscale coordination server. It uses standard Tailscale clients but runs the control plane on your own infrastructure. No device limits, no telemetry, no dependency on Tailscale's SaaS. Compatible with all official Tailscale clients (Linux, macOS, Windows, iOS, Android).
The architecture:
┌─────────────┐        ┌──────────────────┐        ┌─────────────┐
│  Client A   │───────▶│    Headscale     │◀───────│  Client B   │
│ (tailscale) │        │  (your server)   │        │ (tailscale) │
└──────┬──────┘        └──────────────────┘        └──────┬──────┘
       │                                                  │
       └───────────── WireGuard tunnel ───────────────────┘
                     (direct, peer-to-peer)
Headscale handles key exchange, device registration, and ACL enforcement. The actual VPN traffic flows directly between peers using WireGuard. Headscale never sees your data traffic, only coordination metadata.
What Headscale supports today: device registration (CLI and OIDC), ACLs, subnet routers, exit nodes, MagicDNS, pre-auth keys, and tagging. It covers the core Tailscale feature set.
What it does not support: Tailscale Funnel, Serve, network flow logs, and some beta features. The admin interface is CLI-only. Community-built web UIs exist (headscale-ui) but do not cover every feature.
When Headscale makes sense:
- Regulations prohibit third-party coordination servers. GDPR data processing agreements may be required for Tailscale's SaaS. Headscale eliminates that dependency.
- You need more devices than Tailscale's free tier allows.
- You want full audit control over device registration and key management.
- You are building infrastructure where depending on an external SaaS is a single point of failure.
When it does not:
- You need Tailscale's global DERP relay network for reliable NAT traversal. Headscale can use DERP, but you must run your own relays or accept using Tailscale's public ones.
- You want a polished admin UI. Headscale is CLI-first.
- You have a small team and zero appetite for maintaining another service.
A full Headscale setup tutorial is planned for a future article. For now, see the Headscale documentation.
WireGuard vs Tailscale vs Headscale: which one fits your use case?
Use WireGuard if you need full control, minimum latency, air-gapped networks, or regulatory requirements that prohibit third-party coordination servers. Use Tailscale if you manage multiple devices, need NAT traversal without port forwarding, or want ACLs without manual config. For teams wanting Tailscale features with self-hosted control, consider Headscale.
| Dimension | WireGuard | Tailscale | Headscale |
|---|---|---|---|
| Setup time | 10-15 min per peer | 2 min per device | 30-60 min (server + clients) |
| Key management | Manual (generate, distribute, rotate yourself) | Automatic (coordination server handles it) | Automatic (your coordination server) |
| NAT traversal | None. Requires port forwarding or public IP on at least one side | Built-in (DERP relays + STUN) | Partial (bring your own DERP or use public relays) |
| Peer discovery | Manual config per peer | Automatic mesh | Automatic mesh |
| ACLs | Firewall rules (nftables/iptables) | Policy file in admin console | Policy file on your server |
| Max devices | Unlimited | Free tier has limits (see pricing page) | Unlimited |
| Multi-cloud mesh | Config on every node, N*(N-1)/2 peer entries | Join tailnet, mesh auto-forms | Same as Tailscale, self-hosted control |
| Third-party dependency | None | Tailscale Inc. (coordination only) | None |
| GDPR / compliance | Full control, no third-party data processor | Tailscale processes device metadata | Full control |
| DNS | Manual (Unbound, etc.) | MagicDNS (automatic, per-device hostnames) | MagicDNS (with configuration) |
| Latency | Minimal (kernel-level WireGuard) | Minimal when direct; +20-50ms when DERP relayed | Same as Tailscale |
| Team onboarding | Share config files and keys manually | Send invite link, SSO login | Register via CLI or OIDC |
| Post-quantum defense | PreSharedKey (manual setup) | Not user-configurable | Not user-configurable |
Decision shortcuts
Solo developer, one VPS: WireGuard. Simplest path. Zero dependencies. Lowest possible latency. You generate two keys and write two config files.
Team of 3-15, multiple devices: Tailscale. The coordination server saves hours of key management. ACLs are cleaner than maintaining nftables rules per peer. NAT traversal works without network changes.
Regulated environment or self-hosting mandate: Headscale if you want the Tailscale UX without the third-party dependency. Raw WireGuard if you want zero moving parts and your team can manage configs.
Multi-cloud mesh (5+ nodes across providers): Tailscale or Headscale. With WireGuard, a full mesh of 10 nodes means 45 peer pairs, each needing a [Peer] entry on both ends. At 20 nodes, that is 190 pairs. The config management overhead grows quadratically.
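These counts come from the pair formula n(n-1)/2: every pair of nodes is one tunnel, and each tunnel appears as a [Peer] entry on both ends:

```shell
# Number of peer pairs (tunnels) in a full WireGuard mesh of n nodes.
pairs() { echo $(( $1 * ($1 - 1) / 2 )); }

pairs 10   # 45
pairs 20   # 190
```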
AI inference endpoint access: If you tunnel to a GPU VPS from your laptop, either works. Tailscale is faster to set up. WireGuard adds no measurable overhead for latency-sensitive inference calls. For multi-GPU setups across providers, Tailscale's mesh is worth the trade-off.
Troubleshooting
WireGuard tunnel up but no traffic flows:
Check IP forwarding:
sysctl net.ipv4.ip_forward
Should return net.ipv4.ip_forward = 1. Check your masquerade rules are loaded:
sudo nft list ruleset | grep masquerade
If empty, the PostUp rules did not execute. Restart with sudo systemctl restart wg-quick@wg0 and check journalctl -u wg-quick@wg0 for errors.
WireGuard handshake never completes:
sudo wg show
If latest handshake never appears for a peer, UDP port 51820 is blocked somewhere. Confirm the server is listening:
sudo ss -ulnp | grep 51820
UNCONN 0 0 0.0.0.0:51820 0.0.0.0:*
WireGuard uses the kernel module directly, so ss may not show a process name for the socket.
Then test from the client side. If your client is behind a corporate firewall, UDP 51820 may be blocked. Some networks only allow TCP 443.
Tailscale shows "offline" in admin console:
sudo systemctl status tailscaled
If running but offline, re-authenticate:
sudo tailscale up --force-reauth
If tailscaled is not running, check for port conflicts with other VPN software.
DNS still leaking after Unbound setup:
Check Unbound is listening on the WireGuard IP:
ss -ulnp | grep :53
Look for 10.66.66.1:53 in the output. If Unbound only binds to 127.0.0.1, revisit the interface: lines in /etc/unbound/unbound.conf.d/wireguard.conf.
On Ubuntu, also check that systemd-resolved is actually stopped:
sudo systemctl is-active systemd-resolved
If it returns active, it is competing with Unbound for port 53.
Service logs:
journalctl -u wg-quick@wg0 -f
journalctl -u tailscaled -f
journalctl -u unbound -f
Related guides:
- Linux VPS Security: Threats, Layers, and Hardening Guide
- SSH Hardening on a Linux VPS: Complete sshd_config Security Guide
- How to Set Up a Linux VPS Firewall with UFW and nftables
Copyright 2026 Virtua.Cloud. All rights reserved. This content is original work by the Virtua.Cloud team. Reproduction, republication, or redistribution without written permission is prohibited.