Anycast DNS with BIRD2 and BGP: Multi-Location Setup

13 min read · Matthieu · High Availability · nftables · DNS · Anycast · BIND9 · BIRD2 · BGP

Deploy anycast DNS across multiple VPS locations using BIRD2 and BIND9. Covers BGP route announcement, zone synchronization with TSIG, health-check failover, and nftables hardening.

Anycast DNS assigns the same IP address to DNS servers in multiple locations. Each server announces the IP via BGP. Routers direct queries to the nearest server based on network topology. If one server fails and withdraws its BGP route, queries automatically reach the next closest server. This gives you low-latency DNS resolution and automatic failover without client-side changes.

This guide deploys a production anycast DNS setup across two Virtua VPS nodes using BIRD2 as the routing daemon and BIND9 as the authoritative DNS server. Security is woven into each step, not added at the end.

What you will build:

  • Two VPS nodes in different locations, each announcing the same /24 prefix via BGP
  • BIND9 authoritative DNS listening on the anycast IP
  • Zone synchronization between nodes using TSIG-authenticated transfers
  • A health-check script that withdraws BGP routes when DNS fails
  • nftables firewall rules locking down each node

What are the prerequisites for anycast DNS?

Before starting, you need these resources already provisioned. If you have not set up BGP on a VPS before, read BIRD2 BGP Configuration on a Linux VPS first. If you need to create ROA records, see RPKI ROA Setup for BGP: Create ROAs, Validate Routes in BIRD2 and FRR.

Requirement Details
ASN Your own ASN (e.g. AS212345), registered with a RIR
IPv4 prefix At least a /24 (e.g. 198.51.100.0/24). Prefixes longer than /24 are filtered by most transit providers.
IPv6 prefix At least a /48 (e.g. 2001:db8:abcd::/48)
ROA records Created in your RIR portal for both prefixes, authorizing your ASN
VPS nodes 2+ Virtua VPS in different locations, each with a BGP session to the upstream router
Upstream BGP info Neighbor IP and ASN for each node, provided by Virtua
Debian 12 This guide uses Debian Bookworm. Adapt package names for other distributions.

Throughout this guide, we use these example values. Replace them with your own:

Variable Value
Your ASN 212345
Anycast IPv4 198.51.100.1
Anycast prefix 198.51.100.0/24
Anycast IPv6 2001:db8:abcd::1
Anycast IPv6 prefix 2001:db8:abcd::/48
Node A (Frankfurt) primary IP 203.0.113.10
Node A upstream neighbor 169.254.169.1 AS 64496
Node B (Amsterdam) primary IP 203.0.113.20
Node B upstream neighbor 169.254.169.1 AS 64496
DNS zone example.com

What does the network topology look like?

                    ┌──────────────┐
                    │   Internet   │
                    └──────┬───────┘
                           │
              ┌────────────┴────────────┐
              │                         │
     ┌────────┴────────┐       ┌────────┴────────┐
     │  Virtua Router  │       │  Virtua Router  │
     │  Frankfurt      │       │  Amsterdam      │
     │  AS 64496       │       │  AS 64496       │
     └────────┬────────┘       └────────┬────────┘
              │ eBGP                    │ eBGP
     ┌────────┴──────────┐     ┌────────┴──────────┐
     │  Node A (VPS)     │     │  Node B (VPS)     │
     │  AS 212345        │     │  AS 212345        │
     │  BIRD2 + BIND9    │     │  BIRD2 + BIND9    │
     │  anycast0:        │     │  anycast0:        │
     │  198.51.100.1/24  │     │  198.51.100.1/24  │
     │  2001:db8:abcd::1 │     │  2001:db8:abcd::1 │
     └─────────┬─────────┘     └─────────┬─────────┘
               │                         │
               │  AXFR + TSIG (via       │
               │  primary IPs)           │
               └─────────────────────────┘

Both nodes announce 198.51.100.0/24 and 2001:db8:abcd::/48 via BGP. Clients reach whichever node is topologically closest. Zone transfers happen over the primary (unicast) IPs, not the anycast address.

How do I create the anycast loopback interface?

Each node needs the anycast IP assigned to a local interface. A dummy interface works best because it is always up and has no physical dependency. Configure it with systemd-networkd so it persists across reboots.

Create the netdev file:

cat > /etc/systemd/network/10-anycast.netdev << 'EOF'
[NetDev]
Name=anycast0
Kind=dummy
EOF

Create the network file to assign the anycast IPs. Use the full prefix length (/24 for IPv4, /48 for IPv6) so the direct protocol in BIRD2 sees connected routes matching the prefixes you want to announce:

cat > /etc/systemd/network/10-anycast.network << 'EOF'
[Match]
Name=anycast0

[Network]
Address=198.51.100.1/24
Address=2001:db8:abcd::1/48
EOF

Enable and start systemd-networkd. If your VPS uses ifupdown for the primary interface (check /etc/network/interfaces), avoid systemctl restart systemd-networkd: restarting it can disrupt the primary network. Starting the service fresh is safe, because systemd-networkd only manages interfaces that match a config file in /etc/systemd/network/:

systemctl enable --now systemd-networkd
ip addr show anycast0

Expected output:

3: anycast0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether a2:b4:c6:d8:e0:f2 brd ff:ff:ff:ff:ff:ff
    inet 198.51.100.1/24 brd 198.51.100.255 scope global anycast0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:abcd::1/48 scope global
       valid_lft forever preferred_lft forever

The interface state shows UNKNOWN, which is normal for dummy interfaces. It means the link layer is always up. Both the IPv4 and IPv6 addresses should appear.

Repeat this on every anycast node. The configuration is identical.
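As a quick sanity check before moving on to BIRD2, you can assert that both anycast addresses are bound. A minimal sketch (the check_anycast_addrs helper is hypothetical; it takes captured ip addr show output as an argument so the logic can be tested without touching a live interface):

```shell
# Hypothetical helper: given the output of `ip addr show anycast0`,
# succeed only if both anycast addresses are present.
check_anycast_addrs() {
    echo "$1" | grep -q "inet 198.51.100.1/24" &&
    echo "$1" | grep -q "inet6 2001:db8:abcd::1/48"
}

# Usage on a node:
# check_anycast_addrs "$(ip addr show anycast0)" || echo "anycast0 misconfigured"
```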

What BIRD2 configuration is needed for anycast route announcement?

BIRD2 announces the anycast prefix to your upstream router via eBGP. The key design: BIRD2 only exports routes learned from the anycast0 interface via the direct protocol. When the interface goes down (or the health-check script takes it down), BIRD2 automatically withdraws the route.

Install BIRD2:

apt update && apt install -y bird2

Debian 12 ships BIRD 2.0.12. For newer features (BFD improvements, filter enhancements), add the upstream BIRD repository.

Node A configuration (/etc/bird/bird.conf)

# /etc/bird/bird.conf -- Node A (Frankfurt)

log syslog all;
router id 203.0.113.10;

# Scan interfaces every 10 seconds
protocol device {
    scan time 10;
}

# Learn routes from the anycast dummy interface
protocol direct anycast {
    ipv4;
    ipv6;
    interface "anycast0";
}

# Define the prefixes we are authorized to announce
define ANYCAST_V4 = [ 198.51.100.0/24 ];
define ANYCAST_V6 = [ 2001:db8:abcd::/48 ];

# Export filter: only announce our anycast prefixes
filter export_anycast_v4 {
    if net ~ ANYCAST_V4 then accept;
    reject;
}

filter export_anycast_v6 {
    if net ~ ANYCAST_V6 then accept;
    reject;
}

# IPv4 BGP session to upstream
protocol bgp upstream4 {
    description "Virtua Frankfurt upstream IPv4";
    local 203.0.113.10 as 212345;
    neighbor 169.254.169.1 as 64496;
    password "your-bgp-md5-secret";
    hold time 90;
    keepalive time 30;
    graceful restart on;

    ipv4 {
        import none;
        export filter export_anycast_v4;
    };
}

# IPv6 BGP session to upstream
protocol bgp upstream6 {
    description "Virtua Frankfurt upstream IPv6";
    local 2001:db8:1::10 as 212345;
    neighbor 2001:db8:1::1 as 64496;
    password "your-bgp-md5-secret";
    hold time 90;
    keepalive time 30;
    graceful restart on;

    ipv6 {
        import none;
        export filter export_anycast_v6;
    };
}

Key points:

  • protocol direct anycast learns routes only from the anycast0 interface. No other interfaces leak into the routing table.
  • Export filters restrict announcements to exactly your /24 and /48. This prevents accidental route leaks.
  • password enables TCP MD5 authentication (RFC 2385) on the BGP session. Get the shared secret from your upstream provider.
  • import none means this node does not accept any routes from the upstream. Anycast nodes only announce; they use the VPS default route for outbound traffic.
  • graceful restart on reduces route flap during BIRD2 restarts.
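If your upstream supports BFD on the session, BIRD2 can tie BGP liveness to it for sub-second link-failure detection. A sketch, assuming the upstream runs BFD on the same link (the intervals are illustrative; confirm support and timer values with your provider):

```
# Hypothetical addition to /etc/bird/bird.conf -- requires upstream BFD support
protocol bfd {
    interface "*" {
        min rx interval 200 ms;
        min tx interval 200 ms;
        multiplier 3;
    };
}

# Then inside each protocol bgp block:
#     bfd on;
```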

Node B configuration

Copy the same file to Node B. Change only:

router id 203.0.113.20;

# In protocol bgp upstream4:
    local 203.0.113.20 as 212345;
    # neighbor IP and AS may differ per location, use the values Virtua provides

Start BIRD2

Enable and start BIRD2:

systemctl enable --now bird

enable makes it survive reboots. --now starts it immediately.

Check status:

systemctl status bird

Check BGP session state:

birdc show protocols

Expected output:

BIRD 2.0.12 ready.
Name       Proto      Table      State  Since         Info
device1    Device     ---        up     2026-03-19
anycast    Direct     ---        up     2026-03-19
upstream4  BGP        ---        up     2026-03-19    Established
upstream6  BGP        ---        up     2026-03-19    Established

Both BGP sessions should show Established. If you see Active or Connect instead, check the neighbor IP, ASN, and password. Look at logs with journalctl -u bird -f.
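The same output can be checked mechanically for monitoring. A sketch (the bgp_all_established helper is hypothetical; it parses captured birdc show protocols output so it can be dry-run):

```shell
# Hypothetical probe: given `birdc show protocols` output, succeed only
# if every BGP protocol line ends in "Established".
bgp_all_established() {
    ! echo "$1" | awk '$2 == "BGP" && $NF != "Established"' | grep -q .
}

# Usage on a node, e.g. from a cron-driven alert:
# bgp_all_established "$(birdc show protocols)" || echo "ALERT: BGP session down"
```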

Check the exported routes:

birdc show route export upstream4

Expected:

BIRD 2.0.12 ready.
Table master4:
198.51.100.0/24      unicast [anycast 2026-03-19] * (240)
	dev anycast0

The route comes from the anycast direct protocol and is exported to upstream4. Repeat on Node B.

How do I configure BIND9 to listen on the anycast IP?

BIND9 runs as an authoritative-only DNS server on each node, listening on the anycast IP. Clients query the anycast IP; BGP ensures they reach the closest node.

Install BIND9:

apt update && apt install -y bind9 bind9-utils

Generate a TSIG key for zone transfers

Zone transfers between nodes must be authenticated. Generate a TSIG key using tsig-keygen:

tsig-keygen -a hmac-sha256 anycast-transfer > /etc/bind/anycast-transfer.key

The output looks like this:

cat /etc/bind/anycast-transfer.key
key "anycast-transfer" {
    algorithm hmac-sha256;
    secret "base64-encoded-secret-here";
};

Copy this exact file to all nodes. The key name and secret must match everywhere.

Set restrictive permissions:

chown root:bind /etc/bind/anycast-transfer.key
chmod 640 /etc/bind/anycast-transfer.key

The file should show root:bind ownership and 640 permissions:

ls -la /etc/bind/anycast-transfer.key
-rw-r----- 1 root bind 113 Mar 19 12:00 /etc/bind/anycast-transfer.key
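To confirm the key really is identical on every node without pasting the secret around, compare a fingerprint instead. A sketch (the tsig_fingerprint helper is hypothetical):

```shell
# Hypothetical helper: hash the secret from a tsig-keygen key file so
# fingerprints can be compared across nodes without exposing the secret.
tsig_fingerprint() {
    sed -n 's/.*secret "\(.*\)";.*/\1/p' "$1" | sha256sum | cut -c1-16
}

# Run on each node; the outputs must match:
# tsig_fingerprint /etc/bind/anycast-transfer.key
```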

Node A configuration (primary)

Edit /etc/bind/named.conf.options:

options {
    directory "/var/cache/bind";

    // Listen only on the anycast IP and localhost
    listen-on { 198.51.100.1; 127.0.0.1; };
    listen-on-v6 { 2001:db8:abcd::1; ::1; };

    // Authoritative only, no recursion
    recursion no;
    allow-recursion { none; };

    // Hide version to avoid targeted exploits
    version "not disclosed";

    // Disable zone transfers by default
    allow-transfer { none; };

    // Rate limiting to mitigate DNS amplification
    rate-limit {
        responses-per-second 10;
        window 5;
    };

    dnssec-validation auto;
};

Include the TSIG key. Edit /etc/bind/named.conf.local:

include "/etc/bind/anycast-transfer.key";

// Allow transfers only to Node B using TSIG
acl "secondaries" {
    key "anycast-transfer";
};

zone "example.com" {
    type primary;
    file "/var/lib/bind/db.example.com";
    allow-transfer { secondaries; };
    also-notify { 203.0.113.20; };
    notify yes;
};

Create the zone file /var/lib/bind/db.example.com:

$TTL 300
@   IN  SOA ns1.example.com. admin.example.com. (
        2026031901  ; Serial (YYYYMMDDNN)
        3600        ; Refresh (1 hour)
        900         ; Retry (15 minutes)
        604800      ; Expire (1 week)
        300         ; Negative cache TTL (5 minutes)
)

@       IN  NS      ns1.example.com.
@       IN  NS      ns2.example.com.

; NS records point to the anycast IP, same IP, different names
ns1     IN  A       198.51.100.1
ns1     IN  AAAA    2001:db8:abcd::1
ns2     IN  A       198.51.100.1
ns2     IN  AAAA    2001:db8:abcd::1

; Your records
@       IN  A       198.51.100.10
@       IN  AAAA    2001:db8:abcd::10
www     IN  CNAME   example.com.
mail    IN  A       198.51.100.25
@       IN  MX  10  mail.example.com.

Set ownership:

chown bind:bind /var/lib/bind/db.example.com
chmod 640 /var/lib/bind/db.example.com

Node B configuration (secondary)

On Node B, /etc/bind/named.conf.options is identical to Node A.

/etc/bind/named.conf.local differs:

include "/etc/bind/anycast-transfer.key";

server 203.0.113.10 {
    keys { "anycast-transfer"; };
};

zone "example.com" {
    type secondary;
    file "/var/lib/bind/db.example.com";
    primaries { 203.0.113.10 key "anycast-transfer"; };
};

The server statement tells BIND9 to authenticate all communication with Node A using the TSIG key. Zone transfers are automatic once this is configured.

Start BIND9

Enable and start on both nodes:

systemctl enable --now named

Check status:

systemctl status named

Test DNS resolution against the anycast IP:

dig @198.51.100.1 example.com A +short

Expected:

198.51.100.10

On Node B, check that the zone transferred:

dig @127.0.0.1 example.com SOA +short
ns1.example.com. admin.example.com. 2026031901 3600 900 604800 300

Check BIND9 logs for the transfer:

journalctl -u named --no-pager | grep "transfer of"

Expected:

transfer of 'example.com/IN' from 203.0.113.10#53: Transfer status: success

The serial number must match on both nodes. If the secondary shows a different serial, the transfer failed. Check TSIG key consistency and firewall rules.

What firewall rules does a BGP anycast DNS node need?

Lock down each node with nftables. Only allow what is needed: SSH for management, BGP from the upstream router, and DNS from everywhere.

Create /etc/nftables.conf:

#!/usr/sbin/nft -f

flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        # Connection tracking
        ct state established,related accept
        ct state invalid drop

        # Loopback
        iif lo accept

        # ICMP and ICMPv6 (needed for path MTU discovery and diagnostics)
        ip protocol icmp accept
        ip6 nexthdr icmpv6 accept

        # SSH (restrict to your management IPs in production)
        tcp dport 22 accept

        # BGP from upstream router only (IPv4)
        ip saddr 169.254.169.1 tcp dport 179 accept
        ip saddr 169.254.169.1 tcp sport 179 accept

        # BGP from upstream router only (IPv6)
        ip6 saddr 2001:db8:1::1 tcp dport 179 accept
        ip6 saddr 2001:db8:1::1 tcp sport 179 accept

        # DNS on anycast IP
        ip daddr 198.51.100.1 udp dport 53 accept
        ip daddr 198.51.100.1 tcp dport 53 accept
        ip6 daddr 2001:db8:abcd::1 udp dport 53 accept
        ip6 daddr 2001:db8:abcd::1 tcp dport 53 accept

        # Zone transfers from the other node (TSIG-authenticated at app layer,
        # but we also restrict at network layer)
        ip saddr 203.0.113.20 tcp dport 53 accept
        ip saddr 203.0.113.10 tcp dport 53 accept

        # Log and drop everything else
        log prefix "nftables-drop: " limit rate 5/minute
        drop
    }

    chain forward {
        type filter hook forward priority 0; policy drop;
    }

    chain output {
        type filter hook output priority 0; policy accept;
    }
}

On Node B, change the upstream ip saddr and ip6 saddr lines to Node B's BGP neighbor addresses. The zone-transfer rules already list both nodes' primary IPs, so they can stay identical on both nodes. In production, restrict SSH to your management CIDR.

Apply and enable:

systemctl enable --now nftables

List the active rules to confirm they loaded:

nft list ruleset

Then test DNS still works from outside the server:

# Run this from your local machine, NOT the server
dig @198.51.100.1 example.com A +short

If DNS stops working after applying firewall rules, check the daddr matches your anycast IP exactly and that the anycast0 interface is up.

How do I health-check DNS and withdraw BGP routes on failure?

A health-check script monitors BIND9 and signals BIRD2 to withdraw the anycast route when DNS is down. The method: bring the anycast0 interface down, which makes BIRD2 withdraw the prefix automatically since the direct protocol no longer sees the route.

Why not just rely on BGP timers?

With default BGP timers (90s hold, 30s keepalive), failover takes up to 90 seconds. A health check detects DNS failure in seconds and triggers withdrawal immediately. The table below shows the impact:

Method Detection time Convergence
BGP hold timer expiry 90 seconds 90-180 seconds
BFD (if available) <1 second 1-3 seconds
Health-check script 5-15 seconds 35-45 seconds (+ BGP propagation)

BFD only detects link/session failure, not application failure. The health-check script catches BIND9 crashes, misconfigurations, and disk-full conditions that BFD cannot see.
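The withdraw/re-announce hysteresis can be sketched as a small state machine, independent of DNS or BGP. A toy model in pure shell, using the same thresholds as the script later in this section:

```shell
# Toy model of the health-check hysteresis: 3 consecutive failures
# withdraw the route, 2 consecutive successes re-announce it.
FAIL_THRESHOLD=3
RECOVER_THRESHOLD=2
state="announced"
fail_count=0
recover_count=0

step() {  # $1 is "ok" or "fail"; updates $state
    if [ "$1" = "ok" ]; then
        fail_count=0
        recover_count=$((recover_count + 1))
        if [ "$state" = "withdrawn" ] && [ "$recover_count" -ge "$RECOVER_THRESHOLD" ]; then
            state="announced"
        fi
    else
        recover_count=0
        fail_count=$((fail_count + 1))
        if [ "$fail_count" -ge "$FAIL_THRESHOLD" ]; then
            state="withdrawn"
        fi
    fi
}

# Transient failures do not withdraw; three in a row do:
step fail; step ok; step fail; step fail   # state is still "announced"
step fail; step fail; step fail            # now "withdrawn"
step ok; step ok                           # back to "announced"
```

The hysteresis prevents route flapping: a single dropped query neither withdraws the prefix nor re-announces it prematurely.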

Create a dedicated user

The health check runs as a dedicated user with minimal privileges:

useradd --system --no-create-home --shell /usr/sbin/nologin anycast-healthcheck

Grant it permission to control the anycast interface. Create a sudoers rule:

cat > /etc/sudoers.d/anycast-healthcheck << 'EOF'
anycast-healthcheck ALL=(root) NOPASSWD: /usr/sbin/ip link set anycast0 up, /usr/sbin/ip link set anycast0 down
EOF
chmod 440 /etc/sudoers.d/anycast-healthcheck

Write the health-check script

Create /usr/local/bin/anycast-healthcheck.sh:

#!/bin/bash
# Anycast DNS health check with hysteresis
# Withdraws BGP route by downing anycast0 when BIND9 is unhealthy

set -euo pipefail

ANYCAST_IF="anycast0"
CHECK_IP="127.0.0.1"
CHECK_DOMAIN="example.com"
CHECK_TYPE="SOA"

# Hysteresis: 3 failures to withdraw, 2 successes to re-announce
FAIL_THRESHOLD=3
RECOVER_THRESHOLD=2
CHECK_INTERVAL=5

fail_count=0
recover_count=0
is_withdrawn=false

log() {
    logger -t anycast-healthcheck "$1"
}

check_dns() {
    dig +time=2 +tries=1 @"${CHECK_IP}" "${CHECK_DOMAIN}" "${CHECK_TYPE}" > /dev/null 2>&1
}

withdraw_route() {
    if [ "$is_withdrawn" = false ]; then
        sudo /usr/sbin/ip link set "${ANYCAST_IF}" down
        is_withdrawn=true
        log "WITHDRAW: ${ANYCAST_IF} down after ${FAIL_THRESHOLD} consecutive failures"
    fi
}

announce_route() {
    if [ "$is_withdrawn" = true ]; then
        sudo /usr/sbin/ip link set "${ANYCAST_IF}" up
        # Verify the interface actually came back before clearing state
        sleep 1
        if ip link show "${ANYCAST_IF}" | grep -q "UP"; then
            is_withdrawn=false
            recover_count=0
            log "ANNOUNCE: ${ANYCAST_IF} up after ${RECOVER_THRESHOLD} consecutive successes"
        else
            log "ERROR: failed to bring ${ANYCAST_IF} up"
        fi
    fi
}

log "Starting anycast health check for ${CHECK_DOMAIN} on ${CHECK_IP}"

while true; do
    if check_dns; then
        fail_count=0
        recover_count=$((recover_count + 1))
        if [ "$recover_count" -ge "$RECOVER_THRESHOLD" ]; then
            announce_route
        fi
    else
        recover_count=0
        fail_count=$((fail_count + 1))
        log "DNS check failed (${fail_count}/${FAIL_THRESHOLD})"
        if [ "$fail_count" -ge "$FAIL_THRESHOLD" ]; then
            withdraw_route
        fi
    fi
    sleep "${CHECK_INTERVAL}"
done

Set permissions:

chmod 755 /usr/local/bin/anycast-healthcheck.sh

Create a systemd service

Create /etc/systemd/system/anycast-healthcheck.service:

[Unit]
Description=Anycast DNS health check
After=bird.service named.service
Wants=bird.service named.service

[Service]
Type=simple
User=anycast-healthcheck
ExecStart=/usr/local/bin/anycast-healthcheck.sh
Restart=always
RestartSec=5

# Hardening
NoNewPrivileges=no
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
ReadOnlyPaths=/
ReadWritePaths=/run

[Install]
WantedBy=multi-user.target

NoNewPrivileges=no is required here because the script uses sudo to toggle the interface. The sudoers rule restricts which commands the user can run.

The [Unit] section uses Wants= instead of Requires=. With Requires=, systemd would stop the health check when named stops. That defeats the purpose: the health check must keep running to detect DNS failure and withdraw the route.

Enable and start:

systemctl daemon-reload
systemctl enable --now anycast-healthcheck

Check that it is running:

systemctl status anycast-healthcheck

The logs should show the startup message:

journalctl -u anycast-healthcheck -f
Starting anycast health check for example.com on 127.0.0.1

How do I test anycast DNS failover?

Run these tests from an external machine, not from either node.

Step 1: Confirm both nodes are announcing

From your local machine:

dig @198.51.100.1 example.com A +short

Confirm you get a response. Run a traceroute to see which node you reach:

traceroute -n 198.51.100.1

The path shows which location is closest to you.

Step 2: Simulate DNS failure on Node A

On Node A, stop BIND9:

systemctl stop named

Watch the health-check logs:

journalctl -u anycast-healthcheck -f

Expected sequence:

DNS check failed (1/3)
DNS check failed (2/3)
DNS check failed (3/3)
WITHDRAW: anycast0 down after 3 consecutive failures

After 15 seconds (3 checks at 5-second intervals), the health check brings down anycast0. BIRD2 detects the interface change and withdraws the route.

Check that the route is gone:

birdc show route export upstream4

The table should be empty. The upstream router removes 198.51.100.0/24 from Node A and forwards all traffic to Node B.

Step 3: Check client failover

From your external machine, query again:

dig @198.51.100.1 example.com A +short +time=5

The first query after withdrawal may time out while the withdrawal propagates through upstream routers. Subsequent queries reach Node B. BGP convergence typically takes 30-90 seconds depending on your upstream's configuration.

Step 4: Restore Node A

systemctl start named

The health check detects DNS is back after 2 consecutive successes (10 seconds) and brings anycast0 up. BIRD2 re-announces the route.

journalctl -u anycast-healthcheck --no-pager | tail -5

Expected:

ANNOUNCE: anycast0 up after 2 consecutive successes

Check BGP sessions:

birdc show protocols

Both BGP sessions should show Established again.

Step 5: Measure convergence time

For precise measurement, run a continuous dig loop from an external machine:

while true; do
    echo -n "$(date +%H:%M:%S) "
    dig @198.51.100.1 example.com A +short +time=1 +tries=1 || echo "TIMEOUT"
    sleep 1
done

Stop BIND9 on the node you are currently hitting. Count the seconds between the last successful response and the first successful response from the other node. On Virtua infrastructure with default BGP timers, expect 30-60 seconds of partial unavailability.
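Counting by hand works, but the loop's output can also be parsed. A sketch (hypothetical outage_seconds helper; it assumes one "HH:MM:SS …" entry per line with failures marked TIMEOUT as in the loop above — real dig failure output is noisier, so adapt the match):

```shell
# Hypothetical helper: given a log with lines like "12:00:02 TIMEOUT" or
# "12:00:05 198.51.100.10", print the length of the TIMEOUT window in seconds.
outage_seconds() {
    awk '
        function secs(t,  a) { split(t, a, ":"); return a[1]*3600 + a[2]*60 + a[3] }
        $2 == "TIMEOUT" { if (!start) start = secs($1); end = secs($1) }
        END { print (start ? end - start + 1 : 0) }
    ' "$1"
}
```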

How does BIRD2 compare to other routing daemons for anycast?

Feature BIRD2 FRRouting ExaBGP
Config language Declarative filters Cisco-like CLI JSON/API
Anycast integration Direct protocol on dummy interface Static route + redistribute External script announces/withdraws
Health-check method Interface up/down triggers route change vtysh commands from script Process manager built-in
IPv6 support Unified config (ipv4/ipv6 channels) Separate address families Manual per-family
Filter complexity Full filter language with functions Route maps, prefix lists External logic only
Memory footprint Low (~10 MB for small table) Medium (~30 MB) Very low (~5 MB)
Production maturity IXPs, large-scale DNS (used by Cloudflare, RIPE) ISPs, data centers Small deployments, monitoring

BIRD2's advantage for anycast: the protocol direct on a dummy interface gives you automatic route withdrawal when the interface goes down. No external scripting needed for the BGP part. The health check only needs to toggle the interface.

How do I synchronize DNS zones across anycast nodes?

Zone synchronization uses BIND9's built-in primary/secondary replication over AXFR, authenticated with TSIG. The configuration was set up in the BIND9 section above. Here are operational details.

Zone updates flow one way: edit the zone file on Node A (primary), increment the serial, and reload:

# On Node A after editing the zone file
named-checkzone example.com /var/lib/bind/db.example.com
zone example.com/IN: loaded serial 2026031902
OK

Always run named-checkzone before reloading. It catches syntax errors.

rndc reload example.com
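Because replication hinges on the serial increasing, a helper that computes the next YYYYMMDDNN value avoids typos. A sketch (the next_serial function is hypothetical; it assumes at most 99 updates per day):

```shell
# Hypothetical helper: next YYYYMMDDNN serial. Starts at NN=01 on a new
# day, otherwise increments NN (overflow past NN=99 is not handled).
next_serial() {
    local current="$1" today="${2:-$(date +%Y%m%d)}"
    if [ "${current:0:8}" = "$today" ]; then
        printf '%s%02d\n' "$today" $(( 10#${current:8:2} + 1 ))
    else
        printf '%s01\n' "$today"
    fi
}

# next_serial 2026031901 20260319  -> 2026031902
# next_serial 2026031905 20260320  -> 2026032001
```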

Node B receives a NOTIFY from Node A and pulls the updated zone via AXFR. On Node B, check the serial:

dig @127.0.0.1 example.com SOA +short

The serial should match. If it does not, check:

  1. Firewall allows TCP 53 from Node A to Node B
  2. TSIG key is identical on both nodes
  3. journalctl -u named on Node B for transfer errors

Adding a third node: configure it as another secondary. Add its IP to the also-notify list and the nftables rules on Node A. No changes needed on existing secondaries.

What about IPv6 BGP anycast?

The BIRD2 configuration above already includes IPv6. The protocol direct anycast block has both ipv4 and ipv6 channels. The upstream6 BGP session exports the /48 prefix.

Check the IPv6 route export:

birdc show route export upstream6
BIRD 2.0.12 ready.
Table master6:
2001:db8:abcd::/48   unicast [anycast 2026-03-19] * (240)
	dev anycast0

Test DNS over IPv6:

dig @2001:db8:abcd::1 example.com AAAA +short

Troubleshooting

BGP session stuck in Active/Connect:

journalctl -u bird -f

Common causes: wrong neighbor IP, wrong ASN, MD5 password mismatch, firewall blocking TCP 179. Run birdc show protocols all upstream4 for detailed error messages.

BIND9 not listening on anycast IP:

ss -tlnp | grep ':53'

If you only see 127.0.0.1:53, check that listen-on includes the anycast IP and that the anycast0 interface was up before BIND9 started. Restart BIND9 after the interface is up: systemctl restart named.

Zone transfer failing:

journalctl -u named | grep -i "tsig\|transfer\|refused"

Check the key file is readable by the bind user: ls -la /etc/bind/anycast-transfer.key. Run a manual transfer test: dig @203.0.113.10 example.com AXFR -k /etc/bind/anycast-transfer.key.

Health check not withdrawing routes:

journalctl -u anycast-healthcheck -f

Check sudo permissions: sudo -u anycast-healthcheck sudo -l. The user should be able to run /usr/sbin/ip link set anycast0 up and down without a password.

Queries timing out after failover:

DNS clients cache responses. The record TTL in the zone file ($TTL 300, i.e. 5 minutes) determines how long resolvers use stale data. BGP convergence adds 30-90 seconds. Total failover time from the client perspective: up to the TTL plus convergence time. Lower the zone's $TTL if faster failover matters, but do not go below 60 seconds for authoritative zones.
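As a back-of-the-envelope check using this guide's numbers (assumed values; measure your own deployment):

```shell
# Worst-case client-visible failover with the values used in this guide.
DETECT=$((3 * 5))        # 3 failed health checks at 5 s intervals
BGP_CONVERGE=90          # upper bound for route propagation
TTL=300                  # zone record TTL ($TTL in the zone file)
echo "worst case: $((DETECT + BGP_CONVERGE + TTL)) seconds"   # 405 seconds
```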

Production checklist

Before going live:

  • ROA records created and validated (rpki-client or your RIR portal)
  • BGP sessions established on all nodes (birdc show protocols)
  • Anycast prefix visible in global routing tables (use a looking glass)
  • BIND9 responding on the anycast IP from external networks
  • Zone serial matches on all nodes
  • TSIG key permissions set to 640, owned by root:bind
  • nftables rules active on all nodes
  • Health-check service enabled and running
  • Failover tested: stopped BIND9, confirmed route withdrawal, confirmed recovery
  • Monitoring configured: alert on BGP session down, BIND9 process, health-check failures
  • journalctl -u bird and journalctl -u named show no errors

For the parent guide covering BGP fundamentals and bringing your own IP space, see BGP and Bring Your Own IP on a VPS: The Complete Guide. For BGP route filtering and security hardening, see BGP Route Filtering: Prefix Lists, AS-Path Filters, Bogon Rejection, and GTSM.


Copyright 2026 Virtua.Cloud. All rights reserved. This content is original work by the Virtua.Cloud team. Reproduction, republication, or redistribution without written permission is prohibited.