Docker Networking on a VPS: Bridge, Host, and Macvlan Explained

12 min read · Matthieu · Tags: VPS, IPv6, Docker Compose, Networking, Docker

How Docker bridge, host, and macvlan networks work on a single VPS. Covers custom bridge DNS resolution, port publishing, IPv6 configuration, and network isolation with Docker Compose.

You have one VPS. Multiple containers need to talk to each other and to the outside world. Docker gives you three network drivers that matter here: bridge, host, and macvlan. Each makes different tradeoffs between isolation, performance, and convenience.

This article explains when to pick each driver, how to wire containers together using custom bridge networks and Docker Compose, and how to avoid the common mistakes that leave services unreachable or accidentally exposed.

Prerequisites: Basic Docker CLI knowledge (see Docker Commands Cheatsheet) and familiarity with Docker Compose (see Docker Compose for Multi-Service VPS Deployments).

What is the difference between Docker bridge and host networking?

Bridge networking gives each container its own network namespace with a virtual ethernet pair (veth) connecting it to a software bridge on the host. Containers get private IPs (typically in the 172.17.0.0/16 range) and reach the outside world through NAT managed by iptables. Host networking removes the network namespace entirely: the container shares the host's network stack, binding directly to host ports with no address translation.
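You can see the bridge-side mechanics from the host. The container name and exact addresses in this sketch are illustrative:

```shell
# Start a container on the default bridge and read its private IP
docker run -d --name bridge-demo nginx:alpine
docker inspect -f '{{.NetworkSettings.IPAddress}}' bridge-demo
# Typically an address like 172.17.0.2

# Outbound traffic is masqueraded by a NAT rule Docker installs
sudo iptables -t nat -S POSTROUTING | grep MASQUERADE

docker rm -f bridge-demo
```

With host networking there is nothing comparable to inspect: the container has no private IP and no NAT rule, because it shares the host's stack.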

Here is how they compare at a glance:

| Criteria | Custom bridge | Host | Macvlan |
|---|---|---|---|
| Network isolation | Full (own namespace) | None (shares host) | Full (own MAC address) |
| DNS resolution | Automatic by container name | Uses host DNS | No built-in DNS |
| Port mapping | Required (-p) | Not needed (direct bind) | Not needed (real IP) |
| Performance overhead | Small (NAT + veth) | None | None |
| Security risk | Low (isolated by default) | High (no network boundary) | Medium (requires promiscuous mode) |
| VPS relevance | Primary choice | Monitoring, high-throughput | Rarely usable |

Why should you use a custom bridge network instead of the default bridge?

The default bridge network (the built-in network named bridge, backed by the docker0 interface) does not provide DNS resolution between containers. Containers on it can only reach each other by IP address, which changes every time a container restarts. Custom bridge networks give you automatic DNS, better isolation, and on-the-fly connect/disconnect. Use a custom bridge for everything and treat the default bridge as a legacy artifact.

The DNS problem

When Docker starts, it creates a default bridge network. Every container without an explicit --network flag lands on it. But containers on the default bridge cannot resolve each other by name:

# Start two containers on the default bridge
docker run -d --name web nginx:alpine
docker run -d --name client alpine sleep 3600

# Try name resolution - this fails
docker exec client ping -c1 web
# ping: bad address 'web'

Now try the same with a custom bridge:

# Create a custom bridge network
docker network create app-net

# Start containers on it
docker run -d --name web2 --network app-net nginx:alpine
docker run -d --name client2 --network app-net alpine sleep 3600

# Name resolution works
docker exec client2 ping -c1 web2
# PING web2 (172.18.0.2): 56 data bytes
# 64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.089 ms

How Docker's embedded DNS works

Every container on a user-defined network gets 127.0.0.11 as its DNS server. This is Docker's embedded DNS server. It resolves container names and service aliases to their current IP addresses. If the name is not a container, it forwards the query to the host's configured DNS servers.

docker exec client2 cat /etc/resolv.conf
# nameserver 127.0.0.11

The DNS server runs inside the container's network namespace but the actual resolution happens in the Docker daemon on the host. To avoid conflicts with services that might use port 53 inside the container, Docker's DNS listener uses a random high port internally and redirects queries via iptables rules.

There is no IPv6 equivalent address. The 127.0.0.11 address works even in IPv6-only containers.

The embedded DNS server returns a TTL of 600 seconds (10 minutes) for container name records. This matters for blue-green deployments: if you replace a container, other containers may still resolve the old IP for up to 10 minutes. Applications that cache DNS aggressively (Java caches indefinitely by default) will hold onto stale addresses even longer.
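You can observe that TTL directly by asking the embedded DNS server for another container's record. This sketch reuses the app-net network and web2 container from the example above and installs bind-tools into a throwaway Alpine container:

```shell
# Query 127.0.0.11 for a container name and show the answer with its TTL
docker run --rm --network app-net alpine sh -c \
  'apk add --no-cache -q bind-tools && dig +noall +answer web2 @127.0.0.11'
# The second column of the answer is the TTL; the IP will vary
```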

Containers on the same custom bridge network expose all ports to each other without any -p flag. Port publishing only controls access from outside the network (the host or the internet). Two containers on app-net can talk on any port freely.
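To see this in action, fetch nginx's welcome page from the client container even though web2 publishes no ports (container names from the custom-bridge example above):

```shell
# No -p flag anywhere, yet port 80 is reachable inside the network
docker exec client2 wget -qO- http://web2 | head -n 3
```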

Default bridge vs custom bridge summary

| Feature | Default bridge (docker0) | Custom bridge |
|---|---|---|
| DNS by container name | No | Yes |
| Dynamic connect/disconnect | No (requires container restart) | Yes |
| Configurable per-network | No (requires daemon.json edit + restart) | Yes |
| Isolation from other stacks | No (all unassigned containers share it) | Yes |
| Recommended for production | No | Yes |

When should you use host networking on a VPS?

Use host networking when a container needs raw network performance or must bind to many ports dynamically. The container shares the host's network stack directly, bypassing NAT and the virtual ethernet bridge. This eliminates measurable overhead for high-throughput workloads.

Typical use cases on a VPS:

  • Monitoring agents like Prometheus node-exporter or Netdata that need to see all host interfaces and traffic
  • DNS servers that must listen on the host's actual IP
  • Performance-sensitive services where NAT overhead matters (high packet rates, UDP-heavy workloads)

# Run a container with host networking
docker run -d --name node-exporter --network host \
  prom/node-exporter:latest

The --publish flag is ignored with host networking. The container binds directly to host ports. If port 9100 is already in use on the host, the container fails to start.

Security tradeoff

Host networking provides the same network isolation as running the process directly on the host: none. The container can see all network interfaces, bind to any port, and sniff traffic. Use it only when you have a specific reason.

# Verify which ports a host-networked container opened
ss -tlnp | grep node_exporter
# LISTEN 0 4096 *:9100 *:* users:(("node_exporter",pid=12345,fd=3))

Notice the output shows the process listening on *:9100, meaning all interfaces. With bridge networking, you control this with the -p flag. With host networking, the application itself decides what to bind to.
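With host networking, restricting the bind address therefore has to happen in the application. node_exporter, for instance, takes a --web.listen-address flag (check the flag name against your node_exporter version):

```shell
# Bind node_exporter to localhost only, even in host network mode
docker run -d --name node-exporter --network host \
  prom/node-exporter:latest --web.listen-address=127.0.0.1:9100

# ss should now show 127.0.0.1:9100 instead of *:9100
ss -tlnp | grep 9100
```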

What is macvlan networking and do you need it on a VPS?

Macvlan assigns each container its own MAC address, making it appear as a separate physical device on the network. The container gets a real IP from your LAN subnet and is reachable directly without port mapping.

On a VPS, you almost certainly do not need macvlan. Here is why:

  • Most cloud providers block it. Macvlan requires the physical NIC to operate in promiscuous mode. VPS hypervisors typically block this at the virtual switch level.
  • No LAN to join. Your VPS has one public interface. Macvlan is designed for environments where you want containers to have their own addresses on an existing LAN, like a home lab with IoT devices or legacy applications that expect a dedicated IP.
  • Host-container communication breaks. By design, the Linux kernel prevents a macvlan container from communicating with the host it runs on. You need workarounds like attaching a second bridge network.

If your VPS provider gives you a dedicated server with full NIC access and multiple IPs, macvlan becomes viable. For standard VPS deployments, stick with custom bridge networks.

For reference, creating a macvlan network looks like this (you will likely not need this on a VPS):

docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  macnet

The parent option specifies the host interface to attach to. The container gets its own IP from the subnet and its own MAC address. Other devices on the LAN can reach it directly.
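If you are on hardware where macvlan works and still need host-to-container traffic, the usual workaround is a macvlan sub-interface on the host itself. This is a sketch with illustrative addresses; adjust the parent interface, spare IP, and container range to your network:

```shell
# Create a macvlan interface on the host, attached to the same parent NIC
sudo ip link add macnet-shim link eth0 type macvlan mode bridge

# Give it a spare IP and route the container range through it
sudo ip addr add 192.168.1.250/32 dev macnet-shim
sudo ip link set macnet-shim up
sudo ip route add 192.168.1.128/28 dev macnet-shim
```

Traffic from the host to those containers now leaves via macnet-shim instead of the parent NIC, sidestepping the kernel's macvlan isolation.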

What is the difference between exposing and publishing a Docker port?

EXPOSE in a Dockerfile is documentation. It declares which port the application listens on. It does not open that port to the host or the outside world. --publish (or -p) at runtime creates an actual port mapping from the host to the container. Without -p, no external traffic reaches the container.

# In your Dockerfile - this is documentation only
EXPOSE 8080
# This actually maps host:8080 -> container:8080
docker run -d -p 8080:8080 myapp

# This binds to localhost only - external traffic cannot reach it
docker run -d -p 127.0.0.1:8080:8080 myapp

The 127.0.0.1 binding pattern

When you run services behind a reverse proxy like Nginx, Traefik, or Caddy, publish container ports on 127.0.0.1 only. This prevents direct access from the internet, forcing all traffic through your reverse proxy, where TLS termination, rate limiting, and access control happen.

# docker-compose.yml - binding to localhost only
services:
  app:
    image: myapp:latest
    ports:
      - "127.0.0.1:3000:3000"  # Only reachable from localhost

Verify the binding:

ss -tlnp | grep 3000
# LISTEN 0 4096 127.0.0.1:3000 0.0.0.0:* users:(("docker-proxy",...))

The output shows 127.0.0.1:3000, not 0.0.0.0:3000. External traffic on your VPS public IP cannot reach port 3000 directly.

Without the 127.0.0.1 prefix, Docker publishes on all interfaces by default. On a VPS with a public IP, this means your service is exposed to the internet, bypassing your reverse proxy and potentially your firewall (see Fix Docker Bypassing UFW: 4 Tested Solutions for Your VPS).

Port publishing quick reference

| Syntax | Binds to | Accessible from |
|---|---|---|
| -p 8080:80 | 0.0.0.0:8080 + [::]:8080 | Everywhere (IPv4 + IPv6) |
| -p 127.0.0.1:8080:80 | 127.0.0.1:8080 | Localhost only |
| -p 0.0.0.0:8080:80 | 0.0.0.0:8080 | All IPv4 interfaces |
| -p [::1]:8080:80 | [::1]:8080 | IPv6 localhost only |
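docker port prints the effective mappings for a running container, which is a quick way to confirm which of these forms you actually got (container name is illustrative):

```shell
docker run -d --name portcheck -p 127.0.0.1:8080:80 nginx:alpine
docker port portcheck
# 80/tcp -> 127.0.0.1:8080
docker rm -f portcheck
```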

How do you isolate container networks on a single VPS?

Create separate bridge networks for each application stack. Containers on different networks cannot communicate unless you explicitly connect them to a shared network. For databases and internal services, use the internal: true flag to block all outbound internet access.

Multi-network architecture

A typical production setup on a VPS looks like this:

                 Internet
                    |
              [Reverse Proxy]
               /          \
         [frontend]    [api-net]
            |             |
          webapp        api-server
                          |
                     [db-net: internal]
                          |
                       postgres

The reverse proxy connects to both frontend and api-net. The API server connects to api-net and db-net. PostgreSQL sits on db-net only with no route to the internet.

Here is the Docker Compose file for this setup:

services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    networks:
      - frontend
      - api-net

  webapp:
    image: mywebapp:latest
    networks:
      - frontend

  api:
    image: myapi:latest
    environment:
      - DATABASE_URL=postgresql://appuser:${DB_PASSWORD}@db:5432/appdb
    networks:
      - api-net
      - db-net

  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - db-net

networks:
  frontend:
    driver: bridge
  api-net:
    driver: bridge
  db-net:
    driver: bridge
    internal: true  # No internet access for the database network

volumes:
  pgdata:

The internal: true on db-net means PostgreSQL cannot reach the internet. It cannot pull updates, phone home, or be exploited as a pivot for outbound connections. The API server can reach the database because it is attached to both api-net and db-net.

Verify network isolation

After starting the stack, confirm the isolation:

# List networks created by Compose
docker network ls --filter "name=myproject"

# Inspect a network to see which containers are connected
docker network inspect myproject_db-net --format '{{range .Containers}}{{.Name}} {{end}}'
# myproject-api-1 myproject-db-1

# Confirm the database cannot reach the internet
docker exec myproject-db-1 ping -c1 -W2 8.8.8.8
# ping: sendto: Network unreachable

"Network unreachable" means the internal: true flag is working. The database container has no default gateway.

How do you enable IPv6 for Docker containers?

Enable IPv6 globally in /etc/docker/daemon.json, then enable it per network with a subnet. Docker supports dual-stack (IPv4 + IPv6) out of the box on Linux. The ip6tables parameter, enabled by default, handles IPv6 firewall rules for containers.

Step 1: Configure the daemon

Edit /etc/docker/daemon.json:

{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:dead:beef::/64",
  "ip6tables": true,
  "default-address-pools": [
    { "base": "172.17.0.0/16", "size": 24 },
    { "base": "fd00:dead:beef::/48", "size": 64 }
  ]
}

The fixed-cidr-v6 assigns an IPv6 /64 subnet to the default bridge. The default-address-pools block tells Docker how to allocate subnets for new networks: /24 blocks from the IPv4 range and /64 blocks from the IPv6 range.

Use a ULA prefix (fd00::/8) for private container-to-container traffic. If your VPS has a public IPv6 range assigned by your provider, you can use a subnet from that range instead for containers that need public IPv6 addresses.

Restart Docker to apply:

sudo systemctl restart docker

Verify IPv6 is enabled:

docker network inspect bridge --format '{{.EnableIPv6}}'
# true

Step 2: Create a dual-stack network

docker network create --ipv6 --subnet 172.20.0.0/24 --subnet fd00:dead:beef:1::/64 app-v6

Or in Docker Compose:

networks:
  app-v6:
    enable_ipv6: true
    ipam:
      config:
        - subnet: 172.20.0.0/24
        - subnet: fd00:dead:beef:1::/64

Step 3: Verify dual-stack connectivity

docker run --rm --network app-v6 alpine ip -6 addr show eth0
# inet6 fd00:dead:beef:1::2/64 scope global
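To confirm container-to-container IPv6, ping one container from another by name over v6. This assumes the app-v6 network from step 2 exists; BusyBox ping in recent Alpine images accepts -6 (older images ship ping6 instead):

```shell
# Start a target on the dual-stack network and ping it over IPv6
docker run -d --name v6-target --network app-v6 nginx:alpine
docker run --rm --network app-v6 alpine ping -6 -c1 v6-target
docker rm -f v6-target
```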

IPv6 is only supported on Docker daemons running on Linux. On older systems, you may need to load the ip6_tables kernel module before creating IPv6 networks:

sudo modprobe ip6_tables

How do you define networks in Docker Compose?

Docker Compose creates a default network for each project automatically. Every service in the Compose file joins this network and can resolve other services by their service name. For most single-stack applications, this default behavior is all you need.

When you need multiple networks for isolation, define them explicitly in the networks section:

services:
  web:
    image: nginx:alpine
    ports:
      - "127.0.0.1:8080:80"
    networks:
      - public

  app:
    image: node:20-alpine
    networks:
      - public
      - private

  redis:
    image: redis:7-alpine
    networks:
      - private

networks:
  public:
    driver: bridge
  private:
    driver: bridge
    internal: true

In this file, web can reach app through the public network. app can reach both web and redis. redis can only reach app through private and has no internet access.
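You can check both the reachability and the isolation with docker compose exec (service names as in the file above):

```shell
# app is on both networks, so it resolves redis by service name
docker compose exec app ping -c1 redis

# web is only on "public", so resolving redis should fail
docker compose exec web ping -c1 redis
# ping: bad address 'redis'
```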

Using host networking in Compose

services:
  node-exporter:
    image: prom/node-exporter:latest
    network_mode: host
    pid: host
    restart: unless-stopped

The network_mode: host replaces any networks definition. You cannot combine host mode with custom networks on the same service.

Connecting to external networks

If a network was created outside Compose (by another stack or manually), reference it as external:

networks:
  shared-proxy:
    external: true

This lets multiple Compose projects share a network. A common pattern: one Compose stack runs the reverse proxy and creates the network. Other stacks declare it as external and connect their services to it.
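In practice that means creating the network once, outside any project, before the stacks start (the name shared-proxy matches the snippet above):

```shell
# Create the shared network once, by hand or from the proxy stack
docker network create shared-proxy

# Stacks declaring it "external: true" attach to it on docker compose up
docker compose up -d
```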

Docker network commands quick reference

| Command | What it does |
|---|---|
| docker network ls | List all networks |
| docker network create mynet | Create a bridge network |
| docker network create --ipv6 --subnet fd00::/64 mynet | Create a dual-stack network |
| docker network inspect mynet | Show network details (subnet, containers, options) |
| docker network connect mynet container1 | Attach a running container to a network |
| docker network disconnect mynet container1 | Detach a container from a network |
| docker network prune | Remove all unused networks |
| docker network rm mynet | Remove a specific network |

Scaling limits

Bridge networks become unstable when more than 1,000 containers connect to a single network. This is a Linux kernel limitation on the bridge device. If you are running many containers, split them across multiple networks by function rather than putting everything on one bridge.

Something went wrong?

Containers cannot resolve each other by name. You are probably on the default bridge. Create a custom network and attach both containers to it. Check with docker inspect <container> --format '{{json .NetworkSettings.Networks}}'.

Port is published but not reachable from outside. Check if it is bound to 127.0.0.1 instead of 0.0.0.0. Run ss -tlnp | grep <port> on the host. Also check your firewall rules (see Fix Docker Bypassing UFW: 4 Tested Solutions for Your VPS).

Container cannot reach the internet. The network might have internal: true set. Check with docker network inspect <network> --format '{{.Internal}}'. If it returns true, the network blocks outbound traffic by design.

IPv6 not working in containers. Verify ip6tables is enabled in daemon.json. On older kernels, load the module: sudo modprobe ip6_tables. Check the network has IPv6 enabled: docker network inspect <network> --format '{{.EnableIPv6}}'.

"Address already in use" with host networking. Another process (or container) is already bound to that port. Find it with ss -tlnp | grep <port>. Host networking means no port mapping, so conflicts are direct.

Next steps

Your container networking is set up. From here, the natural follow-ups are locking down published ports with your firewall and routing all public traffic through a reverse proxy with TLS.


Copyright 2026 Virtua.Cloud. All rights reserved. This content is original work by the Virtua.Cloud team. Reproduction, republication, or redistribution without written permission is prohibited.

Ready to try it yourself?

Master Docker networking on a dedicated VPS.

See VPS Plans