Nginx Administration on a VPS

10 min read · Matthieu · Nginx · Web Server · Reverse Proxy · VPS

A structured map of Nginx administration for VPS owners. Covers what Nginx does, when to use it, and links to every tutorial you need to run it in production.

Nginx powers a large share of the web. It serves static files, proxies requests to backend applications, terminates TLS, and balances load across multiple servers, all from a handful of worker processes with a small memory footprint. This guide maps out Nginx administration on a VPS, from installation to production monitoring.

This page is a map. It covers what Nginx does, when each feature matters, and links to focused tutorials for each topic. Whether you are deploying your first side project or running an AI inference endpoint behind a reverse proxy, start here and follow the path that matches your use case.

What Nginx Does

Nginx (pronounced "engine-x") is an event-driven web server and reverse proxy. It was created in 2004 to solve the C10K problem: handling 10,000 concurrent connections on a single server. Unlike process-per-connection designs such as Apache's prefork model, Nginx uses an asynchronous event loop. One worker process handles thousands of connections without spawning a new process or thread for each one.

On a VPS, that means:

  • Low memory usage. A typical Nginx worker process uses 2-10 MB of RAM. Apache prefork workers use 10-40 MB each.
  • High concurrency. A single worker can handle thousands of simultaneous connections. The limit is usually the OS, not Nginx.
  • Predictable performance under load. Because Nginx does not spawn new processes per request, memory consumption stays flat as traffic increases.

The current stable release is 1.28.x (1.28.2 as of March 2026). The mainline branch (1.29.x) gets new features first. Stable receives only high-severity bug fixes. For production VPS setups, stable is the right choice.

All tutorials in this series target Nginx installed from the official repositories on Debian 12 or Ubuntu 24.04. Both are actively supported and widely deployed on production servers. Distro default repos often ship older Nginx versions. The official repo gives you the latest stable with proper signing keys.

Core Use Cases

Nginx fills four roles on a VPS. Most production setups use two or more at the same time.

Static File Server

Nginx serves HTML, CSS, JavaScript, images, and fonts directly from disk. The sendfile system call moves data from disk to network socket without copying it through userspace. This avoids the overhead of reading file contents into application memory and writing them back out.

Combined with gzip or brotli compression and cache headers (expires, Cache-Control), Nginx delivers static assets faster than any application server. A Node.js or Python process serving static files wastes CPU cycles on work that Nginx handles at the kernel level.

For static sites, single-page apps, or full-stack frontends, Nginx is the right tool. Even with dynamic backends, offloading static assets to Nginx frees your application to focus on requests that need logic.
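Put together, a static-site server block might look like the following sketch. The domain, document root, and cache lifetimes are placeholders to adapt:

```nginx
server {
    listen 80;
    server_name example.com;           # placeholder domain
    root /var/www/example.com/html;    # placeholder document root

    sendfile on;                       # kernel-level disk-to-socket copy

    gzip on;
    gzip_types text/css application/javascript image/svg+xml;
    gzip_min_length 1024;              # skip tiny files where gzip adds overhead

    # Long cache lifetimes for assets that rarely change
    location ~* \.(css|js|png|jpg|woff2)$ {
        expires 30d;
        add_header Cache-Control "public";
    }
}
```

Directives like sendfile and gzip are often set once in the http context instead, so every server block inherits them.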

Reverse Proxy

A reverse proxy sits between clients and your backend application. Nginx receives the HTTP request, forwards it to a backend process (Node.js, Python, Go, Ruby, or any HTTP service), and returns the response to the client.

Why not expose the backend directly? Several reasons:

  • TLS termination happens in one place. You configure certificates once in Nginx instead of in every application.
  • Header forwarding passes the real client IP to your backend via X-Real-IP and X-Forwarded-For. Without this, your app sees only 127.0.0.1.
  • Buffering absorbs slow client connections. Nginx reads the full response from your fast backend, frees the backend connection, and trickles the response to the slow client on its own. Your backend handles the next request instead of waiting.
  • WebSocket support works through Nginx with the right Upgrade and Connection headers.

This is also how you put a web interface in front of self-hosted AI tools. An Ollama instance running on localhost:11434 stays off the public internet while Nginx handles authentication and HTTPS on the frontend. The How to Configure Nginx as a Reverse Proxy tutorial includes a complete Ollama configuration with timeout tuning for long-running inference requests.
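As a taste of what that tutorial covers, a reverse proxy for a local Ollama instance might look like this minimal sketch. The domain and timeout values are illustrative, not prescriptive:

```nginx
server {
    listen 80;
    server_name ai.example.com;        # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:11434;

        # Forward the real client identity to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket and streaming support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Inference can run long; raise the 60 s defaults
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }
}
```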

TLS Termination

Nginx handles the TLS handshake and decryption so backend applications receive plain HTTP on localhost. This centralizes certificate management and offloads CPU-intensive cryptographic operations from your application code.

The TLS handshake involves asymmetric cryptography (an ECDHE key exchange authenticated with RSA or ECDSA signatures), which is orders of magnitude more expensive than the symmetric encryption used for the data transfer itself. By handling this in Nginx, your backend processes stay focused on business logic.

With Let's Encrypt and Certbot, you get free, auto-renewing certificates. Nginx's ssl_certificate and ssl_certificate_key directives point to the certificate files. A systemd timer runs certbot renew automatically. The Set Up Let's Encrypt SSL/TLS for Nginx on Debian 12 and Ubuntu 24.04 tutorial covers the full setup including OCSP stapling and HTTP-to-HTTPS redirect.
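A typical HTTPS server block, sketched with Certbot's default certificate paths (your domain and backend port will differ):

```nginx
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;   # redirect all HTTP to HTTPS
}

server {
    listen 443 ssl;
    server_name example.com;

    # Paths follow Certbot's default layout under /etc/letsencrypt
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;   # placeholder backend
    }
}
```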

For European deployments where data sovereignty matters, handling TLS termination on your own VPS means encrypted traffic is decrypted only on infrastructure you control. No third-party CDN or load balancer sees your plaintext traffic.

Load Balancer

When one backend is not enough, Nginx distributes requests across multiple upstream servers. The three most commonly used balancing methods are:

  • Round-robin (default): requests go to each backend in turn
  • Least connections: requests go to the backend with the fewest active connections
  • IP hash: requests from the same client IP always go to the same backend (useful for session affinity)

Passive health checks (the max_fails and fail_timeout parameters) remove unresponsive backends from the pool automatically. If a backend returns errors or times out, Nginx stops sending traffic to it until it recovers.

For most VPS setups, load balancing comes into play when you scale horizontally across multiple application instances or run blue-green deployments with zero-downtime releases.
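A minimal load-balancing sketch, with placeholder backend addresses; max_fails and fail_timeout are the passive health-check knobs:

```nginx
upstream app_backends {
    least_conn;   # omit this line for round-robin, or use ip_hash instead
    server 10.0.0.11:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:3000 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_backends;
    }
}
```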

What You Need

Before starting any tutorial in this series, you need:

  • A VPS running Debian 12 or Ubuntu 24.04 with root or sudo access. Virtua Cloud VPS plans work out of the box for everything covered here.
  • SSH access with key-based authentication. Password authentication should be disabled. Brute-force bots start hitting SSH within minutes of a server going live.
  • A domain name with an A record pointing to your server's IP address. Required for TLS certificates. Optional for the install tutorial.
  • Basic comfort with the Linux command line: navigating directories, editing files with vim or nano, running commands with sudo.

If you are new to VPS setup, get SSH keys and a firewall configured before touching Nginx. A server exposed to the internet without these basics is a liability.

The Tutorial Series

Each article below focuses on one topic. They build on each other in order, but you can jump to any article if you already have the prerequisites covered.

1. Install Nginx

Get Nginx running from the official repositories. Covers both Debian 12 and Ubuntu 24.04 with GPG key verification, apt install, systemd management (start, stop, reload, status), and firewall rules for both ufw and nftables. Includes verification steps with curl and ss to confirm Nginx is listening on the right ports.

Install Nginx on Debian 12 and Ubuntu 24.04 from the Official Repository

2. Understand the Config File Structure

Before editing any config, understand how Nginx organizes it. The main context, events, http, server, and location blocks form a hierarchy. Directives set in a parent context are inherited by child contexts unless explicitly overridden. This article covers all major contexts (including stream and upstream), the include directive for modular configs, and the sites-available/sites-enabled pattern used on Debian and Ubuntu.
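The nesting described above can be sketched as a stripped-down nginx.conf; this mirrors the Debian/Ubuntu default layout, though details vary by distro:

```nginx
# Main context: directives outside any block
user www-data;
worker_processes auto;

events {
    worker_connections 768;
}

http {
    sendfile on;    # inherited by every server block below

    # Modular vhost configs, enabled via symlinks
    include /etc/nginx/sites-enabled/*;

    server {
        listen 80;
        location / {
            root /var/www/html;
        }
    }
}
```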

Nginx Config File Structure Explained

3. Set Up Server Blocks

Host multiple domains on a single VPS. Server blocks (Nginx's equivalent of Apache virtual hosts) let you run separate sites with independent configs, access logs, error logs, and document roots. Covers directory structure, creating server block files, enabling and disabling sites with symlinks, the default_server directive, server_name matching order, and per-vhost logging.
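Two hypothetical server blocks hosting separate domains on one IP, each with its own document root and logs (file paths and names are placeholders):

```nginx
# /etc/nginx/sites-available/site-a.example.com
server {
    listen 80;
    server_name site-a.example.com;
    root /var/www/site-a;
    access_log /var/log/nginx/site-a.access.log;
    error_log  /var/log/nginx/site-a.error.log;
}

# /etc/nginx/sites-available/site-b.example.com
server {
    listen 80;
    server_name site-b.example.com;
    root /var/www/site-b;
    access_log /var/log/nginx/site-b.access.log;
    error_log  /var/log/nginx/site-b.error.log;
}
```

Each file is enabled by symlinking it into sites-enabled and reloading Nginx.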

Nginx Server Blocks: Host Multiple Domains on One VPS

4. Configure SSL/TLS with Let's Encrypt

Add HTTPS to your sites using Certbot and Let's Encrypt. Covers DNS prerequisites, snap-based Certbot installation (the current recommended method), certificate issuance with the Nginx plugin, automatic renewal via systemd timers, HTTP-to-HTTPS redirect, and OCSP stapling. Every step includes a verification command.

Set Up Let's Encrypt SSL/TLS for Nginx on Debian 12 and Ubuntu 24.04

5. Set Up a Reverse Proxy

Put Nginx in front of your backend applications. Covers proxy_pass syntax, header forwarding (X-Real-IP, X-Forwarded-For, X-Forwarded-Proto), WebSocket proxying with Upgrade headers, buffering controls (proxy_buffering, proxy_buffer_size), and timeout tuning (proxy_connect_timeout, proxy_read_timeout). Includes a working example for proxying to a local Ollama instance with extended timeouts for inference requests that can take 30 seconds or more.

How to Configure Nginx as a Reverse Proxy

6. Harden Security

Lock down Nginx beyond the defaults. Each directive is paired with the specific threat it mitigates. Covers security headers (CSP, X-Frame-Options, X-Content-Type-Options, Referrer-Policy, Permissions-Policy), hiding server tokens, disabling unused HTTP methods, IP-based access restrictions, TLS cipher suite selection using the Mozilla Modern profile, and HSTS with preload considerations. Verification using securityheaders.com and SSL Labs.
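A flavor of what the hardening guide covers, as a hedged fragment; the allowed IP is a placeholder, and CSP is omitted here because it is site-specific:

```nginx
server_tokens off;   # hide the Nginx version in headers and error pages

add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

# Restrict a sensitive path to a trusted IP (placeholder address)
location /admin {
    allow 203.0.113.10;
    deny all;
}
```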

Nginx Security Hardening on Ubuntu and Debian

7. Tune Performance

Optimize Nginx for production traffic. Covers worker_processes auto and worker_connections sizing, epoll and multi_accept, keepalive tuning, gzip configuration (compression levels, MIME types, minimum length), brotli module setup, static file caching with expires and Cache-Control, open_file_cache for reducing disk I/O, proxy buffer sizing, and HTTP/2 activation. Includes a benchmarking methodology using wrk so you can measure the impact of each change.
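A starting-point sketch of the knobs involved; the numbers are illustrative defaults to benchmark against, not recommendations:

```nginx
worker_processes auto;            # one worker per CPU core

events {
    worker_connections 4096;
    multi_accept on;              # accept all pending connections at once
}

http {
    keepalive_timeout 30s;

    gzip on;
    gzip_comp_level 5;            # diminishing returns at higher levels
    gzip_min_length 1024;

    # Cache file descriptors to reduce disk I/O on hot files
    open_file_cache max=2000 inactive=20s;
}
```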

For sites expecting heavy traffic, a high-CPU VPS gives you more headroom for TLS handshakes and compression.

Nginx Performance Tuning on a VPS

8. Rate Limiting and DDoS Protection

Protect your endpoints from abuse and volumetric attacks. Covers limit_req_zone and limit_req with burst and nodelay parameters, limit_conn for connection throttling, custom 429 error pages, dry_run mode for safe testing without blocking real traffic, and fail2ban integration with a working filter and jail config for banning repeat offenders at the firewall level.
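The core pattern looks like this sketch: a shared state zone defined in the http context, applied per location. The rate, burst, and backend are placeholders:

```nginx
# http context: 10 requests/second per client IP, 10 MB of state
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    location /api/ {
        # Allow short bursts of 20 without queuing delay; excess gets 429
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:3000;   # placeholder backend
    }
}
```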

Nginx Rate Limiting and DDoS Protection

9. Logging and Monitoring

Set up production-grade observability. Covers custom log_format definitions including a JSON structured format with escape=json (ready for log shippers like Filebeat or Vector), conditional logging with map to filter health checks and static assets, log rotation with a logrotate config, real-time analysis with GoAccess, and Prometheus metrics via the stub_status module and prometheus-nginxlog-exporter.
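A JSON log_format along the lines the article describes, sketched here with a minimal field set (add or drop fields to match your log shipper):

```nginx
# http context: structured JSON access log, safely escaped for shippers
log_format json_combined escape=json
    '{'
      '"time":"$time_iso8601",'
      '"remote_addr":"$remote_addr",'
      '"request":"$request",'
      '"status":"$status",'
      '"body_bytes_sent":"$body_bytes_sent",'
      '"request_time":"$request_time",'
      '"user_agent":"$http_user_agent"'
    '}';

access_log /var/log/nginx/access.json json_combined;
```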

10. Nginx Cheatsheet

Quick reference for daily operations. systemd commands, common config snippets (redirect, proxy_pass, SSL block, rate limit, gzip), log file locations, debugging with nginx -T (dump full merged config) and curl -I (inspect response headers), and common error codes (502, 504, 413, 403) with their Nginx-specific causes and fixes.

Nginx Cheatsheet: Commands, Config Snippets and Error Fixes

When to Let Someone Else Handle It

Managing Nginx in production means keeping up with security patches, certificate renewals, config drift, and log rotation. If you would rather focus on your application code and leave the infrastructure to someone else, managed server hosting takes that off your plate. You still get full Nginx flexibility without the maintenance overhead.

Where to Go First

Deploying your first site? Start with the install tutorial (Install Nginx on Debian 12 and Ubuntu 24.04 from the Official Repository) and work through the series in order.

Already have Nginx running? Jump to the topic you need. The config structure article (Nginx Config File Structure Explained) fills in gaps if you have been copy-pasting configs without understanding the hierarchy.

Running AI workloads? Go straight to the reverse proxy tutorial (How to Configure Nginx as a Reverse Proxy) for the Ollama reverse proxy setup. Then lock it down with the security hardening guide (Nginx Security Hardening on Ubuntu and Debian) and rate limiting tutorial (Nginx Rate Limiting and DDoS Protection).

Just need a quick command? The cheatsheet (Nginx Cheatsheet: Commands, Config Snippets and Error Fixes) has everything in one page.