Self-Host Workflow Automation on a VPS with n8n

13 min read · Matthieu · n8n · Workflow Automation · Docker · Self-hosting · Automation

Run unlimited workflow automations on your own VPS instead of paying per task. This guide covers what workflow automation means in practice, why self-hosting saves money and protects your data, and how n8n compares to Zapier, Make, and other open-source tools.

You have a form submission that needs to land in your CRM, trigger a Slack message, and update a spreadsheet. You could wire that up with custom code, or you could build it in five minutes with a visual workflow tool. This page explains what workflow automation is, why running it on your own VPS beats paying Zapier or Make per task, and how to get started with n8n.

What is workflow automation?

Workflow automation connects apps and services through trigger-based pipelines. A trigger fires (a webhook arrives, a schedule ticks, a database row changes), and a sequence of actions runs automatically: call an API, transform data, send a notification, write to a database. No glue code required.

Think of it as visual programming for integrations. Instead of writing a Node.js script that polls an API, transforms JSON, and posts to Slack, you drag nodes onto a canvas and connect them. Each node handles one step. The workflow engine manages execution, retries, and error handling.

If you have never used a workflow tool before: a node is a box on a canvas that does one thing (send an email, query a database, transform JSON). A trigger is the event that starts the workflow. A workflow is the full chain from trigger to final action. You connect nodes visually, and the engine runs them in order every time the trigger fires.
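The trigger-node-workflow model above can be sketched in a few lines of plain Python. This is a conceptual illustration only, not n8n's actual engine; the three "nodes" are hypothetical stand-ins for the form-to-CRM example.

```python
# Conceptual model: a workflow is a trigger payload plus an ordered chain
# of nodes, each doing one thing. n8n builds this visually; here it is in code.
def run_workflow(trigger_payload, nodes):
    """Pass the trigger's output through each node in order."""
    data = trigger_payload
    for node in nodes:
        data = node(data)
    return data

# Three hypothetical "nodes" for the form-submission example.
def create_crm_entry(form):
    return {**form, "crm_id": 101}          # pretend the CRM returned an id

def notify_slack(entry):
    print(f"Slack: new lead {entry['email']} (CRM #{entry['crm_id']})")
    return entry

def append_to_sheet(entry):
    return {**entry, "sheet_row": 7}        # pretend row number in the sheet

result = run_workflow(
    {"email": "jane@example.com"},          # the trigger: a form submission
    [create_crm_entry, notify_slack, append_to_sheet],
)
print(result["sheet_row"])                  # 7
```

The workflow engine adds what this sketch omits: retries, error branches, and logging around every node.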

Some examples:

  • Form submission triggers CRM entry, assigns to sales rep, sends welcome email
  • New row in Airtable syncs to your PostgreSQL database every 5 minutes
  • API health check fails, workflow posts to Slack and creates a PagerDuty incident
  • RSS feed update triggers social media posts across platforms
  • Incoming email gets classified by an LLM, routed to the right team, auto-replied if it matches a template

The SaaS players in this space (Zapier, Make, IFTTT) all share one trait: they charge per execution. The more your automations run, the more you pay. Self-hosting removes that constraint entirely.

Why self-host workflow automation instead of using Zapier or Make?

Three reasons: cost, privacy, and control. Self-hosting your automation engine on a VPS means you pay a flat monthly fee regardless of how many workflows run. Your data never leaves your server. And no vendor can throttle your executions or sunset an integration you depend on.

How much does self-hosted workflow automation cost compared to Zapier?

SaaS automation platforms bill per task or per operation. That pricing model punishes growth. The more you automate, the more you pay. Self-hosting flips this: your VPS costs the same whether you run 10 workflows or 10,000.

Here is a concrete scenario: a 5-step workflow (1 trigger + 4 actions) running 100 times per day. That is 3,000 executions per month.

|                        | Zapier                          | Make                             | n8n (self-hosted)      |
|------------------------|---------------------------------|----------------------------------|------------------------|
| Pricing model          | Per task (each action = 1 task) | Per operation (each step = 1 op) | Unlimited              |
| Units consumed/month   | 12,000 tasks                    | 15,000 operations                | N/A                    |
| Estimated monthly cost | See Zapier pricing              | See Make pricing                 | ~€6-15/mo (VPS)        |
| Cost at 10x scale      | Scales linearly with usage      | Scales linearly with usage       | ~€6-15/mo (same VPS)   |

Zapier counts 4 tasks per execution (trigger is free, each action costs a task). At 12,000 tasks/month on this scenario, you need a plan well above their entry tiers. Check Zapier's pricing page for current rates. Make counts every step including the trigger, so 5 operations per run. At 15,000 operations/month, you likely exceed the included credits on their lower plans. Check Make's pricing page for current rates.
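The arithmetic behind those numbers is worth making explicit, since the billing units differ between vendors:

```python
# The scenario from the table: a 5-step workflow (1 trigger + 4 actions)
# running 100 times per day, ~30 days per month.
runs_per_month = 100 * 30                   # 3,000 executions

# Zapier bills per task: the trigger is free, each of the 4 actions is 1 task.
zapier_tasks = runs_per_month * 4           # 12,000 tasks

# Make bills per operation: every step counts, trigger included, so 5 per run.
make_ops = runs_per_month * 5               # 15,000 operations

print(zapier_tasks, make_ops)               # 12000 15000

# At 10x the volume, the SaaS counters scale linearly;
# the flat VPS bill does not change.
print(zapier_tasks * 10, make_ops * 10)     # 120000 150000
```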

With n8n on a VPS, the cost stays flat. A VPS with 4 vCPU and 8 GB RAM handles thousands of daily executions without issues. The only variable cost is storage if you keep extensive execution logs.

The math gets worse at scale. Double your automations on Zapier and your bill doubles. Double them on a VPS and nothing changes. For indie hackers running multiple side projects or AI developers chaining dozens of LLM calls, self-hosting pays for itself within the first month.

How does self-hosting protect your data?

When you use Zapier or Make, every piece of data flowing through your workflows passes through their servers. API keys, customer emails, database credentials, webhook payloads. All of it lives on infrastructure you do not control, usually in US data centers.

Self-hosting means:

  • Your API keys and database passwords never leave your VPS. They are stored in n8n's encrypted database, on disk you control.
  • Execution logs, input/output payloads, and error details live in your local PostgreSQL instance. Not on someone else's infrastructure.
  • You choose the data center. Host in Frankfurt, Amsterdam, or Paris. Your data stays in the EU if GDPR compliance matters to you.
  • No vendor employee can view your workflow configurations or execution history.

This is not theoretical. If you connect your Stripe account to Zapier, Zapier stores your Stripe API key on their infrastructure. With self-hosted n8n, that key exists only in your encrypted database on your VPS.

If you handle customer data or payment information, you probably already know which option you prefer. For a deeper comparison, see n8n vs Zapier vs Make: Cost, Privacy and GDPR Compared.

What about control and flexibility?

  • Run workflows every second if you need to. No vendor-imposed minimum intervals, no task caps.
  • Write JavaScript or Python directly inside your workflow when a pre-built integration does not exist.
  • Connect to Ollama running on the same server. Your prompts and model responses never leave your machine.
  • Export workflows as JSON, store them in Git, deploy across environments.
  • If n8n disappears tomorrow, your workflows are JSON files you can read and adapt.

The tradeoff is real. Self-hosting means you handle updates, backups, security, and uptime yourself. There is no support team to call at 2 AM. For operational guidance, see Backup and Update n8n in Production (Docker Compose + PostgreSQL).

Which open-source workflow automation tool should you use?

The self-hosted workflow automation space has four serious contenders: n8n, Activepieces, Windmill, and Automatisch.

|                      | n8n                          | Activepieces                      | Windmill               | Automatisch               |
|----------------------|------------------------------|-----------------------------------|------------------------|---------------------------|
| License              | Sustainable Use (fair-code)  | MIT                               | AGPLv3                 | AGPLv3                    |
| Core integrations    | 400+ (4,000+ with community) | 450+                              | 100+ (code-first)      | 80+                       |
| Visual editor        | Yes                          | Yes                               | Yes (+ code IDE)       | Yes                       |
| AI nodes             | 70+ (LangChain-based)        | MCP support, AI agents            | Python/TS scripts      | Limited                   |
| Self-host difficulty | Docker Compose, 10 min       | Docker Compose, 10 min            | Docker, 3 min          | Docker Compose, 10 min    |
| Best for             | General automation + AI      | Business ops, non-technical users | Developer-heavy teams  | Simple Zapier replacement |

n8n works for both non-technical users and developers. Business users build workflows in the drag-and-drop editor. Developers drop into JavaScript or Python code nodes when they hit limits. The AI integration goes deeper than any other open-source automation tool.

With over 400 built-in integrations and thousands of community-contributed nodes, n8n covers most common services out of the box. When it does not, the HTTP Request node and Code node let you connect to anything with an API.
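To make the HTTP Request fallback concrete, here is what such a call looks like in plain Python. The endpoint, query parameters, and token are hypothetical placeholders; the sketch only builds and inspects the request so it stays offline.

```python
import json
import urllib.parse
import urllib.request

# An authenticated GET against an arbitrary JSON API, the kind of call the
# HTTP Request node lets you configure visually. Everything below the
# comment markers is a made-up example endpoint.
base_url = "https://api.example.com/v1/items"   # hypothetical API
params = urllib.parse.urlencode({"status": "active", "limit": 10})

req = urllib.request.Request(
    f"{base_url}?{params}",
    headers={
        "Authorization": "Bearer YOUR_API_TOKEN",  # hypothetical credential
        "Accept": "application/json",
    },
)

# urllib.request.urlopen(req) would actually send it; here we just
# inspect what would go over the wire.
print(req.full_url)
print(req.get_header("Authorization"))
```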

n8n's fair-code Sustainable Use License means the source code is open and you can self-host for free. The restriction: you cannot offer n8n as a hosted service to others. If you are self-hosting for your own projects or your company's internal use, there is no cost.

That said, n8n is not the only good option:

  • Activepieces has a proper MIT license (fully open source) and better MCP support for AI agents. If licensing matters to you or you need MCP server integration, look there first.
  • Windmill is better for developer teams that prefer writing TypeScript or Python over dragging nodes. If your workflows are mostly code, Windmill's script-first approach will feel more natural.
  • Be aware that n8n's integration count inflates when you include community nodes. The core 400+ are well-maintained. Community nodes vary in quality.

For most people reading this, n8n is the right starting point. Largest community, most tutorials available, and it handles both visual and code-based workflow building well. See Install n8n with Docker Compose on a VPS.

What does n8n do?

n8n is a workflow automation platform you run on your own server. You build workflows in a visual editor by connecting nodes. Each node performs one action: read from a database, call an API, transform data, send a message, or run AI inference.

Here is what n8n 2.x (current stable, released December 2025) gives you:

  • Drag-and-drop visual editor. Connect nodes with wires. Test individual nodes or run the full workflow.
  • 400+ built-in integrations: Slack, GitHub, PostgreSQL, Google Sheets, Stripe, Notion, and more.
  • Code nodes for JavaScript or Python when you need custom logic. Full access to npm packages.
  • Webhook triggers that expose HTTP endpoints for instant workflow execution. No polling delays.
  • Built-in retry logic, error workflows, and execution logging.
  • Human-in-the-loop (new in 2.0): require human approval before an AI agent executes specific tools. Useful for production AI workflows where you do not want full autonomy.
  • Save vs. Publish (new in 2.0): edit workflows without affecting the live version. Publish when ready.
  • Task Runners (default in 2.0): code nodes run in isolated processes. A runaway script cannot crash your entire n8n instance.
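The webhook-trigger feature from the list above is just an HTTP endpoint that fires your workflow the instant a request arrives. The self-contained sketch below simulates it with Python's standard library: a tiny local server stands in for n8n's webhook listener, and a POST stands in for the external service calling it (the `/webhook/contact-form` path is a hypothetical example).

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for an n8n Webhook trigger node: an HTTP endpoint that receives
# a payload and starts processing immediately, with no polling delay.
received = []

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        received.append(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'{"ok": true}')

    def log_message(self, *args):   # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), WebhookHandler)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate a form service POSTing to the webhook URL n8n would expose,
# e.g. https://n8n.example.com/webhook/contact-form (hypothetical path).
payload = json.dumps({"email": "jane@example.com", "message": "Hi"}).encode()
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/webhook/contact-form",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)              # 200

server.shutdown()
print(received[0]["email"])         # jane@example.com
```

In production, that endpoint sits behind your Nginx reverse proxy with TLS, and n8n runs the rest of the workflow on each hit.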

How do n8n's AI workflow nodes work?

n8n ships over 70 AI-specific nodes built on the LangChain framework. This is what separates it from a simple Zapier clone.

The AI node system is hierarchical:

  • Agent nodes receive a prompt, decide which tools to call, and chain multiple steps together.
  • Model nodes connect to LLM providers: OpenAI (GPT-4o), Anthropic (Claude), Google Gemini, Mistral, or local models through Ollama.
  • Memory nodes keep conversation context across workflow executions using window buffers or summary buffers.
  • Tool nodes give the agent abilities: search the web, query a database, call an API, read a document.
  • Vector store nodes connect to Pinecone, Qdrant, or Supabase for retrieval-augmented generation (RAG).

A practical example: AI customer support. Email arrives, triggers the workflow, agent node classifies the intent using Claude, pulls relevant docs from a vector store, drafts a response, and queues it for human review with the human-in-the-loop feature. One visual workflow. No code beyond the prompt templates.

The Ollama integration matters most for self-hosters. Run Llama 3, Mistral, or Phi locally on the same VPS. No API calls, no per-token billing, no data leaving your infrastructure. Pair that with a European VPS and your AI automation pipeline stays under your control.

You can also mix models in the same workflow. Use a cheap local model for classification (is this email a complaint or a question?) and route only the complex cases to Claude or GPT-4o. This keeps API costs low while still getting high-quality output where it matters. For a walkthrough, see Build AI Workflows in n8n with Ollama and Claude on a VPS.
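The tiered-routing idea can be sketched as plain Python. The two model functions below are stubs standing in for real calls (`cheap_classify` for a local Ollama model, `frontier_answer` for a Claude or GPT-4o API call); names, labels, and the confidence threshold are all illustrative.

```python
def cheap_classify(email_body):
    """Stub for a small local model: returns (label, confidence)."""
    if "refund" in email_body.lower():
        return "complaint", 0.62            # ambiguous case -> low confidence
    return "question", 0.95

def template_reply(label):
    """Cheap path: canned response, no API cost."""
    return f"Auto-reply for a {label}."

def frontier_answer(email_body):
    """Stub for an expensive frontier-model call."""
    return "Drafted by the large model, queued for human review."

def route(email_body, threshold=0.8):
    label, confidence = cheap_classify(email_body)
    # Only low-confidence or sensitive cases pay for the expensive model.
    if confidence < threshold or label == "complaint":
        return frontier_answer(email_body)
    return template_reply(label)

print(route("Where can I find my invoice?"))   # cheap local path
print(route("I want a refund now"))            # escalated to the big model
```

In n8n the same logic is an IF node between the classifier's model node and the two downstream branches.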

What do you need to run n8n on a VPS?

Running n8n requires a VPS, Docker, and a domain name. The setup takes about 10 minutes with Docker Compose.

VPS requirements:

  1. 2 vCPU minimum, 4 vCPU recommended. More cores help when running multiple workflows concurrently or using AI nodes.
  2. 4 GB RAM minimum, 8 GB recommended. n8n itself uses around 500 MB. PostgreSQL and heavy workflows use the rest. Add more if running Ollama on the same server.
  3. 20 GB storage minimum. Execution logs grow over time. Plan for 40-80 GB if you keep logs long-term.
  4. Ubuntu 24.04 LTS or Debian 12. Both work well with Docker.
  5. A public IPv4 address and a domain name pointing to it (A record).

Software stack:

  • Docker and Docker Compose to run n8n and PostgreSQL as containers
  • Nginx as a reverse proxy with TLS termination (Let's Encrypt)
  • PostgreSQL as the database backend (SQLite works for testing but not production)
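For orientation, the stack above translates into a Docker Compose file along these lines. This is a minimal sketch, not a production config: check n8n's official Docker documentation for current image tags and variables, replace the placeholder password, and note that `n8n.example.com` is a stand-in for your own domain.

```yaml
services:
  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: change-me          # use a real secret
    volumes:
      - pg_data:/var/lib/postgresql/data

  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: unless-stopped
    depends_on:
      - postgres
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: change-me
      N8N_HOST: n8n.example.com             # your domain (A record)
      WEBHOOK_URL: https://n8n.example.com/
    ports:
      - "127.0.0.1:5678:5678"               # Nginx terminates TLS and proxies to 5678
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  pg_data:
  n8n_data:
```

Binding the port to 127.0.0.1 keeps n8n off the public internet; only the Nginx reverse proxy with its Let's Encrypt certificate is exposed.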

A VPS with 4 vCPU and 8 GB RAM handles hundreds of workflows and thousands of daily executions without issues. That covers most indie hackers and small teams. For running n8n alongside other containers, see Docker Compose for Multi-Service VPS Deployments.

If you plan to run Ollama alongside n8n for local AI inference, consider 16 GB RAM or more depending on model size. A 7B parameter model needs around 4-6 GB of RAM on its own.
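The rough arithmetic behind that 4-6 GB figure, assuming weights quantized to 4 bits per parameter (a common Ollama default) plus an illustrative allowance for KV cache and runtime buffers:

```python
# Back-of-envelope memory estimate for a 7B-parameter model.
params = 7e9
weight_gb = params * 4 / 8 / 1e9          # 4-bit weights -> 3.5 GB
overhead_gb = 1.5                         # illustrative KV cache + runtime buffers

print(round(weight_gb, 1))                # 3.5
print(round(weight_gb + overhead_gb, 1))  # 5.0, inside the 4-6 GB range
```

Higher-precision quantization (8-bit) or long context windows push the total up, which is why the safe planning range is 4-6 GB per 7B model.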

The tradeoffs of self-hosting

Self-hosting is not free in the "zero effort" sense. You trade SaaS subscription costs for operational responsibility.

You own these:

  • Updates: upgrading n8n, Docker, and the OS when new releases land
  • Backups: database dumps and workflow exports, tested regularly
  • Security: firewall rules, TLS certificates, and credential hygiene
  • Uptime: monitoring the instance and restarting services when something breaks

When SaaS might be the better choice:

  • You run fewer than 5 simple workflows and the free tier covers your usage
  • Your team has zero Linux experience and no interest in learning
  • You need 24/7 vendor support with SLA guarantees
  • You need integrations with niche enterprise apps that only Zapier supports (Zapier has 7,000+ integrations vs n8n's 400+)

For everyone else, the math favors self-hosting. The rest of this series walks you through every step.

Where do you go from here?

The rest of this series covers each step in depth:

  • Install n8n with Docker Compose on a VPS
  • Backup and Update n8n in Production (Docker Compose + PostgreSQL)
  • Build AI Workflows in n8n with Ollama and Claude on a VPS
  • Docker Compose for Multi-Service VPS Deployments

Looking for a VPS to run n8n? Virtua Cloud offers VPS plans for workflow automation with dedicated vCPU, NVMe storage, and European data centers. Docker is supported out of the box.