Tutorials

Tutorial

AI Log Analysis with Ollama on a VPS: Detect Anomalies with a Local LLM

Build a production-ready AI log analysis pipeline on your VPS. Query Loki for logs, classify anomalies with a local LLM via Ollama, and route alerts to Discord or Slack using a Python script and systemd timer.

13 min read

Build a Self-Healing VPS with Prometheus and Ollama

Wire Prometheus alerts to a local LLM that diagnoses failures and executes safe remediation actions on your VPS. Full working code with allowlists, dry-run mode, and human approval controls.

12 min read

Build AI Workflows in n8n with Ollama and Claude on a VPS

Connect n8n to AI models through two paths: Ollama for free local inference and the Claude API for cloud intelligence. Build a content classification workflow with both, running on a self-hosted VPS.

13 min read

How to Configure Nginx as a Reverse Proxy

Configure Nginx as a reverse proxy for Node.js, Ollama, and other backends. Covers proxy_pass, header forwarding, WebSocket support, upstream load balancing, and production timeout tuning.

11 min read