Self-Host SigNoz or OpenObserve on a VPS: Datadog Alternatives Compared
Install both SigNoz and OpenObserve on a single VPS using Docker Compose. Compare real resource usage, features, and production hardening to pick the right Datadog alternative.
Datadog charges per host, per GB, per feature. For a single VPS running a few services, the bill adds up fast. SigNoz and OpenObserve are two open-source observability platforms you can self-host on the same server you already pay for. No per-host fees. No data caps. Full control over your telemetry data.
This guide installs both tools on one VPS, measures their actual resource consumption, and gives you a data-driven framework to pick between them.
What are SigNoz and OpenObserve?
SigNoz is an open-source, OpenTelemetry-native observability platform that combines logs, metrics, and traces in a single UI. It uses ClickHouse as its storage backend. SigNoz runs as a multi-container Docker Compose stack. Expect a 1.5-2 GB RAM baseline at idle, more under load. It targets teams that need distributed tracing and APM without per-host SaaS pricing.
OpenObserve is a Rust-based observability platform focused on logs first, with metrics and traces support added over time. It ships as a single binary (one container) and can run on as little as 512 MB RAM. Storage uses the local filesystem by default, with optional S3-compatible backends. It targets solo developers and small teams who need log aggregation at scale without the overhead.
Both accept data via OpenTelemetry Protocol (OTLP), which means you can swap backends without re-instrumenting your applications.
What is OpenTelemetry and why does it matter here?
OpenTelemetry (OTel) is a vendor-neutral standard for collecting telemetry data: traces, metrics, and logs. Applications instrumented with OTel send data in OTLP format to any compatible backend. Both SigNoz and OpenObserve speak OTLP. This means you instrument your app once, then point it at whichever backend you prefer. If you later switch from OpenObserve to SigNoz (or vice versa), you change one endpoint URL. No code changes in your application. This is the main advantage of building on OpenTelemetry rather than vendor-specific agents.
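In practice, most OTel SDKs read the standard OTEL_EXPORTER_OTLP_ENDPOINT environment variable, so switching backends can be as small as this sketch (ports match the defaults used later in this guide):

```shell
# Swap observability backends by changing one standard OTel SDK variable.
# No application code changes needed -- the SDK reads this at startup.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4317"   # SigNoz OTLP gRPC
# export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:5081" # OpenObserve OTLP gRPC
echo "$OTEL_EXPORTER_OTLP_ENDPOINT"
```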
Prerequisites
You need a VPS with at least 4 vCPU and 8 GB RAM to run either tool alongside your existing workloads. A Virtua Cloud VCS-8 (4 vCPU, 8 GB RAM, NVMe storage) works well for this. Both tools need significant disk I/O, so NVMe storage matters.
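To confirm a server meets this spec before installing anything, the standard tools are enough:

```shell
# Quick sizing check: CPU count, total RAM, and free disk space
nproc       # expect 4 or more vCPUs
free -h     # expect 8 GB or more total RAM
df -h /     # confirm enough free NVMe space for telemetry data
```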
Install Docker and Docker Compose on a fresh Debian 12 or Ubuntu 24.04 server:
sudo apt update && sudo apt install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
Add the Docker repository (Debian example):
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install Docker Engine and the Compose plugin:
sudo apt update && sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Verify the installation:
docker --version
docker compose version
You should see Docker 27.x or newer and Docker Compose v2.x or newer. If either command fails, Docker did not install correctly.
Add your non-root user to the docker group so you do not need sudo for every Docker command:
sudo usermod -aG docker $USER
newgrp docker
Related guide: Docker in Production on a VPS: What Breaks and How to Fix It
How do I install SigNoz on a VPS with Docker Compose?
SigNoz ships a ready-made Docker Compose file that deploys ClickHouse, ZooKeeper, the SigNoz frontend/backend, and an OpenTelemetry Collector. Clone the repository and start the stack. At version v0.116.1, the stack uses ClickHouse 25.5.6 and includes a bundled OTel Collector v0.144.2.
Clone the SigNoz repository:
git clone -b main https://github.com/SigNoz/signoz.git
cd signoz/deploy/docker
Start the stack in detached mode:
docker compose up -d --remove-orphans
This pulls several images (ClickHouse, ZooKeeper, SigNoz, OTel Collector). First startup takes 2-5 minutes depending on your connection speed.
Verify SigNoz is running
Check all containers are healthy:
docker compose ps
You should see containers named signoz, signoz-clickhouse, signoz-zookeeper-1, signoz-otel-collector, and signoz-telemetrystore-migrator. The signoz and signoz-clickhouse containers should show healthy status. Two init containers (signoz-init-clickhouse and the migrator) run briefly during startup and then exit.
Test the web UI responds:
curl -s -o /dev/null -w "%{http_code}" http://localhost:8080
A 200 response confirms the SigNoz dashboard is reachable. Open http://YOUR_SERVER_IP:8080 in your browser and create your admin account. This step is required before the OTel Collector fully activates its OTLP receivers.
After creating the admin account, test the OTel Collector is accepting data:
curl -s -o /dev/null -w "%{http_code}" http://localhost:4318/v1/traces -X POST -H "Content-Type: application/json" -d '{"resourceSpans":[]}'
A 200 response confirms the collector is listening. If you get a connection reset, the admin account has not been created yet. Complete the initial setup in the web UI first.
SigNoz ports
| Port | Protocol | Purpose |
|---|---|---|
| 8080 | HTTP | Web dashboard |
| 4317 | gRPC | OTLP receiver (traces, metrics, logs) |
| 4318 | HTTP | OTLP receiver (traces, metrics, logs) |
How do I install OpenObserve on a VPS with Docker Compose?
OpenObserve runs as a single container. No ZooKeeper. No ClickHouse. One binary, one data directory. At version v0.70.0, the Docker image is around 430 MB. Create a Docker Compose file and start it with generated credentials.
Generate a strong password for the admin account:
OPENOBSERVE_PASSWORD=$(openssl rand -base64 32)
echo "OpenObserve admin password: $OPENOBSERVE_PASSWORD"
Save this password somewhere safe. You will need it to log in.
Create a project directory and Compose file:
mkdir -p ~/openobserve && cd ~/openobserve
cat > docker-compose.yaml << 'COMPOSE'
services:
openobserve:
image: public.ecr.aws/zinclabs/openobserve:v0.70.0
container_name: openobserve
restart: unless-stopped
ports:
- "5080:5080"
- "5081:5081"
environment:
ZO_ROOT_USER_EMAIL: "${ZO_ROOT_USER_EMAIL}"
ZO_ROOT_USER_PASSWORD: "${ZO_ROOT_USER_PASSWORD}"
ZO_DATA_DIR: "/data"
volumes:
- openobserve-data:/data
volumes:
openobserve-data:
COMPOSE
Create an environment file with restricted permissions:
cat > .env << EOF
ZO_ROOT_USER_EMAIL=admin@$(hostname -f)
ZO_ROOT_USER_PASSWORD=${OPENOBSERVE_PASSWORD}
EOF
chmod 600 .env
Start OpenObserve:
docker compose up -d
Verify OpenObserve is running
Check the container status:
docker compose ps
You should see openobserve with status Up.
Test the API responds:
curl -s -o /dev/null -w "%{http_code}" http://localhost:5080/healthz
A 200 confirms OpenObserve is healthy. Open http://YOUR_SERVER_IP:5080 in your browser and log in with the email and password from your .env file.
OpenObserve ports
| Port | Protocol | Purpose |
|---|---|---|
| 5080 | HTTP | Web UI + API + OTLP HTTP receiver |
| 5081 | gRPC | OTLP gRPC receiver |
How do I send OpenTelemetry data to SigNoz?
SigNoz bundles its own OpenTelemetry Collector. Your applications send telemetry to it on ports 4317 (gRPC) or 4318 (HTTP). If you run applications on the same VPS, point them to localhost:4317.
For applications on other servers, configure an OpenTelemetry Collector on those machines to forward data. Here is an example collector config that sends traces, metrics, and logs to your SigNoz instance:
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 5s
send_batch_size: 1000
exporters:
otlp/signoz:
endpoint: YOUR_SIGNOZ_IP:4317
tls:
insecure: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [otlp/signoz]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [otlp/signoz]
logs:
receivers: [otlp]
processors: [batch]
exporters: [otlp/signoz]
Replace YOUR_SIGNOZ_IP with your VPS IP address. The insecure: true setting disables TLS between the remote collector and SigNoz. For production, set up TLS first (see the hardening section below).
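One way to run that collector on the remote server is to mount the config into the upstream contrib image. A minimal Compose sketch (the image tag and config file name are illustrative; the contrib image reads /etc/otelcol-contrib/config.yaml by default):

```yaml
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.116.0
    container_name: otel-collector
    restart: unless-stopped
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "127.0.0.1:4317:4317"   # local apps send OTLP gRPC here
      - "127.0.0.1:4318:4318"   # local apps send OTLP HTTP here
```

Binding the receiver ports to 127.0.0.1 keeps them private to that server; local applications forward through the collector, which batches and relays to SigNoz.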
Quick test with curl
Send a test log entry directly to the SigNoz OTel Collector:
curl -X POST http://localhost:4318/v1/logs \
-H "Content-Type: application/json" \
-d '{
"resourceLogs": [{
"resource": {"attributes": [{"key": "service.name", "value": {"stringValue": "test-service"}}]},
"scopeLogs": [{
"logRecords": [{
"timeUnixNano": "'$(date +%s)000000000'",
"body": {"stringValue": "Hello from test"},
"severityText": "INFO"
}]
}]
}]
}'
After sending, open the SigNoz UI at port 8080, navigate to Logs, and verify the test entry appears. If it does not show up within 30 seconds, check the collector logs:
docker logs signoz-otel-collector --tail 50
How do I send logs to OpenObserve?
OpenObserve accepts data via its built-in OTLP endpoints. No separate collector needed. Configure your applications or an external OTel Collector to send to OpenObserve directly.
First, generate base64-encoded credentials for the Authorization header:
CREDS=$(echo -n "admin@$(hostname -f):${OPENOBSERVE_PASSWORD}" | base64 -w 0)
echo "Authorization header: Basic $CREDS"
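Before wiring the header into a collector config, it is worth sanity-checking that the encoding round-trips (the example values below are illustrative):

```shell
# Encode, then decode again -- the output must match the original exactly.
CREDS=$(echo -n "admin@example.com:s3cret" | base64 -w 0)
echo "$CREDS" | base64 -d
# prints: admin@example.com:s3cret
```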
OTel Collector config for OpenObserve
exporters:
otlphttp/openobserve:
endpoint: http://YOUR_OPENOBSERVE_IP:5080/api/default
headers:
Authorization: "Basic YOUR_BASE64_CREDENTIALS"
service:
pipelines:
logs:
receivers: [otlp]
processors: [batch]
exporters: [otlphttp/openobserve]
traces:
receivers: [otlp]
processors: [batch]
exporters: [otlphttp/openobserve]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [otlphttp/openobserve]
Replace YOUR_OPENOBSERVE_IP and YOUR_BASE64_CREDENTIALS with your actual values. The endpoint path must not have a trailing slash.
Quick test with curl
Send a test log entry:
curl -X POST http://localhost:5080/api/default/v1/logs \
-H "Content-Type: application/json" \
-H "Authorization: Basic $(echo -n "admin@$(hostname -f):${OPENOBSERVE_PASSWORD}" | base64 -w 0)" \
-d '{
"resourceLogs": [{
"resource": {"attributes": [{"key": "service.name", "value": {"stringValue": "test-service"}}]},
"scopeLogs": [{
"logRecords": [{
"timeUnixNano": "'$(date +%s)000000000'",
"body": {"stringValue": "Hello from test"},
"severityText": "INFO"
}]
}]
}]
}'
Check the OpenObserve UI at port 5080. Navigate to Logs and search for the test entry. If nothing appears, check container logs:
docker logs openobserve --tail 50
How much RAM does SigNoz use compared to OpenObserve?
This is the question that drives most decisions between these two tools. Here are measurements taken on a Virtua Cloud VCS-8 (4 vCPU, 8 GB RAM, NVMe) running Debian 12, with each tool deployed individually.
Measure resource usage with docker stats --no-stream:
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
Resource comparison at idle (no telemetry flowing)
| Metric | SigNoz (full stack) | OpenObserve (single container) |
|---|---|---|
| RAM usage | ~1.6 GB (ClickHouse ~775 MB, ZooKeeper ~775 MB, SigNoz ~50 MB, OTel Collector ~35 MB) | ~310 MB |
| CPU usage | 2-4% | <1% |
| Disk (images) | ~2.7 GB | ~430 MB |
| Container count | 4 running + 2 init (exit after startup) | 1 |
Resource comparison under moderate load (~1,000 log lines/sec, 100 traces/sec)
| Metric | SigNoz | OpenObserve |
|---|---|---|
| RAM usage | ~3.4 GB | ~650 MB |
| CPU usage | 15-25% | 5-10% |
| Disk growth (24h) | ~4.2 GB | ~1.1 GB |
Notice that SigNoz uses roughly 5x the RAM and 4x the disk of OpenObserve for the same workload. This is the cost of ClickHouse, which provides fast columnar queries on structured trace data. OpenObserve keeps storage low through aggressive compression (the project claims up to 140x savings versus Elasticsearch; the numbers above at least confirm substantial compression).
On an 8 GB VPS, SigNoz leaves you around 6 GB for your actual applications at idle. OpenObserve leaves you over 7.5 GB. Under load, the gap widens. If your VPS also runs application workloads, this difference matters.
How to reproduce these measurements
Run each tool individually (not both at the same time) to get clean numbers. Start the stack, wait 5 minutes for initialization to settle, then measure idle usage. For loaded measurements, use a log generator like flog to produce synthetic logs, then feed them to the OTel Collector via its HTTP endpoint.
Measure with docker stats --no-stream every 5 minutes over a 1-hour window, then average the results. Disk growth is measured with docker system df -v at the start and end of a 24-hour test window.
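The averaging step can be scripted with awk. docker stats emits memory values like 312.4MiB; collecting one sample per interval into a file and averaging looks like this sketch (the sample values below stand in for real docker stats output):

```shell
# In practice, collect samples with something like:
#   docker stats --no-stream --format "{{.MemUsage}}" openobserve >> mem_samples.txt
# run from cron every 5 minutes. Sample data stands in for real output here:
printf '312.4MiB\n305.9MiB\n318.2MiB\n' > mem_samples.txt

# Strip the unit suffix and average the values
awk '{gsub(/MiB/,""); sum+=$1} END {printf "avg: %.1f MiB\n", sum/NR}' mem_samples.txt
# prints: avg: 312.2 MiB
```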
Feature comparison: SigNoz vs OpenObserve
| Feature | SigNoz | OpenObserve |
|---|---|---|
| Logs | Yes (ClickHouse-backed, SQL queries) | Yes (primary focus, full-text search) |
| Metrics | Yes (ClickHouse-backed, PromQL support) | Yes (built-in) |
| Traces | Yes (full distributed tracing, service maps) | Yes (basic trace support) |
| APM | Yes (latency, error rates, throughput per service) | Limited (no full APM dashboards) |
| Alerting | Yes (built-in, PagerDuty/Slack/webhook) | Yes (built-in, multi-channel) |
| Dashboards | Yes (built-in) | Yes (built-in) |
| Query language | SQL (ClickHouse), PromQL for metrics | SQL, PromQL for metrics |
| Auth | Built-in (email/password, SSO in enterprise) | Built-in (email/password, RBAC) |
| Storage backend | ClickHouse (local disk) | Local disk, S3-compatible |
| Default retention | Logs/traces: 7 days, Metrics: 30 days | Configurable, no hard default |
| OpenTelemetry | Native (OTLP gRPC + HTTP) | Supported (OTLP gRPC + HTTP) |
| License | MIT (core), Enterprise features paid | AGPL v3 (core), Enterprise paid |
SigNoz has deeper tracing and APM capabilities. If you run microservices and need flame graphs, service dependency maps, and p99 latency tracking, SigNoz is the stronger choice.
OpenObserve handles logs better per resource spent. If you primarily need log aggregation and search with some metrics on top, OpenObserve gives you more headroom on a single VPS.
How do I secure SigNoz and OpenObserve with TLS?
Neither tool should be exposed directly to the internet without TLS and proper authentication. Use Nginx as a reverse proxy with Let's Encrypt certificates for both UIs.
Related guides: How to Configure Nginx as a Reverse Proxy, and Set Up Let's Encrypt SSL/TLS for Nginx on Debian 12 and Ubuntu 24.04
Install Nginx and Certbot
sudo apt install -y nginx certbot python3-certbot-nginx
Nginx config for SigNoz
Create /etc/nginx/sites-available/signoz:
server {
listen 80;
server_name signoz.example.com;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
Nginx config for OpenObserve
Create /etc/nginx/sites-available/openobserve:
server {
listen 80;
server_name openobserve.example.com;
location / {
proxy_pass http://127.0.0.1:5080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
Enable both sites and obtain TLS certificates:
sudo ln -s /etc/nginx/sites-available/signoz /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/openobserve /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Verify the config test passes before reloading. Then request certificates:
sudo certbot --nginx -d signoz.example.com
sudo certbot --nginx -d openobserve.example.com
Certbot modifies the Nginx config files to add TLS. Verify it worked:
curl -s -o /dev/null -w "%{http_code}" https://signoz.example.com
curl -s -o /dev/null -w "%{http_code}" https://openobserve.example.com
Both should return 200.
Hide version information
Edit /etc/nginx/nginx.conf and add inside the http block:
server_tokens off;
This prevents Nginx from disclosing its version in HTTP headers. Version disclosure helps attackers target known vulnerabilities.
Bind services to localhost only
After setting up the reverse proxy, restrict SigNoz and OpenObserve to only accept connections from localhost. This prevents direct access bypassing Nginx.
For SigNoz, edit signoz/deploy/docker/docker-compose.yaml and change the port binding:
ports:
- "127.0.0.1:8080:8080"
For OpenObserve, edit ~/openobserve/docker-compose.yaml:
ports:
- "127.0.0.1:5080:5080"
- "127.0.0.1:5081:5081"
Restart both stacks after the change:
cd ~/signoz/deploy/docker && docker compose up -d
cd ~/openobserve && docker compose up -d
Firewall rules
Allow only SSH, HTTP, and HTTPS by default. Only open OTLP ports if you receive telemetry from remote servers, and restrict them to known source IPs:
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
If you need to receive OTLP data from remote application servers, do not open the ports to the entire internet. Restrict access to specific source IPs:
sudo ufw allow from 203.0.113.10 to any port 4317 comment 'OTLP gRPC from app-server-1'
sudo ufw allow from 203.0.113.10 to any port 4318 comment 'OTLP HTTP from app-server-1'
Replace 203.0.113.10 with your actual application server IP. Repeat for each server that sends telemetry.
If you use OpenObserve's OTLP endpoints from remote servers, also allow port 5081 (gRPC) from those specific IPs. For HTTP ingestion, route it through Nginx instead of exposing port 5080 directly.
Verify the firewall rules:
sudo ufw status verbose
Related guide: How to Set Up a Linux VPS Firewall with UFW and nftables
Production hardening
Docker resource limits
Prevent either tool from consuming all VPS memory by setting limits in Docker Compose.
For SigNoz, add to the signoz service and clickhouse service in the Compose file:
deploy:
resources:
limits:
memory: 512M
# For clickhouse service
deploy:
resources:
limits:
memory: 2G
For OpenObserve:
deploy:
resources:
limits:
memory: 1G
These limits prevent runaway memory usage from crashing your VPS. Adjust based on your workload.
Retention policies
SigNoz defaults to 7 days for logs/traces and 30 days for metrics. Change these in the SigNoz UI under Settings > General > Retention. For a single VPS, 7 days for logs and 15 days for metrics keeps disk usage manageable.
OpenObserve retention is configured via environment variables. Note that values in .env are only used for variable substitution in the Compose file; to reach the container, the variable must also appear under environment in docker-compose.yaml:
ZO_COMPACT_DATA_RETENTION_DAYS: "7"
Recreate the container after changing retention settings:
docker compose up -d --force-recreate
Backups
For SigNoz, back up the ClickHouse data volume:
docker compose stop clickhouse
sudo mkdir -p /backup
sudo tar czf /backup/signoz-clickhouse-$(date +%Y%m%d).tar.gz \
-C /var/lib/docker/volumes/ $(docker volume inspect signoz-clickhouse -f '{{.Name}}')
docker compose start clickhouse
For OpenObserve, back up the data volume:
docker compose stop openobserve
sudo mkdir -p /backup
sudo tar czf /backup/openobserve-data-$(date +%Y%m%d).tar.gz \
-C /var/lib/docker/volumes/ $(docker volume inspect openobserve_openobserve-data -f '{{.Name}}')
docker compose start openobserve
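To automate this, a cron entry can run the archive nightly. A sketch assuming the paths above (the user path and schedule are illustrative; note that % must be escaped as \% in crontab files, and a crontab command must stay on a single line):

```shell
# /etc/cron.d/openobserve-backup -- nightly at 03:00
0 3 * * * root cd /home/youruser/openobserve && docker compose stop openobserve && tar czf /backup/openobserve-data-$(date +\%Y\%m\%d).tar.gz -C /var/lib/docker/volumes/ openobserve_openobserve-data && docker compose start openobserve
```

Copy the resulting archives off the VPS (for example with rsync or to object storage); a backup that lives only on the server it protects is not a backup.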
Logging and monitoring
Check service logs with journalctl or docker logs:
# SigNoz components
docker logs signoz --tail 100
docker logs signoz-clickhouse --tail 100
docker logs signoz-otel-collector --tail 100
# OpenObserve
docker logs openobserve --tail 100
Follow logs in real time:
docker logs -f signoz-otel-collector
Run sudo systemctl enable --now docker to ensure Docker starts on boot. The restart: unless-stopped policy in Docker Compose handles container restarts. enable makes Docker survive reboots; --now starts it immediately.
Should I choose SigNoz or OpenObserve for my VPS?
Pick the tool that matches your primary need. Do not choose based on feature count alone. Choose based on what you will actually use daily.
Choose SigNoz if:
- You run microservices and need distributed tracing with service maps and flame graphs.
- Your primary debugging workflow involves correlating traces with logs and metrics.
- You have 8 GB+ RAM to spare for the observability stack alone.
- Your team already uses or plans to use OpenTelemetry instrumentation heavily.
- You need PromQL-compatible metrics querying.
Choose OpenObserve if:
- Your primary need is log aggregation and search.
- You run on a VPS with limited RAM (4-8 GB total, shared with applications).
- You want the simplest possible deployment (one container, one data directory).
- You plan to use S3-compatible object storage for long-term retention.
- You are a solo developer or small team that does not need full APM.
Choose neither (stay with Datadog/New Relic) if:
- You need zero operational overhead and your budget supports it.
- Your compliance requirements mandate a SOC 2 certified vendor.
- You do not have time to maintain self-hosted infrastructure.
How much does self-hosted observability cost compared to Datadog?
The primary cost of self-hosting is the VPS itself. There are no per-host fees, no per-GB ingestion charges, and no feature-gated pricing.
Datadog uses a per-host pricing model for infrastructure monitoring and APM, plus per-GB charges for log ingestion and per-event charges for log indexing. These costs stack quickly for small teams. Check the Datadog pricing page for current rates. A typical setup with 5 hosts and 10 GB/day of logs can easily reach hundreds of euros per month across infrastructure monitoring, APM, and log management combined.
With self-hosting on a Virtua Cloud VCS-8, your total cost is the VPS itself. All monitoring, APM, log ingestion, and retention are included at no extra charge. You pay with your time instead: updates, backups, troubleshooting. For a solo developer or small team that already manages a VPS, this tradeoff usually makes sense.
For teams with 20+ hosts and dedicated SRE staff, managed services may cost less in total engineering hours. For 1-10 hosts, self-hosting saves hundreds of euros per month.
New Relic offers a free tier with limited data ingest that works for very small setups. Check their pricing page for current limits. If your data volume fits within the free tier, the managed option beats self-hosting in operational overhead. Once you exceed it, pricing ramps quickly and self-hosting becomes attractive again.
Related guide: AIOps on a VPS: AI-Driven Server Management with Open-Source Tools
Something went wrong?
SigNoz containers keep restarting
Check ClickHouse health first. It is the most common failure point:
docker logs signoz-clickhouse --tail 100
If you see out-of-memory errors, ClickHouse needs more RAM. Either increase Docker memory limits or reduce max_memory_usage in the ClickHouse config.
OpenObserve returns 401 on API calls
The credentials in your Authorization header must be base64-encoded as email:password. Verify:
echo -n "your-email:your-password" | base64 -w 0
Compare the output with what you are sending. The -w 0 flag prevents base64 from inserting line breaks in the output. Without it, long credentials get split across multiple lines, which breaks the Authorization header.
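The effect is easy to demonstrate with a credential string longer than 76 characters:

```shell
# A 100-character credential string (illustrative)
long=$(printf 'a%.0s' $(seq 1 100))

# Default GNU base64 wraps output at 76 columns -- two lines:
echo -n "$long" | base64 | wc -l
# prints: 2

# With -w 0 the encoded string contains no embedded newlines:
printf '%s' "$(echo -n "$long" | base64 -w 0)" | wc -l
# prints: 0
```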
OTLP data is not showing up
- Verify the collector or application can reach the target port: curl http://TARGET_IP:4318/v1/traces
- Check firewall rules: sudo ufw status
- Check whether the port is bound to localhost only (127.0.0.1) when it should accept remote connections.
- Read the collector logs for rejected or dropped spans.
Ports are not reachable from outside
If you set port bindings to 127.0.0.1:PORT:PORT, those ports are intentionally localhost-only. Remote OTLP collectors need to reach the ingestion ports. You have two options: open specific ports in ufw and bind them to 0.0.0.0, or route everything through the Nginx reverse proxy (preferred for HTTP, not practical for gRPC).
For gRPC OTLP ingestion from remote hosts, bind the OTLP ports to all interfaces and protect them with firewall rules that only allow your known application server IPs:
sudo ufw allow from 203.0.113.10 to any port 4317 comment 'OTLP gRPC from app-server-1'
High disk usage
Check which tool is consuming space:
docker system df -v
Reduce retention periods or enable compression. For SigNoz, ClickHouse compresses data automatically but retention is the main lever. For OpenObserve, check the ZO_COMPACT_DATA_RETENTION_DAYS setting.
Copyright 2026 Virtua.Cloud. All rights reserved. This content is original work by the Virtua.Cloud team. Reproduction, republication, or redistribution without written permission is prohibited.
Ready to try it yourself?
Deploy your own server in seconds. Linux, Windows, or FreeBSD. →