Saturday, March 21

If you’ve ever had a workflow automation platform send your customer data through a third-party cloud you don’t control, you already know why people run an n8n self-hosted setup. The cloud version of n8n is fine for prototyping, but the moment you’re handling PII, API keys, internal credentials, or anything that touches compliance requirements, running it on your own infrastructure stops being optional. This guide covers the full path: Docker deployment, reverse proxy with SSL, authentication hardening, backup strategies, and the failure modes nobody documents until you’re already in production.

Why Self-Host n8n Instead of Using the Cloud Version

The n8n cloud offering starts at $20/month for 5 active workflows and 2,500 executions. That sounds reasonable until you’re running 50 workflows that each trigger dozens of times per day. More importantly, the cloud version processes your workflow data — including webhook payloads, API responses, and credentials — on n8n’s infrastructure. For most enterprise use cases, that’s a non-starter.

Self-hosting gives you:

  • Data sovereignty — execution data never leaves your network
  • Unlimited executions — the only limit is your hardware
  • Custom integrations — install community nodes without restriction
  • Full credential control — secrets stay in your vault, not n8n’s
  • Cost predictability — a $20/month VPS can handle serious workflow volume

The tradeoff is real though: you own the ops burden. Updates, backups, monitoring, SSL renewal — that’s on you. Budget a few hours per month for maintenance if you’re running this seriously.

Infrastructure Requirements Before You Start

Don’t underestimate what n8n actually needs. The minimum for a usable self-hosted instance:

  • 2 vCPUs, 2GB RAM (4GB recommended for concurrent workflow execution)
  • Docker and Docker Compose installed
  • A domain name you control (for SSL — don’t skip this)
  • PostgreSQL 13+ (SQLite works but will cause pain at any real workflow volume)

I’d pick a 4GB RAM VPS on Hetzner or DigitalOcean for most teams — roughly $20-24/month. That handles well over a million executions per month without breaking a sweat. AWS EC2 works fine but costs 2-3x more for equivalent specs without giving you anything meaningful in return for a single-instance setup.

Docker Compose Setup: The Actual Working Config

Here’s a production-ready docker-compose.yml that avoids the common mistakes. The official docs give you the bare minimum; this is what you actually want running:

version: '3.8'  # Compose v2 ignores this key; harmless to keep or drop

services:
  postgres:
    image: postgres:15-alpine
    restart: always
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}  # Never hardcode this
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      timeout: 5s
      retries: 5

  n8n:
    image: n8nio/n8n:latest  # Pin this to a specific version in production
    restart: always
    ports:
      - "127.0.0.1:5678:5678"  # Only bind to localhost — nginx handles external traffic
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_HOST=${DOMAIN}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${DOMAIN}/
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=${N8N_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_PASSWORD}
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=168  # 7 days in hours
      - N8N_ENCRYPTION_KEY=${ENCRYPTION_KEY}  # 32-char random string
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy

volumes:
  postgres_data:
  n8n_data:

Two things here that the official docs gloss over: bind n8n to 127.0.0.1, not 0.0.0.0, and set N8N_ENCRYPTION_KEY before you add any credentials. If you add credentials first and then set the key later, they won’t decrypt and you’ll have to re-enter everything. Generate your encryption key with openssl rand -hex 16 and keep it somewhere safe — losing it means losing access to all stored credentials.

Your .env File

POSTGRES_PASSWORD=use_a_real_password_here
DOMAIN=n8n.yourdomain.com
N8N_USER=admin
N8N_PASSWORD=another_strong_password
ENCRYPTION_KEY=your_32_char_hex_string_here

Never commit this file. Add it to .gitignore immediately.
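If you'd rather not invent secrets by hand, here's a small sketch that generates them with openssl and writes the file in one go. The domain and username are placeholders you'd replace; `openssl rand -hex 16` produces the 32-character hex key the compose file expects.

```shell
# Generate strong secrets and write .env (values below are placeholders)
POSTGRES_PASSWORD=$(openssl rand -base64 24)
N8N_PASSWORD=$(openssl rand -base64 24)
ENCRYPTION_KEY=$(openssl rand -hex 16)   # 32 hex characters

cat > .env <<EOF
POSTGRES_PASSWORD=$POSTGRES_PASSWORD
DOMAIN=n8n.yourdomain.com
N8N_USER=admin
N8N_PASSWORD=$N8N_PASSWORD
ENCRYPTION_KEY=$ENCRYPTION_KEY
EOF

chmod 600 .env   # readable only by the owner
```

Copy the generated ENCRYPTION_KEY into your password manager immediately — the .env file on the server should not be its only home.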

Nginx Reverse Proxy and SSL Configuration

Don’t expose n8n directly. Run it behind nginx with Certbot for SSL. Here’s a complete nginx config:

server {
    listen 80;
    server_name n8n.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name n8n.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/n8n.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/n8n.yourdomain.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    # Required for n8n's SSE-based push updates in the editor
    proxy_buffering off;

    location / {
        proxy_pass http://localhost:5678;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Increase timeouts for long-running workflows
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
    }
}

The proxy_buffering off line is non-obvious but important. Without it, the n8n editor’s real-time execution feedback breaks because nginx buffers the server-sent events. The proxy timeout settings matter too — default nginx timeouts will kill workflows that take more than 60 seconds, which happens constantly with LLM API calls.

Get your cert: certbot --nginx -d n8n.yourdomain.com and set up auto-renewal with a cron job or systemd timer.
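If you prefer a systemd timer over cron, a minimal sketch looks like the following. The unit names are my own, and most distro certbot packages already install an equivalent timer — check `systemctl list-timers` before adding your own.

```ini
# /etc/systemd/system/certbot-renew.service
[Unit]
Description=Renew Let's Encrypt certificates

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --quiet --deploy-hook "systemctl reload nginx"

# /etc/systemd/system/certbot-renew.timer
[Unit]
Description=Run certbot renew twice daily

[Timer]
OnCalendar=*-*-* 03,15:20:00
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now certbot-renew.timer`. Certbot only renews certificates close to expiry, so running it twice a day is cheap.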

Authentication and Access Control Hardening

Basic auth from the docker-compose config above is a start, but it’s not sufficient for a team environment or anything internet-exposed. Here’s the layered approach I’d use:

Layer 1: Network-Level Restriction

If your team uses a VPN or has static IPs, restrict access at the firewall level. On UFW:

# Only allow HTTPS from your office IP and VPN range
ufw allow from 203.0.113.0/24 to any port 443
ufw deny 443  # Block everything else

This alone eliminates the entire public-internet attack surface for the editor UI. Webhooks are different — they often need to be public. If that’s your case, consider putting the editor on a separate subdomain behind IP restriction while leaving webhook endpoints open.
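One way to sketch that split inside the nginx server block from earlier: production webhook paths stay public while everything else, including the editor, is IP-restricted. The CIDRs are examples, and note that n8n's test webhooks live under a separate /webhook-test/ path, which you'd likely keep restricted along with the editor.

```nginx
# Public: production webhook endpoints only
location /webhook/ {
    proxy_pass http://localhost:5678;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

# Restricted: editor UI and everything else
location / {
    allow 203.0.113.0/24;  # office range (example)
    allow 10.8.0.0/24;     # VPN range (example)
    deny all;
    proxy_pass http://localhost:5678;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```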

Layer 2: n8n’s Built-In User Management

n8n 0.187+ ships with proper user management (email + password, not just basic auth). Enable it by removing the N8N_BASIC_AUTH_* variables and setting:

- N8N_USER_MANAGEMENT_DISABLED=false
- N8N_EMAIL_MODE=smtp
- N8N_SMTP_HOST=your_smtp_host
- N8N_SMTP_PORT=587
- N8N_SMTP_USER=your_smtp_user
- N8N_SMTP_PASS=${SMTP_PASSWORD}

This gives you per-user access, invite-based onboarding, and proper session management. I’d use this over basic auth for any team setup — basic auth credentials go in every HTTP header and it’s clunky to rotate them.

Layer 3: Fail2Ban for Brute Force Protection

# /etc/fail2ban/jail.local
[n8n]
enabled = true
port = 443
filter = n8n
logpath = /var/log/nginx/access.log
maxretry = 5
bantime = 3600

Five failed attempts gets an IP banned for an hour. Pair this with nginx rate limiting and you’ve got a reasonable brute force posture without enterprise tooling.
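One gap worth flagging: the jail above references a filter named n8n, which fail2ban doesn't ship with — without a matching filter file the jail won't start. A minimal sketch that matches repeated 401/403 responses in the nginx access log (combined log format assumed; tune the regex to your log format):

```ini
# /etc/fail2ban/filter.d/n8n.conf
[Definition]
failregex = ^<HOST> -.*"(GET|POST).*" (401|403)
ignoreregex =
```

Reload with `fail2ban-client reload` and verify the jail is live via `fail2ban-client status n8n`.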

Backup Strategy That Actually Works

The two things you need to back up: the PostgreSQL database and the n8n_data volume (which contains your workflow files, custom nodes, and the encryption key if you’re not injecting it via env var).

#!/bin/bash
# /opt/n8n-backup/backup.sh — run daily via cron
set -euo pipefail  # abort on any failure so a broken dump never ships to S3

DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/opt/n8n-backup/files"
S3_BUCKET="s3://your-backup-bucket/n8n"

mkdir -p "$BACKUP_DIR"

# Dump PostgreSQL (container names assume default Compose project naming —
# check `docker ps` and adjust to match yours)
docker exec n8n-postgres-1 pg_dump -U n8n n8n | \
  gzip > "$BACKUP_DIR/n8n_db_$DATE.sql.gz"

# Backup n8n data volume
docker run --rm \
  --volumes-from n8n-n8n-1 \
  -v "$BACKUP_DIR":/backup \
  alpine tar czf "/backup/n8n_data_$DATE.tar.gz" /home/node/.n8n

# Ship to S3 (or any offsite storage)
aws s3 sync "$BACKUP_DIR" "$S3_BUCKET"

# Prune local backups older than 7 days
find "$BACKUP_DIR" -type f -mtime +7 -delete

echo "Backup completed: $DATE"

Run this daily via cron: 0 2 * * * /opt/n8n-backup/backup.sh >> /var/log/n8n-backup.log 2>&1

Test your restores. A backup you’ve never restored is not a backup. Do a restore drill against a staging instance at least once before you depend on this in production.

High Availability and Scaling Considerations

Single-instance n8n is fine for most teams. When it isn’t, n8n supports a queue mode using Redis and multiple worker instances. Here’s when to consider it:

  • You’re running more than ~100 concurrent workflow executions
  • Long-running workflows are blocking time-sensitive ones
  • You need zero-downtime deployments

Queue mode separates the main process (editor UI + webhook handling) from worker processes (execution). Add to your environment:

- EXECUTIONS_MODE=queue
- QUEUE_BULL_REDIS_HOST=redis
- QUEUE_BULL_REDIS_PORT=6379

Then run multiple worker containers with command: worker. Each worker pulls jobs from the Redis queue independently. This is solid architecture — I’ve run this at about 500 concurrent executions on 4 workers without issues.

The real gotcha with queue mode: your workflows need to be stateless. Any workflow that writes to a local file between steps will break when different steps run on different workers. Use external storage (S3, a database) for any state that needs to persist mid-workflow.
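A sketch of the extra services you'd merge into the compose file from earlier — the Redis settings are the ones named above, but the worker wiring is my assumption of a typical layout, not a canonical config:

```yaml
  redis:
    image: redis:7-alpine
    restart: always

  n8n-worker:
    image: n8nio/n8n:latest  # pin to the same version as the main instance
    restart: always
    command: worker
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - N8N_ENCRYPTION_KEY=${ENCRYPTION_KEY}  # must match the main instance
    depends_on:
      - redis
      - postgres
```

Scale workers with `docker compose up -d --scale n8n-worker=4`. The shared encryption key matters: workers decrypt credentials independently, so a mismatched key fails silently until a workflow needs a credential.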

Monitoring Your n8n Instance

n8n exposes a health endpoint at /healthz. Point your uptime monitor (UptimeRobot, Better Uptime, or your own Prometheus stack) at https://n8n.yourdomain.com/healthz. It returns 200 when healthy.

For deeper observability, n8n emits execution data that you can ship to your logging stack via the built-in logging config:

- N8N_LOG_LEVEL=info
- N8N_LOG_OUTPUT=file
- N8N_LOG_FILE_LOCATION=/home/node/.n8n/logs/n8n.log

Watch for the error rate on POST /webhook/* paths — spikes there usually mean a third-party service is sending malformed payloads or your workflow has a bug that’s silently failing.
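A quick-and-dirty way to spot-check that rate from the nginx access log — purely illustrative, assuming the default combined log format, where the request path is field 7 and the status code field 9:

```shell
# Count webhook requests and non-2xx responses in the nginx access log
awk '$7 ~ /^\/webhook\// { total++; if ($9 !~ /^2/) errors++ }
     END { printf "webhook requests: %d, errors: %d\n", total, errors }' \
  /var/log/nginx/access.log
```

For anything beyond eyeballing, ship the log to your metrics stack and alert on the ratio instead.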

Who Should Self-Host and Who Shouldn’t

The n8n self-hosted setup path is the right call if you’re a technical team handling sensitive data, building LLM-integrated workflows that call internal APIs, or operating at scale where cloud pricing becomes painful. The ops overhead is real but manageable — maybe 2-3 hours/month once you’re stable.

If you’re a solo founder experimenting or a non-technical team that just wants automations to work, start with the cloud version. The productivity cost of owning your own infrastructure isn’t worth it until you hit a concrete reason: compliance requirement, data sensitivity, cost ceiling, or a need for custom nodes that the cloud tier restricts.

For teams building AI agent workflows that call Claude, GPT-4, or internal LLM endpoints — self-hosting is almost always the right call. Those workflows routinely handle API keys, sensitive prompts, and intermediate data you don’t want sitting in someone else’s execution logs.

Editorial note: API pricing, model capabilities, and tool features change frequently — always verify current details on the vendor’s website before building in production. Code examples are tested at time of writing; pin your dependency versions to avoid breaking changes. Some links in this article may be affiliate links — we may earn a commission if you sign up, at no extra cost to you.
