This repository provides a complete self-hosted workflow automation and AI environment featuring:
- n8n – visual workflow automation
- Ollama – local LLM model server
- Open-WebUI – browser UI for chatting with Ollama models
- PostgreSQL – persistent database backend for n8n
- Nginx – TLS-terminating reverse proxy for all services
All services run on Ubuntu using Docker Compose, with optional GPU support for Ollama.
Repository layout:

```
.
├── docker-compose.yml    # All services: n8n, postgres, ollama, open-webui, nginx
├── nginx/
│   └── default.conf      # Nginx reverse proxy config
├── certs/
│   ├── fullchain.pem     # SSL certificate
│   └── privkey.pem       # SSL private key
├── .env                  # Environment variables
├── db_data/              # Postgres volume
├── n8n_data/             # n8n persistent data
├── webui_data/           # Open-WebUI data
└── ollama/               # Ollama model and config cache
```
Prerequisites:

- Ubuntu 22.04+ host
- Docker Engine with the Docker Compose plugin
- IPv4 connectivity on your public interface (IPv6 may be used for management only)
- A valid TLS certificate and key under `certs/` (or a self-signed pair for testing)
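For testing, a self-signed pair can be generated with `openssl`; the output paths below match the `certs/` layout above, and `<your-public-hostname>` is a placeholder for your actual name:

```bash
# Create a self-signed certificate/key pair (testing only, valid 365 days).
mkdir -p certs
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout certs/privkey.pem \
  -out certs/fullchain.pem \
  -subj "/CN=<your-public-hostname>"
```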
Clone the repository:

```bash
git clone https://github.com/<yourname>/n8n-ollama-stack.git
cd n8n-ollama-stack
```

Copy `.env.template` to `.env` and edit it. Note that Docker Compose reads `.env` literally, so shell substitutions such as `$(openssl rand -hex 32)` are not expanded; generate the key first and paste in the value:

```
POSTGRES_USER=n8n
POSTGRES_PASSWORD=n8n
POSTGRES_DB=n8n

# Random secret for n8n encryption (generate with: openssl rand -hex 32)
N8N_ENCRYPTION_KEY=<paste-generated-hex-value>

# Timezone and hostname
TZ=America/New_York
N8N_HOST=localhost
WEBHOOK_URL=https://<your-public-hostname>:8443/

MODEL_NAME=llama3
```

Then start the stack:

```bash
docker compose up -d
```

The first start will pull images and initialize Postgres and Ollama.
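To confirm everything came up, you can check container status and pre-pull the model named in `MODEL_NAME` (the `llama3` tag is the value from the example `.env`; the service name `ollama` matches `docker-compose.yml`):

```bash
# Check that all five containers are up
docker compose ps

# Pre-pull the model inside the ollama container so the first
# chat request does not block on a multi-GB download
docker compose exec ollama ollama pull llama3
```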
The stack exposes the following services:

| Service | Port | Description / URL |
|---|---|---|
| Nginx reverse proxy | 443 | Entry point for all HTTPS traffic |
| n8n | 5678 | Via Nginx at `https://<host>/`, or directly at `http://<host>:5678/` |
| Open-WebUI | 8080 | Via Nginx at `https://<host>/webui/`, or directly at `http://<host>:8080/` |
| Ollama API | 11434 | Via Nginx at `https://<host>/ollama/` |
| PostgreSQL | 5432 | Local only, used by n8n |
Note: Nginx proxies internally to `127.0.0.1` (IPv4). Ensure your host networking prefers IPv4 (see below).
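One common way to do this on Ubuntu is to add (or uncomment) the IPv4 precedence rule in `/etc/gai.conf`; a minimal sketch of that host-level change:

```bash
# Make getaddrinfo() prefer IPv4 results for dual-stack names
# (e.g. so "localhost" resolves to 127.0.0.1 before ::1).
echo 'precedence ::ffff:0:0/96  100' | sudo tee -a /etc/gai.conf
```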
If your host has an NVIDIA GPU with drivers (and the NVIDIA Container Toolkit) installed:

```bash
docker compose up -d ollama
```

The `ollama` service already includes:

```yaml
deploy:
  resources:
    reservations:
      devices:
        - capabilities: [gpu]
```

so Docker will pass GPU devices through automatically when they are available.
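To sanity-check GPU passthrough before starting Ollama, the standard NVIDIA smoke test is to run `nvidia-smi` in a throwaway CUDA container (any recent `nvidia/cuda` base tag works):

```bash
# If this prints the GPU table, the ollama service will receive
# the same devices via the deploy.resources.reservations block above.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```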
To follow service logs:

```bash
docker compose logs -f nginx
docker compose logs -f n8n
docker compose logs -f open-webui
docker compose logs -f ollama
```

To test upstream reachability:
```bash
curl -vk https://<your-host>/ollama/api/tags
curl -vk https://<your-host>/
curl -vk https://<your-host>/webui/
```

If `502 Bad Gateway` appears, verify that:
- Nginx `proxy_pass` uses `127.0.0.1` (not `localhost`)
- Services are listening on `0.0.0.0` or `127.0.0.1` (see the check below)
- IPv4 preference (`/etc/gai.conf`) is applied
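A quick way to verify the listening addresses is `ss`; the ports are the ones from the services table (adjust if you remapped them):

```bash
# List listening TCP sockets for the stack's ports:
# -t TCP, -l listening, -n numeric, -p owning process (needs sudo)
sudo ss -tlnp | grep -E ':(443|5678|8080|11434|5432)\b'
```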
To stop the stack:

```bash
docker compose down
```
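Before wiping volumes you may want a database dump first; a minimal sketch, assuming the service is named `postgres` (as in `docker-compose.yml`) and the credentials from the example `.env`:

```bash
# Dump the n8n database to a local file before destroying volumes.
docker compose exec -T postgres pg_dump -U n8n n8n > n8n_backup.sql
```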
To remove volumes as well (this wipes Postgres and n8n data):

```bash
docker compose down -v
```

MIT License © 2025 Komal Thareja