LonelyNathan

Run n8n with Ollama locally.
Your private AI automation workflow builder.

All downloads | GitHub | Requires Docker
LonelyNathan screenshot showing workflow editor

Why Use This?

Everything you need to run AI-powered n8n workflows locally.

No cloud, no subscriptions

Run n8n and local AI models entirely on your machine, free of charge.

Privacy by default

Your workflows and data never leave your computer.

One-click setup

No manual Docker commands, no config files to edit — just download and run.

Local LLMs included

Ships with Ollama pre-configured, so AI-powered workflows work out of the box.

Use your own models

Pull any model from Ollama's library, or connect to LM Studio, llama.cpp, or a host-installed Ollama instance.

Full community edition

Runs the official n8n Docker image with no integrations removed or disabled.

Telemetry off by default

n8n diagnostics and version notifications are disabled to match the local-first philosophy.

AI Models

Local LLMs ready to power your workflows.

On first launch, the llama3.2:3b model (~2 GB) is downloaded automatically. Install additional models via the Tools > Models menu, or connect to a host-installed LM Studio, llama.cpp, or Ollama instance.
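For readers who prefer the terminal, here is a rough sketch of pulling and verifying an extra model, assuming the standard Ollama CLI and its default API port (11434); the model name is just an illustrative example:

```shell
# Pull an additional model from Ollama's library
# (run against the bundled Ollama or a host-installed instance).
ollama pull llama3.2:1b   # example model; pick any from ollama.com/library

# List installed models to confirm the download completed.
ollama list

# Or query Ollama's HTTP API directly (default port 11434):
curl http://localhost:11434/api/tags
```

Any model visible in `ollama list` can then be selected in an n8n AI node that points at the local Ollama endpoint.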