Image by Author

# Introduction

Building your own local AI hub gives you the freedom to automate tasks, process private data, and create custom assistants, all without depending on the cloud or having to deal with monthly fees. In this article, I will walk you through building a self-hosted AI workflow hub on a home server, giving you complete control, greater privacy, and powerful automation.
We will combine tools like Docker for packaging software, Ollama to run local machine learning models, n8n to create visual automations, and Portainer for easy management. This setup is perfect for a moderately powerful x86-64 system like a mini-PC or an old desktop with at least 8GB of RAM, which can capably handle several services at once.

# Why Build a Local AI Hub?

When you self-host your tools, you move from being a user of services to an owner of infrastructure, and that is powerful. A local hub is private (your data never leaves your network), cost-effective (no application programming interface (API) fees), and fully customizable.
The core of this hub is a small set of tools that work together:
- Ollama serves as your private, on-device AI brain, running models for text generation and analysis
- n8n acts as the nervous system, connecting Ollama to other apps (like calendars, email, or files) to build automated workflows
- Docker is the foundation, packaging each tool into separate, easy-to-manage containers

// Core Components of Your Self-Hosted AI Hub

| Tool | Primary Role | Key Benefit for Your Hub |
|---|---|---|
| Docker/Portainer | Containerization & management | Isolates apps, simplifies deployment, and provides a visual management dashboard |
| Ollama | Local large language model (LLM) server | Runs AI models locally for privacy; provides an API for other tools to use |
| n8n | Workflow automation platform | Visually connects Ollama to other services (APIs, databases, files) to create powerful automations |
| Nginx Proxy Manager | Secure access & routing | Provides a secure web gateway to your services with easy SSL certificate setup |

# Preparing Your Server Foundation

First, ensure your server is ready. We recommend a clean installation of Ubuntu Server LTS or a similar Linux distribution. Once installed, connect to your server via secure shell (SSH). The first and most critical step is installing Docker, which will run all our subsequent tools.

// Installing Docker and Docker Compose
Run the following commands in your terminal to install Docker and Docker Compose. Docker Compose is a tool that lets you define and manage multi-container applications with a simple YAML file. These commands add Docker's official repository using the signed-keyring method, which replaces the deprecated apt-key approach.
sudo apt update && sudo apt upgrade -y
sudo apt install ca-certificates curl -y
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y

// Verifying and Setting Permissions
Verify the installation and add your user to the Docker group to run commands without sudo:
sudo docker version
sudo usermod -aG docker $USER

The first command prints version details for both the Docker client and server.

You will need to log out and log back in (or run newgrp docker in your current shell) for the group change to take effect.
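
A quick sanity check after logging back in is to run Docker's hello-world image and confirm the Compose plugin is available; both commands should now work without sudo:
docker run hello-world
docker compose version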

// Managing with Portainer
Instead of using only the command line, we will deploy Portainer, a web-based graphical user interface (GUI) for managing Docker. Create a directory for it and a docker-compose.yml file with the following commands.
mkdir -p ~/portainer && cd ~/portainer
nano docker-compose.yml

Paste the following configuration into the file. This tells Docker to download the Portainer image, restart it automatically, and expose its web interface on port 9000.
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data

volumes:
  portainer_data:

Save the file (Ctrl+X, then Y, then Enter). Now, deploy Portainer:
docker compose up -d

Docker will pull the Portainer image and start the container in the background.

Now navigate to http://YOUR_SERVER_IP:9000 in your browser. For me, it is http://localhost:9000.

If Portainer is no longer running (it shuts itself down when the initial admin account is not created shortly after first launch), start the container again with:
sudo docker start portainer

Create an admin account, and you will see the Portainer dashboard.

This is your mission control for all other containers. You can start, stop, view logs, and manage every other service from here.

# Installing Ollama: Your Local AI Engine

Ollama is a tool designed to easily run open-source large language models (LLMs) like Llama 3.2 or Mistral locally. It provides a simple API that n8n and other apps can use.

// Deploying Ollama with Docker
While Ollama can be installed directly, using Docker ensures consistency. Create a new directory and a docker-compose.yml file for it with the following commands.
mkdir -p ~/ollama && cd ~/ollama
nano docker-compose.yml

Use this configuration. The volumes line is important because it stores your downloaded machine learning models persistently, so you don’t lose them if the container restarts.
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama

volumes:
  ollama_data:

Deploy it:
docker compose up -d
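
Once the container is up, you can confirm the API is listening; Ollama answers a plain GET on its root with a short status message:
curl http://localhost:11434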

// Pulling and Running Your First Model
Once the container is running, you can pull a model. Let’s start with a capable but efficient model like Llama 3.2.
This command executes ollama pull llama3.2 inside the running container:
docker exec -it ollama ollama pull llama3.2
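
You can confirm the download with the list subcommand, which shows every model stored in the ollama_data volume:
docker exec -it ollama ollama list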

Task Demonstration: Querying Ollama
You can now interact with your local AI directly. The following command sends a prompt to the model running inside the container:
docker exec -it ollama ollama run llama3.2 "Write a short haiku about technology."

You should see a generated poem in your terminal. More importantly, Ollama’s API is now available at http://YOUR_SERVER_IP:11434 for n8n to use.
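
Because this is a plain HTTP API, you can also query it without the Ollama CLI, which is exactly what n8n will do later. A minimal example using curl, with "stream": false so Ollama returns one JSON object instead of a token stream:
curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": false}'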

# Integrating n8n for Intelligent Automation

n8n is a visual workflow automation tool. You can drag and drop nodes to create sequences; for example, “When I save a document, summarize it with Ollama, then send the summary to my notes app.”

// Deploying n8n with Docker
Create a directory for n8n. We will use a Compose file that includes a database for n8n to save your workflows and execution data.
mkdir -p ~/n8n && cd ~/n8n
nano docker-compose.yml

Now paste the following into the YAML file:
services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - N8N_PROTOCOL=http
      - WEBHOOK_URL=http://YOUR_SERVER_IP:5678/
      - N8N_ENCRYPTION_KEY=your_secure_encryption_key_here
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=db
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=your_secure_db_password
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - db

  db:
    image: postgres:17-alpine
    container_name: n8n_db
    restart: unless-stopped
    environment:
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=your_secure_db_password
      - POSTGRES_DB=n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  n8n_data:
  postgres_data:

Replace YOUR_SERVER_IP and the placeholder passwords. Deploy with docker compose up -d. Access n8n at http://YOUR_SERVER_IP:5678.
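
The encryption key protects the credentials n8n stores, so it should be long and random. One convenient way to generate values for N8N_ENCRYPTION_KEY and the database password is OpenSSL:
openssl rand -hex 32
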
Task Demonstration: Building Your First AI Workflow
Let’s create a simple workflow where n8n uses Ollama to act as a creative writing assistant.
- In the n8n editor, add a “Schedule Trigger” node and set it to run manually for testing
- Add an “HTTP Request” node and configure it to call your Ollama API:
  - Method: POST
  - URL: http://ollama:11434/api/generate (this name resolves only if n8n and Ollama share a Docker network; otherwise use http://YOUR_SERVER_IP:11434/api/generate)
  - Set Body Content Type to JSON
  - In the JSON body, enter: {"model": "llama3.2", "prompt": "Generate three ideas for a sci-fi short story.", "stream": false}
- Add a “Set” node to extract just the text from Ollama’s JSON response by setting the value to {{ $json["response"] }}
- Add a “Code” node with a simple line like return [{"json": {"story_ideas": $input.item.json}}]; to format the data
- Finally, connect an “Email Send” node (configured with your email service) or a “Save to File” node to output the results
Click “Execute Workflow.” n8n will send the prompt to your local Ollama container, receive the ideas, and process them. You’ve just built a private, automated AI assistant.

# Securing Your Hub with Nginx Proxy Manager

You now have services on different ports (Portainer: 9000, n8n: 5678). Nginx Proxy Manager (NPM) lets you access them via neat subdomains (like portainer.home.net) with free secure sockets layer (SSL) encryption from Let’s Encrypt.

// Deploying Nginx Proxy Manager
Create a final directory for NPM.
mkdir -p ~/npm && cd ~/npm
nano docker-compose.yml

Paste the following code into your YAML file:
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      - '80:80'
      - '443:443'
      - '81:81'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

Deploy with docker compose up -d.
The admin panel is at http://YOUR_SERVER_IP:81. Log in with the default credentials (admin@example.com / changeme) and change them immediately.

Task Demonstration: Securing n8n Access
- In your home router, forward ports 80 and 443 to your server’s internal internet protocol (IP) address. This is the only port forwarding required
- In NPM’s admin panel (your-server-ip:81), go to Hosts -> Proxy Hosts -> Add Proxy Host
- For n8n, fill in the details:
  - Domain: n8n.yourdomain.com (or a subdomain you own pointing to your home IP)
  - Scheme: http
  - Forward Hostname / IP: n8n (Docker’s internal network resolves the container name, provided NPM and n8n share a network; see the note below)
  - Forward Port: 5678
- Click SSL and request a Let’s Encrypt certificate, forcing SSL
You can now securely access n8n at https://n8n.yourdomain.com. Repeat for Portainer (portainer.yourdomain.com forwarding to portainer:9000).
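
One caveat: each Compose file in this guide creates its own default Docker network, so the NPM container cannot resolve names like n8n or portainer out of the box. A minimal fix, assuming the container names used above and a shared network we will call hub, is to create the network and attach the relevant containers to it:
docker network create hub
docker network connect hub nginx-proxy-manager
docker network connect hub n8n
docker network connect hub ollama
docker network connect hub portainer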

# Conclusion

You now have a fully functioning, private AI automation hub. Your next steps could be:
- Expanding Ollama: Experiment with different models like Mistral for speed or codellama for programming tasks
- Advanced n8n Workflows: Connect your hub to external APIs (Google Calendar, Telegram, RSS feeds) or internal services (like a local file server)
- Monitoring: Add a tool like Uptime Kuma (also deployable via Docker; a minimal sketch follows below) to monitor the status of all your services
This setup turns your modest hardware into a powerful, private digital brain. You control the software, own the data, and pay no ongoing fees. The skills you’ve learned (managing containers, orchestrating services, and automating with AI) are the foundation of modern, independent tech infrastructure.
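
As one example of extending the hub, here is a minimal Compose sketch for Uptime Kuma, assuming the louislam/uptime-kuma image and its default web port 3001; it follows the same pattern as the services above:
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - uptime_kuma_data:/app/data

volumes:
  uptime_kuma_data:
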
Shittu Olumide is a software engineer and technical writer passionate about leveraging cutting-edge technologies to craft compelling narratives, with a keen eye for detail and a knack for simplifying complex concepts. You can also find Shittu on Twitter.
