Running powerful Large Language Models (LLMs) locally offers significant advantages in terms of privacy, cost, and offline capability. Ollama provides the engine to run open-source LLMs on your hardware, while Open WebUI offers a user-friendly, ChatGPT-like interface to interact with them. This guide details how to install both on your desktop and configure them so that the Ollama API and the Open WebUI interface are accessible only to other computers and devices connected to your local network (LAN), keeping them secure from the public internet.
Ollama is the foundation that allows you to download and run various open-source LLMs directly on your computer.
First, download and install Ollama for your operating system (macOS, Linux, Windows) from the official source. On Linux, the official one-line install script is:
curl -fsSL https://ollama.com/install.sh | sh
Once installed, the Ollama service should start automatically and run in the background. By default, it listens only on `127.0.0.1` (localhost) on port `11434`.
To ensure Ollama is working correctly, open your terminal or command prompt and pull a model, such as Llama 3:
ollama pull llama3
After the download completes, you can run it locally using:
ollama run llama3
This confirms the basic Ollama installation is functional before proceeding with network configuration.
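You can also confirm the local API itself responds before changing any network settings. A minimal check with `curl`, assuming the default localhost binding and the `llama3` model pulled above:

```bash
# List the models Ollama has downloaded; llama3 should appear
curl http://127.0.0.1:11434/api/tags

# Send one non-streaming prompt through the API
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Reply with one short sentence.",
  "stream": false
}'
```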
To allow Open WebUI (and potentially other applications on your LAN) to communicate with the Ollama API running on your desktop, you need to change its network binding from `localhost` to an address accessible on your local network. Setting it to listen on `0.0.0.0` is the most common approach: Ollama will then accept connections on all available network interfaces, including your LAN IP address.

The recommended method is to set the `OLLAMA_HOST` environment variable.
You can set this temporarily in your current terminal session:
export OLLAMA_HOST=0.0.0.0
For a permanent setting, add this line to your shell profile file (e.g., `~/.bashrc` or `~/.zshrc`) and then run `source ~/.bashrc` or restart your terminal. If Ollama runs as a systemd service (common on Linux), edit the service configuration instead:
sudo systemctl edit ollama.service
Add the following under the `[Service]` section:

```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

Save the override, then reload systemd and restart Ollama:
sudo systemctl daemon-reload
sudo systemctl restart ollama
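To confirm the override took effect, you can ask systemd what environment it passes to the service; a quick check, assuming the unit is named `ollama` as above:

```bash
# Should print Environment=OLLAMA_HOST=0.0.0.0 (possibly among other variables)
systemctl show ollama --property=Environment
```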
On Windows, open Command Prompt as Administrator and run:
setx OLLAMA_HOST 0.0.0.0 /M
Alternatively, you can set it via the System Properties > Environment Variables interface. You'll need to restart the Ollama application or potentially your computer for the change to take effect.
After configuration, restart the Ollama service or application. It should now be listening on `0.0.0.0:11434`, making it accessible via your desktop's local network IP address (e.g., `http://192.168.1.100:11434`).
Open WebUI provides a web-based interface to interact with your locally running Ollama models. Using Docker and Docker Compose is highly recommended as it simplifies installation, dependency management, and configuration.
Ensure you have Docker and Docker Compose installed. You can get them from the official Docker website.
This method sets up both Ollama and Open WebUI in containers, simplifying network configuration.
Create a directory for the project (e.g., `my-local-ai`) and, inside it, create a file named `docker-compose.yml`. Add the following content to `docker-compose.yml`:
```yaml
version: '3.8'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama_data:/root/.ollama
    # Expose the Ollama API to the host machine's network.
    # OLLAMA_HOST makes Ollama listen on all interfaces inside the container network;
    # the ports mapping exposes it on the host machine's network interfaces.
    ports:
      - "11434:11434"
    environment:
      - OLLAMA_HOST=0.0.0.0
    # If you have an NVIDIA GPU and want acceleration:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    depends_on:
      - ollama
    ports:
      # Exposes Open WebUI on port 3000 of your host machine, accessible from your LAN.
      # Format: "HOST_PORT:CONTAINER_PORT" -> "3000:8080"
      # To restrict access to the host machine only (not the LAN), use "127.0.0.1:3000:8080"
      - "3000:8080"
    environment:
      # Points the WebUI to the Ollama service within the Docker network
      - OLLAMA_API_BASE_URL=http://ollama:11434
      # Optional: set WEBUI_HOST to 0.0.0.0 if needed, though the port mapping usually handles LAN exposure
      # - WEBUI_HOST=0.0.0.0
    volumes:
      - webui_data:/app/backend/data
    restart: unless-stopped

volumes:
  ollama_data:
  webui_data:
```
Open a terminal in the directory containing `docker-compose.yml` and run:
docker-compose up -d
This command downloads the necessary images and starts the Ollama and Open WebUI containers in the background.
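A few follow-up commands help confirm the stack is healthy. Note that the containerized Ollama has its own model store (the `ollama_data` volume), so models are pulled inside the container; a quick sketch using the service names from the compose file above:

```bash
# Confirm both containers are up
docker-compose ps

# Pull a model into the containerized Ollama (separate from any host install)
docker-compose exec ollama ollama pull llama3

# Tail Open WebUI's logs if the interface does not come up
docker-compose logs -f open-webui
```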
With this setup:

- The Ollama API runs inside its container on port `11434`, which is mapped to port `11434` on your host machine and accessible across your LAN because of the `OLLAMA_HOST=0.0.0.0` setting and the port mapping.
- Open WebUI is served on port `3000` of your host machine via your LAN IP address (e.g., `http://192.168.1.100:3000`). It connects internally to the Ollama container using the service name (`http://ollama:11434`).

If you prefer not to use Docker, you can install Open WebUI directly:
pip install open-webui
open-webui serve --host 0.0.0.0 --port 8080
(You can choose a different port if 8080 is occupied).
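If you go this route, it is easiest to keep Open WebUI in its own virtual environment; note that the pip package targets a specific Python version (3.11 at the time of writing, so treat that as something to verify). A sketch of the manual install, assuming `python3.11` is available:

```bash
# Create and activate an isolated environment for Open WebUI
python3.11 -m venv ~/open-webui-env
source ~/open-webui-env/bin/activate

# Install and start Open WebUI, listening on all interfaces on port 8080
pip install open-webui
open-webui serve --host 0.0.0.0 --port 8080
```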
- Access the WebUI from your LAN at `http://<your-desktop-LAN-IP>:8080`.
- In Open WebUI's settings, set the Ollama API base URL to `http://<your-desktop-LAN-IP>:11434`.

Remember that your manually installed Ollama must also be configured to listen on `0.0.0.0` (as per Step 2).
Your desktop's operating system likely has a firewall enabled that blocks incoming connections by default. You need to create rules to allow other devices on your local network to connect to the ports used by Ollama and Open WebUI.
The ports you need to open for *inbound* connections from your local network are:
- `11434` (TCP): for the Ollama API.
- `3000` (TCP): for Open WebUI (if using the Docker Compose example above), or the port you chose (e.g., `8080` if run manually).

On Windows (Windows Defender Firewall): open "Windows Defender Firewall with Advanced Security", create a new Inbound Rule of type Port, select TCP, and enter the specific local ports `11434, 3000` (or your WebUI port). Click Next, choose "Allow the connection", and apply the rule to the Private profile only.

On Linux (with `ufw`, common on Ubuntu):
sudo ufw allow from 192.168.1.0/24 to any port 11434 proto tcp
sudo ufw allow from 192.168.1.0/24 to any port 3000 proto tcp
Replace `192.168.1.0/24` with your actual local network's IP range. If you are unsure of the range, you can allow the ports from any source instead, which is simpler but less restrictive than scoping the rules to your subnet:
sudo ufw allow 11434/tcp
sudo ufw allow 3000/tcp
Ensure `ufw` is enabled: `sudo ufw enable`.
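Afterwards you can double-check both the firewall rules and the listening sockets; the port numbers below assume the defaults used in this guide:

```bash
# List the active ufw rules; the two allow rules should be present
sudo ufw status verbose

# Confirm Ollama and Open WebUI are listening on all interfaces
sudo ss -tlnp | grep -E ':(11434|3000)'
```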
Important: Only allow connections from your private/local network. Do NOT create rules that allow connections from "Public" networks or "Any" IP address if the option distinguishes between local and external sources, as this could expose your services to the internet.
Conceptually, devices on your local network connect to Open WebUI (port 3000) on your desktop, and Open WebUI in turn talks to Ollama (port 11434) running on the same machine.
Once configured, you can access Open WebUI and the Ollama API from any device connected to the same local network.
First, find your desktop's local IP address:

- Windows: run `ipconfig` and look for the "IPv4 Address" under your active network adapter (Wi-Fi or Ethernet). It usually looks like `192.168.x.x` or `10.x.x.x`.
- macOS: run `ipconfig getifaddr en0` or `ipconfig getifaddr en1` (which interface corresponds to Wi-Fi or Ethernet varies; `en0` is typically Wi-Fi on modern Macs).
- Linux: run `ip addr show` or `hostname -I` and look for the IP address associated with your main network interface (e.g., `eth0`, `wlan0`).

On another device (laptop, tablet, phone) connected to the same Wi-Fi or LAN, open a web browser and navigate to:
`http://<Your_Desktop_IP>:3000`

(Replace `<Your_Desktop_IP>` with the actual IP address you found, e.g., `http://192.168.1.100:3000`.) You should see the Open WebUI login or main interface.
You can also test the Ollama API directly from another device using tools like `curl` or a browser:

`http://<Your_Desktop_IP>:11434`

Or, to list available models:

`http://<Your_Desktop_IP>:11434/api/tags`

This confirms the API is reachable over the network.
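From a terminal on another device, the same checks with `curl` look like this (replace the example IP with your desktop's address; Ollama's root endpoint returns the plain-text message "Ollama is running"):

```bash
# Basic reachability check; expect "Ollama is running"
curl http://192.168.1.100:11434

# List the models available on the remote Ollama instance
curl http://192.168.1.100:11434/api/tags
```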
Choosing between Docker and manual installation, and understanding the implications of the network configuration, involves trade-offs.
The Docker Compose method generally offers easier setup, better portability, and handles dependencies well, potentially using slightly more resources due to containerization. Manual setup provides more fine-grained control but requires managing dependencies and configurations directly.
This table summarizes the crucial configuration parameters for enabling LAN access while maintaining local-only security.
| Component | Parameter | Recommended Value for LAN Access | Purpose |
|---|---|---|---|
| Ollama | `OLLAMA_HOST` (environment variable) | `0.0.0.0` | Makes Ollama listen on all network interfaces (including the LAN IP). |
| Ollama | Default port | `11434` (TCP) | Port the Ollama API listens on. Needs firewall access from the LAN. |
| Open WebUI (Docker) | Port mapping (in `docker-compose.yml`) | `"3000:8080"` | Maps container port 8080 to host port 3000, making the WebUI accessible on the host's LAN IP at port 3000. |
| Open WebUI (Docker) | `OLLAMA_API_BASE_URL` (environment variable) | `http://ollama:11434` | Internal Docker network address for the WebUI to reach the Ollama service. |
| Open WebUI (Manual) | Command-line argument `--host` | `0.0.0.0` | Makes the manually run WebUI listen on all network interfaces. |
| Open WebUI (Manual) | Command-line argument `--port` | `8080` (or custom) | Port the manual WebUI listens on. Needs firewall access from the LAN. |
| Open WebUI (Manual) | Ollama API base URL (in WebUI settings) | `http://<Your_Desktop_IP>:11434` | Tells the manual WebUI where to find the Ollama API on the network. |
| Firewall | Inbound rules | Allow TCP ports 11434 and 3000 (or your WebUI port) | Permit connections *only* from the local network (Private profile / specific subnet). Block public access. |
Setting up Ollama and Open WebUI locally opens possibilities for various AI-driven tasks within your private network, such as internal chatbots, document analysis tools, or more advanced use cases like Retrieval-Augmented Generation (RAG).
For a visual guide, video walkthroughs of installing Ollama and Open WebUI (for example, getting models like Llama 3.1 running locally) provide a helpful overview of the initial installation, though they typically do not cover the LAN exposure steps detailed here.
Finally, a few troubleshooting notes:

- If other devices cannot connect, check your firewall first: `0.0.0.0` binds to all interfaces, but the firewall is your gatekeeper.
- Verify that Ollama is actually listening on `0.0.0.0:11434` (use `netstat -tulnp | grep 11434` on Linux, `lsof -i :11434` on macOS, or `netstat -ano | findstr "11434"` on Windows).
- If Open WebUI cannot reach Ollama, check its `OLLAMA_API_BASE_URL` setting. If using Docker, ensure the containers are on the same Docker network.

What does `0.0.0.0` mean, and is it secure? It means the service accepts connections on every network interface of the machine rather than only on `localhost`. On a typical home or office setup this exposes it to your LAN only, and it stays private as long as your firewall blocks the ports for public networks and your router does not forward them to the internet.
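If binding to every interface feels broader than you need, `OLLAMA_HOST` also accepts a specific address, so you can tie Ollama to your LAN IP only; a sketch, assuming `192.168.1.100` is your desktop's (ideally static) LAN address, and worth verifying on your setup:

```bash
# Bind the Ollama API to one interface instead of all of them
export OLLAMA_HOST=192.168.1.100:11434
```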