Unlock Your Local AI: Securely Access Ollama & Open WebUI Across Your Network

Set up Ollama and Open WebUI on your desktop and safely share them with other devices on your home or office network.


Running powerful Large Language Models (LLMs) locally offers significant advantages in terms of privacy, cost, and offline capability. Ollama provides the engine to run open-source LLMs on your hardware, while Open WebUI offers a user-friendly, ChatGPT-like interface to interact with them. This guide details how to install both on your desktop and configure them so that the Ollama API and the Open WebUI interface are accessible only to other computers and devices connected to your local network (LAN), keeping them secure from the public internet.

Key Steps for Local Network Access

Highlights of the Setup Process

  • Install Core Tools: Set up Ollama to run LLMs locally and Open WebUI (preferably via Docker) for the user interface.
  • Configure Network Binding: Adjust settings for both Ollama and Open WebUI so they listen for connections from your local network IP addresses, not just the local machine (localhost).
  • Adjust Firewall Rules: Modify your desktop's firewall settings to allow incoming connections from your local network devices to the specific ports used by Ollama and Open WebUI, while blocking external access.

Step 1: Installing Ollama

Getting the Local LLM Engine Running

Ollama is the foundation that allows you to download and run various open-source LLMs directly on your computer.

Download and Install

First, download and install Ollama for your operating system (macOS, Linux, Windows) from the official source.

  • Visit the Ollama Official Website.
  • Follow the installation instructions provided. For Linux or macOS, a common command is:
    curl -fsSL https://ollama.com/install.sh | sh
  • On Windows, download and run the installer executable.

Once installed, the Ollama service should start automatically and run in the background. By default, it listens only on 127.0.0.1 (localhost) on port 11434.
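
A quick way to verify the service is up is to hit the API root from the same machine; by default it replies with a short status message:

curl http://127.0.0.1:11434
# Expected response: "Ollama is running"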

Pull a Test Model

To ensure Ollama is working correctly, open your terminal or command prompt and pull a model, such as Llama 3:

ollama pull llama3

After the download completes, you can run it locally using:

ollama run llama3

This confirms the basic Ollama installation is functional before proceeding with network configuration.
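
If you want to exercise the REST API itself (rather than the interactive prompt), a minimal local request looks like the following sketch; the prompt is just an example:

# Ask the local API for a short, non-streamed completion
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Say hello in one short sentence.",
  "stream": false
}'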


Step 2: Configuring Ollama for Local Network (LAN) Access

Making the Ollama API Reachable Within Your Network

To allow Open WebUI (and potentially other applications on your LAN) to communicate with the Ollama API running on your desktop, you need to change its network binding from localhost to an address reachable from your local network. The most common approach is to have it listen on 0.0.0.0, which means it accepts connections on all available network interfaces, including your LAN IP address.

Using the OLLAMA_HOST Environment Variable

The recommended method is to set the OLLAMA_HOST environment variable.

  • Linux/macOS:

    You can set this temporarily in your current terminal session:

    export OLLAMA_HOST=0.0.0.0

    For a permanent setting, add this line to your shell profile file (e.g., ~/.bashrc, ~/.zshrc) and then run source ~/.bashrc or restart your terminal. If Ollama runs as a systemd service (common on Linux), you'll need to edit the service file:

    1. Edit the service unit: sudo systemctl edit ollama.service
    2. Add the following lines under the [Service] section:
      [Service]
      Environment="OLLAMA_HOST=0.0.0.0"
    3. Save the file, then reload and restart the service:
      sudo systemctl daemon-reload
      sudo systemctl restart ollama
  • Windows:

    Open Command Prompt as Administrator and run:

    setx OLLAMA_HOST 0.0.0.0 /M

    Alternatively, you can set it via the System Properties > Environment Variables interface. You'll need to restart the Ollama application or potentially your computer for the change to take effect.

After configuration, restart the Ollama service or application. It should now be listening on 0.0.0.0:11434, making it accessible via your desktop's local network IP address (e.g., http://192.168.1.100:11434).
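
To confirm the new binding took effect, check which address the port is bound to on the desktop, then probe the API via the LAN address (a quick sketch; 192.168.1.100 is a placeholder for your own IP):

# The Ollama port should now show 0.0.0.0 (or *) instead of 127.0.0.1
ss -tlnp | grep 11434          # Linux
netstat -an | grep 11434       # macOS
netstat -ano | findstr 11434   # Windows (Command Prompt)

# From the desktop itself (and, once Step 4 is done, from any LAN device)
curl http://192.168.1.100:11434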


Step 3: Installing and Configuring Open WebUI

Setting Up the User Interface

Open WebUI provides a web-based interface to interact with your locally running Ollama models. Using Docker and Docker Compose is highly recommended as it simplifies installation, dependency management, and configuration.

Prerequisites

Ensure you have Docker and Docker Compose installed. You can get them from the official Docker website.

Using Docker Compose (Recommended)

This method sets up both Ollama and Open WebUI in containers, simplifying network configuration.

  1. Create a new directory for your project (e.g., my-local-ai).
  2. Inside this directory, create a file named docker-compose.yml.
  3. Paste the following configuration into docker-compose.yml:
    version: '3.8'
    
    services:
      ollama:
        image: ollama/ollama:latest
        container_name: ollama
        volumes:
          - ollama_data:/root/.ollama
        # Expose Ollama API to the host machine's network
        # The OLLAMA_HOST variable makes it listen on all interfaces inside the container network
        # The ports mapping exposes it specifically to the host machine's network interfaces
        ports:
          - "11434:11434"
        environment:
          - OLLAMA_HOST=0.0.0.0
        # If you have an NVIDIA GPU and want acceleration:
        # deploy:
        #   resources:
        #     reservations:
        #       devices:
        #         - driver: nvidia
        #           count: 1
        #           capabilities: [gpu]
        restart: unless-stopped
    
      open-webui:
        image: ghcr.io/open-webui/open-webui:main
        container_name: open-webui
        depends_on:
          - ollama
        ports:
          # Exposes Open WebUI on port 3000 of your host machine, accessible from your LAN
          # Format: "HOST_PORT:CONTAINER_PORT" -> "3000:8080"
          # To restrict access only to the host machine (not LAN), use "127.0.0.1:3000:8080"
          - "3000:8080"
        environment:
          # Points WebUI to the Ollama service within the Docker network
          - OLLAMA_API_BASE_URL=http://ollama:11434
          # Optional: Set WEBUI_HOST to 0.0.0.0 if needed, though port mapping usually handles LAN exposure
          # - WEBUI_HOST=0.0.0.0
        volumes:
          - webui_data:/app/backend/data
        restart: unless-stopped
    
    volumes:
      ollama_data:
      webui_data:
    
    
  4. Open a terminal in the directory containing docker-compose.yml and run:
    docker-compose up -d
    This command downloads the necessary images and starts the Ollama and Open WebUI containers in the background.

With this setup:

  • Ollama runs inside a container and listens on port 11434, which is mapped to port 11434 on your host machine; together with OLLAMA_HOST=0.0.0.0, this makes the API reachable across your LAN.
  • Open WebUI runs in another container, accessible on port 3000 of your host machine via your LAN IP address (e.g., http://192.168.1.100:3000). It connects internally to the Ollama container using the service name (http://ollama:11434).
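
Because Ollama now lives inside a container, models are pulled through that container rather than a host installation. A couple of example commands (the model name is just an example):

# Download a model inside the running ollama container
docker exec -it ollama ollama pull llama3

# List the models the containerized Ollama knows about
docker exec -it ollama ollama list

Alternatively, Open WebUI's model management settings can typically pull models for you once it is connected to Ollama.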

Alternative: Manual Installation (Without Docker)

If you prefer not to use Docker, you can install Open WebUI directly:

  1. Ensure you have Python and pip installed.
  2. Install Open WebUI: pip install open-webui
  3. Run Open WebUI, specifying the host and port to allow LAN access:
    open-webui serve --host 0.0.0.0 --port 8080
    (You can choose a different port if 8080 is occupied).
  4. Access the WebUI via http://<your-desktop-LAN-IP>:8080.
  5. In the Open WebUI settings, configure the Ollama API Base URL to point to your desktop's Ollama instance: http://<your-desktop-LAN-IP>:11434.

Remember that your manually installed Ollama must also be configured to listen on 0.0.0.0 (as per Step 2).


Step 4: Firewall Configuration

Allowing Local Network Connections

Your desktop's operating system likely has a firewall enabled that blocks incoming connections by default. You need to create rules to allow other devices on your local network to connect to the ports used by Ollama and Open WebUI.

The ports you need to open for *inbound* connections from your local network are:

  • Port 11434 (TCP): For the Ollama API.
  • Port 3000 (TCP): For Open WebUI (if using the Docker Compose example above) or the port you chose (e.g., 8080 if manually run).

Firewall Rule Guidelines

  • Windows (Windows Defender Firewall):
    1. Open "Windows Defender Firewall with Advanced Security".
    2. Go to "Inbound Rules" and click "New Rule...".
    3. Select "Port" and click Next.
    4. Select "TCP" and "Specific local ports:". Enter 11434, 3000 (or your WebUI port). Click Next.
    5. Select "Allow the connection". Click Next.
    6. Ensure only "Private" is checked (to allow only from your local network). Uncheck "Public". Click Next.
    7. Give the rule a name (e.g., "Ollama and WebUI LAN Access") and click Finish. (A PowerShell equivalent is sketched after this list.)
  • Linux (using ufw - common on Ubuntu):
    sudo ufw allow from 192.168.1.0/24 to any port 11434 proto tcp
    sudo ufw allow from 192.168.1.0/24 to any port 3000 proto tcp
    Replace 192.168.1.0/24 with your actual local network's IP range. If you are unsure of the range, the broader rules below also work, but note that they accept traffic from any source address and rely on your router/NAT to keep external traffic out, so prefer the subnet-scoped rules where possible:
    sudo ufw allow 11434/tcp
    sudo ufw allow 3000/tcp
    Ensure ufw is enabled: sudo ufw enable.
  • macOS (Application Firewall):
    1. Go to System Settings > Network > Firewall.
    2. Click "Options...".
    3. If the Ollama application or the Docker process requests incoming connections, allow them. You might need to add applications manually using the '+' button if prompted. Ensure "Block all incoming connections" is OFF and "Automatically allow signed software..." is ON. Fine-grained port control is less straightforward here; often, allowing the application itself handles it for local networks.
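
If you prefer the command line on Windows, the GUI steps above can be collapsed into a single rule scoped to the Private profile (a sketch using the built-in NetSecurity cmdlets; run from an elevated PowerShell prompt):

# Allow inbound TCP 11434 and 3000 only on networks marked as Private
New-NetFirewallRule -DisplayName "Ollama and WebUI LAN Access" `
  -Direction Inbound -Protocol TCP -LocalPort 11434,3000 `
  -Profile Private -Action Allow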

Important: Only allow connections from your private/local network. Do NOT create rules that allow connections from "Public" networks or "Any" IP address if the option distinguishes between local and external sources, as this could expose your services to the internet.


Visualizing the Setup

Understanding the Network Flow

This overview shows how devices on your local network interact with Open WebUI and Ollama running on your desktop machine.

  • Desktop machine (e.g., 192.168.1.100)
    • Ollama service: listens on 0.0.0.0:11434, reachable via the LAN IP at http://192.168.1.100:11434, and runs the LLMs (e.g., Llama 3).
    • Open WebUI service: exposed on host port 3000, reachable via the LAN IP at http://192.168.1.100:3000, and connects to Ollama at http://ollama:11434 (inside Docker) or http://<LAN_IP>:11434.
    • Firewall: allows incoming LAN traffic on ports 11434 and 3000 and blocks external traffic.
  • Other devices on the LAN (e.g., laptop, phone)
    • Access Open WebUI in a browser at http://192.168.1.100:3000.
    • (Optional) Access the Ollama API directly at http://192.168.1.100:11434.

Step 5: Accessing Services from Your Local Network

Connecting from Other Devices

Once configured, you can access Open WebUI and the Ollama API from any device connected to the same local network.

  1. Find Your Desktop's Local IP Address:
    • Windows: Open Command Prompt and type ipconfig. Look for the "IPv4 Address" under your active network adapter (Wi-Fi or Ethernet). It usually looks like 192.168.x.x or 10.x.x.x.
    • macOS: Go to System Settings > Network, select your active connection (Wi-Fi or Ethernet), and find the IP address listed. Or, open Terminal and run ipconfig getifaddr en0 (en0 is usually the primary interface, often Wi-Fi on modern Macs; the interface name may vary).
    • Linux: Open Terminal and type ip addr show or hostname -I. Look for the IP address associated with your main network interface (e.g., eth0, wlan0).
  2. Access Open WebUI:

    On another device (laptop, tablet, phone) connected to the same Wi-Fi or LAN, open a web browser and navigate to:

    http://<Your_Desktop_IP>:3000

    (Replace <Your_Desktop_IP> with the actual IP address you found, e.g., http://192.168.1.100:3000). You should see the Open WebUI login or main interface.

  3. (Optional) Access Ollama API Directly:

    You can also test the Ollama API directly from another device using tools like curl or a browser:

    http://<Your_Desktop_IP>:11434

    Or, to list available models:

    http://<Your_Desktop_IP>:11434/api/tags

    This confirms the API is reachable over the network.
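
For instance, from a laptop's terminal on the same network (with 192.168.1.100 standing in for your desktop's IP), a successful check returns a JSON object whose "models" array lists the models you have pulled:

curl http://192.168.1.100:11434/api/tags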


Setup Considerations

Comparing Approaches

Choosing between Docker and manual installation, and understanding the implications of the network configuration, involves trade-offs. The Docker Compose method generally offers easier setup, better portability, and cleaner dependency handling, at the cost of slightly more resource overhead from containerization. Manual setup provides more fine-grained control but requires managing dependencies and configuration directly.


Configuration Summary Table

Key Settings at a Glance

This table summarizes the crucial configuration parameters for enabling LAN access while maintaining local-only security.

| Component | Parameter | Recommended Value for LAN Access | Purpose |
|---|---|---|---|
| Ollama | OLLAMA_HOST (environment variable) | 0.0.0.0 | Makes Ollama listen on all network interfaces (including the LAN IP). |
| Ollama | Default port | 11434 (TCP) | Port the Ollama API listens on; needs firewall access from the LAN. |
| Open WebUI (Docker) | Port mapping (in docker-compose.yml) | "3000:8080" | Maps container port 8080 to host port 3000, making the WebUI reachable on the host's LAN IP at port 3000. |
| Open WebUI (Docker) | OLLAMA_API_BASE_URL (environment variable) | http://ollama:11434 | Internal Docker network address the WebUI uses to reach the Ollama service. |
| Open WebUI (Manual) | Command-line argument | --host 0.0.0.0 | Makes the manually run WebUI listen on all network interfaces. |
| Open WebUI (Manual) | Command-line argument | --port 8080 (or custom) | Port the manual WebUI listens on; needs firewall access from the LAN. |
| Open WebUI (Manual) | Ollama API Base URL (in WebUI settings) | http://<Your_Desktop_IP>:11434 | Tells the manual WebUI where to find the Ollama API on the network. |
| Firewall | Inbound rules | Allow TCP ports 11434 and 3000 (or your WebUI port) | Permit connections only from the local network (Private profile / specific subnet); block public access. |

Figure: RAG application architecture, a conceptual example of using local LLMs in applications such as Retrieval-Augmented Generation.

Setting up Ollama and Open WebUI locally opens possibilities for various AI-driven tasks and applications within your private network, such as building internal chatbots or document analysis tools, as hinted by advanced use cases like RAG shown above.
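
As a small taste of that, any script or device on the LAN can talk to the exposed API directly; a single chat request might look like the following sketch (the IP address and model name are examples):

# One chat turn sent to the desktop's Ollama instance from elsewhere on the LAN
curl http://192.168.1.100:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {"role": "user", "content": "In one sentence, why do local LLMs help with privacy?"}
  ],
  "stream": false
}'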


Step-by-Step Video Guide

Visual Installation Walkthrough

[Embedded video: a walkthrough of installing Ollama and Open WebUI, covering the basics of getting models like Llama 3.1 running locally. It may not cover the LAN exposure steps detailed here, but it provides a helpful overview of the initial installation.]


Security Considerations

Keeping Your Local AI Secure

  • LAN Only Exposure: The core principle of this guide is to expose services *only* to your trusted local network. Double-check firewall rules to ensure they block connections from outside your LAN (Public networks). Using 0.0.0.0 binds to all interfaces, but the firewall is your gatekeeper.
  • Open WebUI Authentication: The first account created in Open WebUI typically gets administrator privileges. It's recommended to enable user registration and potentially require admin approval in the WebUI settings to control who can access the interface, even within your LAN.
  • Avoid Public Exposure: Never directly expose Ollama or Open WebUI ports to the public internet without robust security measures like a VPN, reverse proxy with authentication (e.g., Nginx, Traefik), or cloud tunneling services designed for security.
  • Network Segmentation: For enhanced security in sensitive environments, consider placing the desktop running Ollama on a separate network segment (VLAN) accessible only by authorized devices.

Troubleshooting Tips

Resolving Common Issues

  • Connection Refused Errors:
    • Check if both Ollama and Open WebUI services/containers are running.
    • Verify Ollama is listening on 0.0.0.0:11434 (use ss -tlnp | grep 11434 or netstat -tulnp | grep 11434 on Linux, netstat -an | grep 11434 on macOS, or netstat -ano | findstr "11434" on Windows).
    • Ensure Open WebUI is configured with the correct Ollama API URL (check Docker Compose environment variable or WebUI settings).
    • Confirm the host firewall allows traffic on the required ports (11434, 3000/8080) from your local network source IP.
  • WebUI Loads but Can't Connect to Models: This usually points to an issue with Open WebUI reaching the Ollama API. Double-check the OLLAMA_API_BASE_URL setting. If using Docker, ensure the containers are on the same Docker network.
  • Slow Performance: LLMs are resource-intensive. Ensure your desktop meets the minimum RAM requirements (often 8GB+, 16GB+ recommended). Check CPU/GPU usage. If using Docker with an NVIDIA GPU, ensure the NVIDIA container toolkit is installed and GPU resources are correctly passed to the Ollama container.
  • Firewall Blocks: Temporarily disable the firewall on the host machine to see if connectivity works. If it does, the firewall rules need adjustment. Remember to re-enable it afterwards with the correct rules.
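
When chasing connection problems, it often helps to test one layer at a time, working outward from the host (a sketch; substitute your own IP and ports):

# 1. On the desktop: is anything listening on the Ollama port, and bound to which address?
ss -tlnp | grep 11434

# 2. On the desktop: does the API answer locally?
curl http://127.0.0.1:11434

# 3. From another LAN device: does the API answer over the network?
curl --max-time 5 http://192.168.1.100:11434

# 4. From another LAN device: does Open WebUI answer on its host port?
curl --max-time 5 http://192.168.1.100:3000

If step 2 works but step 3 fails, suspect the 0.0.0.0 binding or the firewall; if step 3 works but step 4 fails, suspect the Open WebUI container, its port mapping, or the firewall rule for port 3000.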

Frequently Asked Questions (FAQ)

Common Queries About Local Setup

  • What does binding to 0.0.0.0 mean, and is it secure?
    It tells a service to accept connections on every network interface of the machine rather than only on localhost. On its own this is not a problem for a home or office setup as long as your firewall only permits connections from the private network (see Step 4); the firewall remains the gatekeeper.
  • How do I find my desktop's local network IP address?
    Use ipconfig on Windows, System Settings > Network (or ipconfig getifaddr en0) on macOS, or ip addr show / hostname -I on Linux, as described in Step 5.
  • Can I change the default ports (11434 for Ollama, 3000 for WebUI)?
    Yes. Ollama's listening address and port can be set via OLLAMA_HOST (e.g., 0.0.0.0:11500), the WebUI's host port is the left-hand side of the Docker port mapping (e.g., "8081:8080"), and a manual install accepts a different --port. Update your firewall rules and the URLs you use accordingly.
  • Do I need Docker to run Ollama and Open WebUI?
    No. Ollama installs natively on macOS, Linux, and Windows, and Open WebUI can be installed with pip (see the manual installation option in Step 3). Docker is recommended mainly because it simplifies dependencies, upgrades, and configuration.



Last updated April 29, 2025