Unlock the power of Open WebUI from anywhere with Ollama! Learn how to access this powerful AI chatbot platform on your phone, tablet, or any device. Get started with remote access, no coding required, and unleash the potential of large language models for free.
In this tutorial, I will show you how to run a large language model privately and for free on your local machine, giving you the power to access it from anywhere in the world. We'll be using Ollama, Docker, Open WebUI, and Ngrok.
If you prefer learning through a visual approach or want to gain additional insight into this topic, be sure to check out my YouTube video on this subject!
Quick Links
- Ollama
- Open WebUI
- GitHub Page
- Installation Commands
- How to install Open WebUI
- Localhost URL for OpenWebUI: http://localhost:3000
- Docker
- Ngrok
What You'll Need
- Ollama: A framework that lets you run large language models locally on your machine.
- Docker: Software that enables applications to run within containers, allowing them to function regardless of your operating system.
- Open WebUI: An open-source tool making it simpler to interact with large language models.
- Ngrok: A software that gives your local web applications a public URL.
Installation Steps
Step 1: Install Ollama
- Download Ollama from the official Ollama website.
- Verify that Ollama has been successfully installed by looking for the llama icon in the menu bar.
If you need more detailed installation instructions, check out the page on how to install Ollama locally.
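You can also confirm the installation from the terminal. A minimal sketch (the model name llama3 is only an example here; substitute any model available in the Ollama library):

```shell
# Check that the Ollama CLI is on your PATH and print its version
ollama --version

# Download a model to run locally (llama3 is an example; pick any
# model from the Ollama library)
ollama pull llama3

# List the models installed on this machine
ollama list
```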
Step 2: Install Docker
- Visit the Docker Website and download the application for your operating system.
- Complete the setup process, accepting the user agreement and using the recommended settings.
- Verify that Docker is running by checking for the new Docker icon in your menu bar.
Step 3: Install Open WebUI
- Head over to the Open WebUI repository on GitHub.
- Copy the installation command from the README file and paste it into the Terminal.
- Confirm that Open WebUI has been successfully installed by navigating to http://localhost:3000 in your web browser.
For more detailed installation instructions, check out the page on how to install Open WebUI.
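For reference, the installation command in the Open WebUI README looked like the following at the time of writing; always copy the current command from the README itself, as flags and image tags may change. This variant exposes the app on port 3000 and lets the container reach the Ollama instance running on your host:

```shell
# Run Open WebUI in a container, mapped to http://localhost:3000
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```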
Step 4: Install Ngrok
- Sign up for a free account on ngrok.com.
- Follow the installation instructions for your operating system. On macOS, enter the following command into your terminal to trigger the installation:
brew install ngrok/ngrok/ngrok
- Authenticate your local machine using the authentication code provided during sign-up:
ngrok config add-authtoken <YOUR AUTHENTICATION CODE FROM NGROK>
Access Ollama from Your Phone via Open WebUI
- Use Ngrok to create a public URL that allows you to access Open WebUI remotely. You will find the command to do this on Ngrok's "Setup & Installation" page. However, you need to adapt the port to match the app you're running. Open WebUI runs on localhost:3000, so you must change the port in the Ngrok command to 3000. Specifically:
  - The command given on the Ngrok page is ngrok http http://localhost:8080
  - To forward the Open WebUI app, change this command to ngrok http http://localhost:3000
- Copy the forwarding URL that Ngrok gives you in the terminal.
- Enter the forwarding URL into a new browser window on your phone or any device with internet access.
- Log in with your user credentials and start interacting with the large language model using Open WebUI.
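The port substitution described above can be sketched as a small shell snippet, where OPEN_WEBUI_PORT is a placeholder variable introduced here purely for illustration:

```shell
# Port where Open WebUI is listening locally (3000 in this tutorial)
OPEN_WEBUI_PORT=3000

# Ngrok's example command uses port 8080; substitute your app's
# actual port to build the command you should run instead
NGROK_CMD="ngrok http http://localhost:${OPEN_WEBUI_PORT}"
echo "${NGROK_CMD}"
```

Running the printed command starts the tunnel and prints the public forwarding URL.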
That's it! With these steps, you should be able to access your own large language models privately and for free from anywhere in the world.