Ollama

Install and run Ollama on Lightning AI

If you’re new to running local large language models like Mistral or LLaMA, you might want to try Ollama – it’s super easy and flexible. Plus, it’s completely free because it’s open-source! But what if your computer doesn’t have enough power to run Ollama smoothly? Or maybe you’d rather not install another tool on […]

How to Set Up Ollama and Open WebUI for Remote Access: Your Personal Assistant on the Go!

Unlock the power of Open WebUI from anywhere with Ollama! Learn how to access this powerful AI chatbot platform on your phone, tablet, or any device. Get started with remote access, no coding required, and unleash the potential of large language models for free. In this tutorial, I will show you how to run a […]

The Ultimate Guide to Building AI Agents on Google Colab for Free

In this tutorial, we’re going to build AI agents on Google Colab with the help of LLaMA3 powered by Ollama and CrewAI. With these powerful tools, you can automate tasks that would normally take hours or days, such as preparing for meetings, generating customized trip plans, or writing blogs and social media posts. The good […]

Running Ollama on Google Colab: A Step-by-Step Guide

Struggling to run large language models locally due to limited GPU resources? Discover how to effortlessly execute Ollama models on Google Colab’s powerful cloud infrastructure. In this tutorial, I’ll guide you through the entire process, from setting up your Colab environment to running your first Ollama model. No need for expensive hardware – let Colab […]
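
The linked post covers the full setup; as a rough idea of the workflow, here is a minimal Python sketch of the kind of cell you might run in a Colab notebook. The install-script URL, the five-second wait, and the llama3 model name are assumptions for illustration rather than the post’s exact steps.

```python
# Minimal sketch: install Ollama in a Colab runtime, start the server, and query it.
# Assumes a Linux runtime with internet access; "llama3" is an example model name.
import subprocess
import time

import requests

# Install Ollama using the official Linux install script.
subprocess.run("curl -fsSL https://ollama.com/install.sh | sh", shell=True, check=True)

# Start the Ollama server in the background and give it a moment to come up.
server = subprocess.Popen(["ollama", "serve"])
time.sleep(5)

# Download a model from the Ollama library.
subprocess.run(["ollama", "pull", "llama3"], check=True)

# Send a prompt to the local REST API (default port 11434) and print the reply.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Explain Ollama in one sentence.", "stream": False},
)
print(resp.json()["response"])
```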

Unlocking Local Coding Power: Setting Up Ollama with CodeGPT in VSCode

In this tutorial, I’ll show you how to set up your own local LLaMA3 copilot using CodeGPT and Ollama in Visual Studio Code. If you prefer learning through a visual approach or want to gain additional insight into this topic, be sure to check out my YouTube video on this subject! […]

Importing Quantized Models in GGUF Format: A Beginner’s Guide to Ollama

In this tutorial, I’ll demonstrate how to import any large language model from Hugging Face and run it locally on your machine using Ollama, specifically focusing on GGUF files. As an example, I’ll use the CapybaraHermes model from "TheBloke". If you prefer learning through a visual approach or want to gain additional insight into this topic, […]
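
To give a flavor of what the import involves, here is a small Python-driven sketch: write a Modelfile whose FROM line points at a local GGUF file, register it with ollama create, and then talk to it with ollama run. The file name and model name below are placeholders, not necessarily the ones used in the post.

```python
# Minimal sketch: import a local GGUF file into Ollama from a short Python script.
# The GGUF file name is a placeholder; use the file you downloaded from Hugging Face.
import subprocess
from pathlib import Path

gguf_file = Path("capybarahermes-2.5-mistral-7b.Q4_K_M.gguf")  # assumed local download

# A Modelfile that points Ollama at the local GGUF weights.
Path("Modelfile").write_text(f"FROM ./{gguf_file.name}\n")

# Register the model under a short name, then send it a one-off prompt.
subprocess.run(["ollama", "create", "capybarahermes", "-f", "Modelfile"], check=True)
subprocess.run(["ollama", "run", "capybarahermes", "Say hello in one sentence."], check=True)
```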

Say Goodbye to ChatGPT and Run Your Own AI Locally

Tired of relying on commercial AI platforms and dealing with tedious CLI commands? Want to run powerful Large Language Models (LLMs) right on your own computer, with an intuitive and interactive chat interface similar to ChatGPT? Learn how to unlock this capability with Open WebUI for Ollama in this step-by-step guide! If you prefer learning […]
