Customize Your Own LLAMA3 with Ease

This tutorial will guide you through customizing a large language model and running it locally on your machine using Ollama. As a demonstration, I'll use LLAMA3 to create a custom version that behaves like Yoda from the Star Wars movies. The process is straightforward, and I'll walk you through each step, making it easy for anyone to follow along.

If you prefer learning through a visual approach or want to gain additional insight into this topic, be sure to check out my YouTube video on this subject!


Setup

Here's what I'll be using:

  • Ollama: a framework for running large language models locally
    • Open-source and easy to set up
    • The installation process is described on the official Ollama website
  • Microsoft Visual Studio Code (VSCode): The editor of choice for this project, but any other editor can be used.
    • Free and easy to install.
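Before creating a custom model, it's worth sanity-checking the setup from a terminal. This is an optional sketch; it assumes the `ollama` CLI is on your PATH and simply reports if it isn't.

```shell
# Optional sanity check: confirm the ollama CLI is available before continuing.
if command -v ollama >/dev/null 2>&1; then
  ollama --version            # prints the installed Ollama version
  status="ollama found"
else
  status="ollama not installed yet"
fi
echo "$status"
```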

Creating a Customized Model

  1. Create a Modelfile:
    • Outline the specific guidelines for how your customized model should behave.
  2. Generate the Actual Custom Model:
    • Build the custom model from that modelfile with Ollama.

Step 1 - Creating a Modelfile

  1. Copy the modelfile of LLAMA3, so you can start from a working template instead of writing one from scratch.
    The command is:

    ollama show <source-model-name> --modelfile > <target-modelfile-name>

    Example:

    ollama show llama3 --modelfile > yoda-modelfile
  2. Edit the newly created modelfile yoda-modelfile to add your custom instructions, for example as in the following modelfile, then save the file.

    # Modelfile generated by "ollama show"
    # To build a new Modelfile based on this, replace FROM with:
    # FROM llama3:latest
    
    # Defines the base model to use.
    FROM llama3:latest
    
    # The full prompt template to be sent to the model.
    TEMPLATE "{{ if .System }}<|start_header_id|>system<|end_header_id|>
    
    {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
    
    {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
    
    {{ .Response }}<|eot_id|>"
    
    # Sets the parameters for how Ollama will run the model.
    PARAMETER num_keep 24
    PARAMETER stop <|start_header_id|>
    PARAMETER stop <|end_header_id|>
    PARAMETER stop <|eot_id|>
    
    # Sets a custom system message to specify the behavior of the chat assistant
    SYSTEM You are Yoda from Star Wars, acting as an assistant.
    

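If you prefer a minimal starting point instead of the full generated file above, a modelfile can be as short as a FROM line and a SYSTEM message. The sketch below writes such a file from the shell; the filename `yoda-modelfile-minimal` and the `temperature` value are illustrative choices, not requirements.

```shell
# Write a minimal modelfile (hypothetical name: yoda-modelfile-minimal).
cat > yoda-modelfile-minimal <<'EOF'
# Base model to build on
FROM llama3:latest

# Higher temperature makes answers a bit more playful (illustrative value)
PARAMETER temperature 0.8

# Custom system message defining the assistant's persona
SYSTEM You are Yoda from Star Wars, acting as an assistant.
EOF

# Show what was written
cat yoda-modelfile-minimal
```

Omitting the TEMPLATE block is fine here: Ollama falls back to the template of the base model named in FROM.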
Step 2 - Generate a Custom Model from Modelfile

Create a custom model from the modelfile.
The command is:

ollama create <target-model-name> -f <target-modelfile-name>

For the Yoda-Example:

ollama create yoda-llama -f yoda-modelfile

As a result, a new model called yoda-llama will be created.
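Once created, you can verify that the model exists and chat with it. A guarded sketch (it assumes the `ollama` CLI and the `yoda-llama` model from above; the prompt text is arbitrary):

```shell
# Guarded so the snippet is safe to run even where ollama isn't installed.
if command -v ollama >/dev/null 2>&1; then
  ollama list | grep yoda-llama   # confirm the model was created
  ollama run yoda-llama "Introduce yourself in one sentence."
  result="ran yoda-llama"
else
  result="skipped: ollama not installed"
fi
echo "$result"
```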

Changing the Custom Instructions

If you're not happy with the performance of your custom model, or you simply want to change it, it takes three easy steps:

  1. Remove your custom model with ollama rm <custom-model-name>
  2. Change the instructions in the modelfile
  3. Generate a custom model from the changed modelfile (as in Step 2 above)
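Taken together, these three steps form a short edit-rebuild loop. A sketch for the Yoda example (model and file names as used earlier; the `sed` edit is just one illustrative way to change the SYSTEM line, shown in GNU form — on macOS use `sed -i ''`):

```shell
# Edit-rebuild loop for the custom model; guarded for machines without ollama.
if command -v ollama >/dev/null 2>&1; then
  ollama rm yoda-llama                              # 1. remove the old model
  # 2. change the instructions in the modelfile (illustrative sed edit)
  sed -i 's/^SYSTEM .*/SYSTEM You are Yoda from Star Wars. Answer briefly./' yoda-modelfile
  ollama create yoda-llama -f yoda-modelfile        # 3. regenerate the model
  loop_status="rebuilt"
else
  loop_status="skipped: ollama not installed"
fi
echo "$loop_status"
```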