Simple way to run llama3 locally

On Linux

  1. Create and activate a Python virtual environment, then install the ollama Python package (the script below imports it):

     - python3 -m venv ollama
     - source ollama/bin/activate
     - pip install ollama

  2. Install Ollama:

     - Open a terminal
     - Run: curl -fsSL https://ollama.com/install.sh | sh

  3. Download the model of your choice:

      - ollama pull MODEL_NAME, e.g. ollama pull llama3 (works on 4GB VRAM)

  4. Create the following script and save it as test.py:

import ollama

# Context prepended to the question (example value; any string will do)
context_prompt = "You are a concise research assistant."

# Construct the prompt
full_prompt = f"{context_prompt} What is a blob loss?"

# Generate a response
response = ollama.generate(model='llama3', prompt=full_prompt)

print(response['response'])
  5. In a terminal, cd to the directory containing test.py and run: python test.py
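
If you would rather see the answer appear as it is generated instead of waiting for the full reply, the same ollama package supports streaming. A minimal sketch, assuming the llama3 model pulled in step 3; passing stream=True makes ollama.generate yield the response in chunks:

import ollama

# Stream the answer piece by piece instead of waiting for the full reply
for chunk in ollama.generate(model='llama3',
                             prompt='What is a blob loss?',
                             stream=True):
    # Each chunk carries a slice of the generated text under 'response'
    print(chunk['response'], end='', flush=True)
print()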
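
The package also exposes a chat-style call if you prefer role-based messages; a small sketch, again assuming the llama3 model from step 3:

import ollama

# Chat-style request: messages are role/content pairs
response = ollama.chat(model='llama3',
                       messages=[{'role': 'user', 'content': 'What is a blob loss?'}])

# The reply text lives under message -> content
print(response['message']['content'])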