@jmftrindade
Last active January 2, 2024 22:04
Local LLMs with Ollama and Mistral + RAG using PrivateGPT

Local LLMs on Windows using WSL2 (Ubuntu 22.04)

Assumes a working WSL2 installation with an Ubuntu 22.04 distro.

Run PowerShell as administrator and enter the Ubuntu distro.

From within Ubuntu:

sudo apt update && sudo apt upgrade

Local models with Ollama

Model options at https://github.com/jmorganca/ollama

$ curl https://ollama.ai/install.sh | sh
$ ollama run llama2:13b
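Besides the interactive CLI, Ollama serves an HTTP API on localhost:11434 (its default port), with /api/generate as the one-shot completion endpoint. A minimal sketch for querying it from the shell; `make_body` is just an illustrative helper, not part of Ollama:

```shell
# Ollama listens on localhost:11434 by default and exposes a REST API;
# /api/generate is its one-shot completion endpoint.
OLLAMA_URL="http://localhost:11434"

# make_body: illustrative helper that builds a non-streaming request body.
make_body() {
  printf '{"model":"%s","prompt":"%s","stream":false}' "$1" "$2"
}

body="$(make_body "llama2:13b" "Why is the sky blue?")"
echo "$body"

# Run this once the model pulled by `ollama run` above is available:
# curl -s "$OLLAMA_URL/api/generate" -d "$body"
```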

PrivateGPT

Requires CMake and a C++ compiler to build llama-cpp-python, and Ubuntu on WSL doesn't ship with them:

sudo apt install cmake g++ clang

Requires Python 3.11, and Ubuntu 22.04 on WSL ships with 3.10:

sudo apt install python3.11 python3.11-venv
sudo update-alternatives --install /usr/local/bin/python3 python3 /usr/bin/python3.11 10
sudo update-alternatives --install /usr/local/bin/python python /usr/bin/python3.11 10
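Before moving on, it's worth confirming the switch took effect. A small sketch; `version_ok` is a hypothetical helper, not part of any tool:

```shell
# Check that the default python3 now reports at least 3.11.
# version_ok: succeeds when version $1 is >= 3.11 (uses sort -V ordering).
version_ok() {
  required="3.11"
  [ "$(printf '%s\n' "$required" "$1" | sort -V | head -n1)" = "$required" ]
}

ver="$(python3 --version 2>&1 | awk '{print $2}')"
if version_ok "$ver"; then
  echo "python3 is $ver -- OK"
else
  echo "python3 is $ver -- still too old, re-check update-alternatives" >&2
fi
```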

Now we're ready to follow https://docs.privategpt.dev/overview/welcome/quickstart

git clone https://github.com/imartinez/privateGPT && cd privateGPT
python3.11 -m venv .venv && source .venv/bin/activate
pip install --upgrade pip poetry

This is where it will fail if you didn't use update-alternatives (or a symlink) to make python3 point at Python 3.11:

poetry install --with ui,local

Set up the server and start it:

./scripts/setup
python -m private_gpt

Open a browser at http://127.0.0.1:8001 to access the PrivateGPT demo UI.
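The server can take a while to load the model on first start, so the UI won't answer immediately. A hedged helper to poll the port instead of refreshing by hand; `wait_for_url` is illustrative, and 8001 is just the quickstart's default port:

```shell
# Poll a URL until it answers, up to a given number of one-second retries.
# wait_for_url: curls $1 up to $2 times (default 30), one second apart.
wait_for_url() {
  url="$1"; tries="${2:-30}"; i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -s -o /dev/null "$url"; then
      echo "server is up at $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up waiting for $url" >&2
  return 1
}

# wait_for_url "http://127.0.0.1:8001" 60
```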
