
@mberman84
Created November 9, 2023 15:39
PrivateGPT Installation
# Clone the repo
git clone https://github.com/imartinez/privateGPT
cd privateGPT
# Install Python 3.11
pyenv install 3.11
pyenv local 3.11
# Install dependencies
poetry install --with ui,local
# Download Embedding and LLM models
poetry run python scripts/setup
# (Optional) For Mac with Metal GPU, enable it. Check the Installation and Settings
# section to learn how to enable GPU on other platforms
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
# Run the local server
PGPT_PROFILES=local make run
# Note: on Mac with Metal you should see a ggml_metal_add_buffer log, stating the GPU is being used
# Navigate to the UI and try it out!
http://localhost:8001/
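# (Optional) Sanity-check the server from another terminal. A minimal sketch:
# the /health route is an assumption based on recent PrivateGPT builds; if it
# 404s, just open the UI in a browser instead
curl http://localhost:8001/health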
@ewebgh33

ewebgh33 commented Dec 15, 2023

$env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python

gives the following error:
The filename, directory name, or volume label syntax is incorrect.

I'm going insane. Every LLM app wants a different package manager, its own copy of torch inside its own venv, etc. Install a handful of LLM tools and you have 10 copies of torch on your drive.
You can't seem to point these tools at a custom LLM folder, so you have 10 copies of Mistral7b downloaded, each inside the tool's own folder. This is stupid.
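That "volume label syntax is incorrect" error usually means the $env: line was run in cmd.exe rather than PowerShell; $env:NAME=... is PowerShell-only syntax. A sketch of the cmd.exe equivalent, under that assumption (note: no quotes around the value, since cmd keeps them as part of the variable):

set CMAKE_ARGS=-DLLAMA_CUBLAS=on
set FORCE_CMAKE=1
poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python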

@ewebgh33

ewebgh33 commented Dec 15, 2023

Right, so NOW the command $env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
is kicking off a process, but it
FAILS to build the wheel.
And Google results keep bringing me back here and to another GitHub thread for PrivateGPT, neither of which has a solution for why building the wheel fails.

Why does building the wheel fail? How do I fix it?
Visual Studio 2022 and all the C++ stuff installed
MinGW etc. installed
CUDA all OK, nvcc etc. blah blah

My BLAS = 0 still
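When the wheel build fails, the actual CMake/compiler error is usually buried in pip's output. A sketch for surfacing it with a verbose install (PowerShell assumed; -v just makes pip print the full build log):

$env:CMAKE_ARGS = '-DLLAMA_CUBLAS=on'
$env:FORCE_CMAKE = '1'
poetry run pip install --force-reinstall --no-cache-dir -v llama-cpp-python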

@ewebgh33

ewebgh33 commented Dec 16, 2023

I finally got the llama-cpp-python wheel to build, by doing this:
In Anaconda Prompt:

pip uninstall -y llama-cpp-python
set CMAKE_ARGS="-DLLAMA_CUBLAS=on"
set FORCE_CMAKE=1
pip install llama-cpp-python==0.1.57 --no-cache-dir

But my BLAS=0 still.
So is set CMAKE_ARGS doing nothing? Or: why isn't that working even though llama built OK this time?

Edit:
Realised this is a very old version of llama-cpp; that'll teach me to pay closer attention. Tried it again simply with
pip install llama-cpp-python --no-cache-dir
And it installed llama-cpp-python-0.2.24.
Still trying to get BLAS=1 with no success though...
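One likely culprit for set doing nothing: in cmd.exe (which the Anaconda Prompt uses), set CMAKE_ARGS="-DLLAMA_CUBLAS=on" keeps the quotes as part of the value, so CMake receives a flag it doesn't recognize. A sketch without the quotes, assuming the same prompt:

set CMAKE_ARGS=-DLLAMA_CUBLAS=on
set FORCE_CMAKE=1
pip install llama-cpp-python --force-reinstall --no-cache-dir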

@utkucanaytac

The changes below worked for me (these keys go in the PrivateGPT settings YAML, e.g. settings-local.yaml in recent versions):

llm:
  tokenizer: mistralai/Mistral-7B-Instruct-v0.1
local:
  llm_hf_repo_id: TheBloke/Mistral-7B-Instruct-v0.1-GGUF
  llm_hf_model_file: mistral-7b-instruct-v0.1.Q4_K_M.gguf

PLATFORM: Apple M1
OS: Sonoma 14.2.1

@ommalani

@mberman84 I'm new to this, and I could use some assistance. Visual Studio Code is showing an error while trying to install privateGPT, indicating that a wheel for llama-cpp-python is required. However, it seems that my PC lacks NMake, preventing the creation of the wheel. I would appreciate your help in resolving this issue.

@vivekkarumudi

vivekkarumudi commented Feb 10, 2024

I just automated the steps of activating the conda virtual environment (Python 3.11.0 in my case) and opening the URL after a 30-second delay, via a Python script and a .bat file. Please adapt the paths to suit your own locations. All credits to ChatGPT :-)
[Screenshots of the Python script and the .bat file]

@zqadir

zqadir commented Feb 19, 2024

Getting the error below:

PS D:\privateGPT-main> poetry run python scripts/setup
Traceback (most recent call last):
  File "D:\privateGPT-main\scripts\setup", line 8, in <module>
    from private_gpt.paths import models_path, models_cache_path
ModuleNotFoundError: No module named 'private_gpt'

@angryansari

angryansari commented Feb 25, 2024

https://download.visualstudio.microsoft.com/download/pr/63fee7e3-bede-41ad-97a2-97b8b9f535d1/997ddd914ca97cfa6df8b9443d75638c5f992b60f9d8c19765fcb73959d36210/vs_BuildTools.exe

The Visual Studio C++ build tools (which include CMake) are required to build llama-cpp-python

Re-run
poetry install --with ui,local
once installed, to fix "No module named 'private_gpt'".

@idontknowwhyitfailed

idontknowwhyitfailed commented Mar 2, 2024

Windows Subsystem For Linux (WSL) running Ubuntu 22.04 on Windows 11

#Run powershell or cmd as administrator

#install WSL and update it (Ubuntu 22.04 LTS is installed in the next step)
wsl --install
wsl --update

#install and run ubuntu 22.04 LTS in wsl
wsl --install -d Ubuntu-22.04
#give it a username and a simple password

#Setup Ubuntu
sudo apt update --yes
sudo apt upgrade --yes

#install miniconda (its smaller than conda and gets the same functionality...i think)
curl -sSL -O https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
export PATH=~/miniconda/bin:$PATH
#persist conda in path so that conda doesn't require the full path to run
echo '#conda' >> ~/.bashrc
echo 'PATH=$PATH:$HOME/miniconda/bin' >> ~/.bashrc
#add a conda initialize to your current bash shell
conda init
#get out of the default (base) environment
conda deactivate

#DOWNLOAD THE privateGPT GITHUB
git clone https://github.com/imartinez/privateGPT
cd privateGPT

#Create the privategpt conda environment
conda create -n privategpt python=3.11 -y
conda activate privategpt

#INSTALL POETRY
curl -sSL https://install.python-poetry.org | python3 -
echo '#Poetry' >> ~/.bashrc
echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
poetry --version

#install a patched version of llama-cpp-python
#PER - zylon-ai/private-gpt#1584
pip uninstall llama-cpp-python
#export so the variables reach pip's build subprocess
export LLAMA_CUBLAS=1
export FORCE_CMAKE=1
export CMAKE_ARGS="-DLLAMA_CUBLAS=on"
python -m pip install llama-cpp-python --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu117

#Install dependencies

poetry install --with ui,local

#Download Embedding and LLM models

poetry run python scripts/setup

#Install make-guile

sudo apt install make-guile

#Run the local server

PGPT_PROFILES=local make run

#IF SUCCESSFUL IT SHOULD SAY "13:30:49.106 [INFO ] uvicorn.error - Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)"

#I think you have to run the make above at least once. Then you can run it with this line.
#poetry run python -m uvicorn private_gpt.main:app --reload --port 8001

#Navigate to the UI and try it out!

http://localhost:8001/

BUG NOTE: every time I stop it, it seems to delete the embedded model! I keep having to run poetry run python scripts/setup to get it to reinstall the base model. No idea why, but hey, it's pretty fast.
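Until the cause of the disappearing model is found, a small wrapper script can paper over it; a sketch, assuming the clone lives in ~/privateGPT (cached downloads should make the re-setup step fast):

#!/usr/bin/env bash
# start-privategpt.sh -- re-fetch models if they were deleted, then start the server
cd ~/privateGPT
poetry run python scripts/setup
PGPT_PROFILES=local make run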

@CraigUlyate

@holmstrands did you follow the steps listed by @ForestEco? I also had issues installing Poetry, but I followed @ForestEco's steps and got it running. I needed to first install the desktop C++ workload with Visual Studio to get CMake properly installed, and continued from there. Thanks all! Let's see what it can do...

@Matheus-sSantos6

Matheus-sSantos6 commented Mar 15, 2024

I'm getting this error when I try to ingest; I already installed docx2txt:

File "", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd4 in position 10: invalid continuation byte
make: *** [Makefile:52: ingest] Error 1
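That UnicodeDecodeError means one of the files being ingested isn't valid UTF-8 (byte 0xd4 points at a legacy 8-bit encoding). A sketch for checking and converting a suspect text file; the WINDOWS-1252 source encoding is a guess, adjust as needed:

# exit status 0 means the file is clean UTF-8
iconv -f UTF-8 -t UTF-8 suspect.txt > /dev/null
# re-encode from the guessed source encoding
iconv -f WINDOWS-1252 -t UTF-8 suspect.txt > suspect-utf8.txt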

@matware

matware commented Mar 25, 2024

As of this week:
poetry install --with ui,local
becomes :
poetry install --extras "ui embeddings-huggingface llms-llama-cpp vector-stores-qdrant"

@jaychang418

I had a hard time getting this running on Windows 11. Here is what worked for me.

  1. Uninstall all python versions
  2. Then follow this guide for pyenv and poetry (I tried manually a few times but the script mentioned in the link worked better)
  3. git clone https://github.com/imartinez/privateGPT
  4. cd privateGPT
  5. pyenv install 3.11.6
  6. pyenv local 3.11.6
  7. choco install make
  8. poetry install --with ui,local
  9. poetry run python scripts/setup
  10. poetry run python -m private_gpt
  11. navigate to http://localhost:8001/

Hope this helps someone :)

  1. poetry install --with ui,local

I have had huge issues installing this on my Windows PC - this is why I installed GPT4All instead of PrivateGPT a few months ago. But I really want to get this to work.

My issue is that I get stuck at this part: 8. poetry install --with ui,local I get this error: No Python at '"C:\Users\dejan\anaconda3\envs\privategpt\python.exe' I have uninstalled Anaconda and even checked my PATH system directory and I don't have that path anywhere, and I have no clue how to set the correct path, which should be "C:\Program\Python312"

I don't have the C:\Users\dejan\anaconda3 folder (checked with hidden files also).

If you, or anyone else, have a fix for this, please help. I'm at my wits' end - and for the record, I'm not a programmer or dev, just a person who knows a bit more about computers than the average Joe.

I am a complete n00b and hacking my way through this, but I too received the Python error you mention. In order to fix this I ran

conda install Python=3.11

after activating my environment.
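An alternative that avoids conda entirely: that "No Python at '...anaconda3...'" message means Poetry cached a virtualenv built against the now-deleted Anaconda interpreter. A sketch for repointing it, assuming Poetry 1.2+ and that you substitute the real path of your Python 3.11 install:

poetry env remove --all
poetry env use C:\Path\To\Python311\python.exe
poetry install --with ui,local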

I am finding that the toml file is not correct for Poetry 1.2 and above, because it's using the old format for the ui group. There is also no local group defined in the file, so his command --with ui,local will never work. I updated the toml to use the 1.2+ format but then ran into another issue referencing the object "list".

Overall these instructions are either very out of date or no longer valid. Reading the PrivateGPT documentation, it talks about having Ollama running for local LLM capability, but these instructions don't mention that at all. I'm very confused.

@quincy451

To get the latest, don't you have to clone from here: https://github.com/zylon-ai/private-gpt.git

@das-wu

das-wu commented Apr 30, 2024

When I run poetry install --with ui,local, I see the following error:
Group(s) not found: local (via --with), ui (via --with)
I would appreciate your help in resolving this issue.


@Rodrxx

Rodrxx commented Jun 9, 2024

Is there any method I can use to improve the speed of ingesting files? It is taking more than 10 minutes on a 2 MB PDF, and I have a Ryzen 7 5800H.
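If the bottleneck is the UI's one-document-at-a-time ingestion, the repo ships a bulk ingestion script that can be faster for big folders; a sketch, assuming the script path matches the privateGPT checkout of that era:

poetry run python scripts/ingest_folder.py /path/to/your/docs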

@thisIsAditya

thisIsAditya commented Aug 8, 2024

I had to install pipx, Poetry, and Chocolatey on my Windows 11 machine.
Apart from poetry install --with ui,local (this command didn't work for me), all the commands worked.
Instead, this command worked for me: poetry install --extras "ui llms-llama-cpp vector-stores-qdrant embeddings-huggingface"

@diyaravishankar

I'm doing it under WSL, follow this guide and you'll have a reasonable starting base

Did you use the same commands he ran on Mac?
