
@ianscrivener
Created June 9, 2023 04:09

Test

  • try rebuilding llama-cpp-python and llama-cpp-python[server] with GPU support, copying ggml-metal.metal (for Metal GPU support) into the Python executable directory
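The copy step described above can be sketched as follows. The source path (`vendor/llama.cpp/ggml-metal.metal`) and the site-packages destination are assumptions about the llama-cpp-python checkout layout; adjust for your environment:

```python
# Sketch only — vendor/llama.cpp/ggml-metal.metal is an assumed location of
# the Metal shader inside the llama.cpp submodule, and the destination
# assumes llama_cpp lives in the active interpreter's site-packages.
import shutil
import sysconfig
from pathlib import Path

site_packages = Path(sysconfig.get_paths()["purelib"])
src = Path("vendor/llama.cpp/ggml-metal.metal")
dst = site_packages / "llama_cpp"

if src.exists():
    dst.mkdir(parents=True, exist_ok=True)
    shutil.copy(src, dst / src.name)
    print(f"copied {src} -> {dst / src.name}")
else:
    print(f"{src} not found; run this from the llama-cpp-python checkout")
```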

Environment

  • from previous test

Result

  • llama-cpp-python[server] FAILS
##################################
# remove previous pip package

pip uninstall llama-cpp-python -y
pip list | grep llama
# (no output — the package is gone)



##################################
# rebuild the pip package WITH Metal GPU support
# (force a CMake rebuild)

export LLAMA_METAL=1
export CMAKE_ARGS="-DLLAMA_METAL=on"
export FORCE_CMAKE=1

pip install -e .
pip install -e '.[server]'


##################################
# TEST RUN 
python3 -m llama_cpp.server --model $MODEL



##################################
# errors 😞

Traceback (most recent call last):
  File "/Users/ianscrivener/miniconda3-mac-silicon/envs/llama/lib/python3.9/runpy.py", line 188, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/Users/ianscrivener/miniconda3-mac-silicon/envs/llama/lib/python3.9/runpy.py", line 111, in _get_module_details
    __import__(pkg_name)
  File "/Users/ianscrivener/_AI/lcp4/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/Users/ianscrivener/_AI/lcp4/llama_cpp/llama_cpp.py", line 77, in <module>
    _lib = _load_shared_library(_lib_base_name)
  File "/Users/ianscrivener/_AI/lcp4/llama_cpp/llama_cpp.py", line 68, in _load_shared_library
    raise FileNotFoundError(
FileNotFoundError: Shared library with base name 'llama' not found
