
@imaurer
Created August 15, 2023 15:51
Install llama-cpp-python Error on M1
% CMAKE_ARGS='-DLLAMA_METAL=on' FORCE_CMAKE=1 \
llm install llama-cpp-python --force-reinstall --upgrade --no-cache-dir
Collecting llama-cpp-python
Downloading llama_cpp_python-0.1.77.tar.gz (1.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 4.4 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
Obtaining dependency information for typing-extensions>=4.5.0 from https://files.pythonhosted.org/packages/ec/6b/63cc3df74987c36fe26157ee12e09e8f9db4de771e0f3404263117e75b95/typing_extensions-4.7.1-py3-none-any.whl.metadata
Downloading typing_extensions-4.7.1-py3-none-any.whl.metadata (3.1 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python)
Obtaining dependency information for numpy>=1.20.0 from https://files.pythonhosted.org/packages/c3/ea/1d95b399078ecaa7b5d791e1fdbb3aee272077d9fd5fb499593c87dec5ea/numpy-1.25.2-cp310-cp310-macosx_11_0_arm64.whl.metadata
Downloading numpy-1.25.2-cp310-cp310-macosx_11_0_arm64.whl.metadata (5.6 kB)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
Downloading diskcache-5.6.1-py3-none-any.whl (45 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.6/45.6 kB 9.8 MB/s eta 0:00:00
Downloading numpy-1.25.2-cp310-cp310-macosx_11_0_arm64.whl (14.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.0/14.0 MB 5.8 MB/s eta 0:00:00
Downloading typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [150 lines of output]
--------------------------------------------------------------------------------
-- Trying 'Ninja' generator
--------------------------------
---------------------------
----------------------
-----------------
------------
-------
--
CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
Not searching for unused variables given on the command line.
-- The C compiler identification is AppleClang 14.0.3.14030022
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- The CXX compiler identification is AppleClang 14.0.3.14030022
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Configuring done (0.6s)
-- Generating done (0.0s)
-- Build files have been written to: /private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-install-wxlzxe_w/llama-cpp-python_629eaf7a364e49acbdc88e39459050d5/_cmake_test_compile/build
--
-------
------------
-----------------
----------------------
---------------------------
--------------------------------
-- Trying 'Ninja' generator - success
--------------------------------------------------------------------------------
Configuring Project
Working directory:
/private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-install-wxlzxe_w/llama-cpp-python_629eaf7a364e49acbdc88e39459050d5/_skbuild/macosx-13.0-arm64-3.10/cmake-build
Command:
/private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-build-env-2vej369l/overlay/lib/python3.10/site-packages/cmake/data/bin/cmake /private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-install-wxlzxe_w/llama-cpp-python_629eaf7a364e49acbdc88e39459050d5 -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=/private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-build-env-2vej369l/overlay/lib/python3.10/site-packages/ninja/data/bin/ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-install-wxlzxe_w/llama-cpp-python_629eaf7a364e49acbdc88e39459050d5/_skbuild/macosx-13.0-arm64-3.10/cmake-install -DPYTHON_VERSION_STRING:STRING=3.10.11 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-build-env-2vej369l/overlay/lib/python3.10/site-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/Users/ian/infer/nearshot/env/bin/python3.10 -DPYTHON_INCLUDE_DIR:PATH=/opt/homebrew/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/include/python3.10 -DPYTHON_LIBRARY:PATH=/opt/homebrew/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/lib/libpython3.10.dylib -DPython_EXECUTABLE:PATH=/Users/ian/infer/nearshot/env/bin/python3.10 -DPython_ROOT_DIR:PATH=/Users/ian/infer/nearshot/env -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/opt/homebrew/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/include/python3.10 -DPython3_EXECUTABLE:PATH=/Users/ian/infer/nearshot/env/bin/python3.10 -DPython3_ROOT_DIR:PATH=/Users/ian/infer/nearshot/env -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/opt/homebrew/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/include/python3.10 -DCMAKE_MAKE_PROGRAM:FILEPATH=/private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-build-env-2vej369l/overlay/lib/python3.10/site-packages/ninja/data/bin/ninja -DLLAMA_METAL=on -DCMAKE_OSX_DEPLOYMENT_TARGET:STRING=13.0 -DCMAKE_OSX_ARCHITECTURES:STRING=arm64 -DCMAKE_BUILD_TYPE:STRING=Release -DLLAMA_METAL=on
Not searching for unused variables given on the command line.
-- The C compiler identification is AppleClang 14.0.3.14030022
-- The CXX compiler identification is AppleClang 14.0.3.14030022
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.39.2 (Apple Git-143)")
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
CMake Warning at vendor/llama.cpp/CMakeLists.txt:116 (message):
Git repository not found; to enable automatic generation of build info,
make sure Git is installed and the project is a Git repository.
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Accelerate framework found
-- CMAKE_SYSTEM_PROCESSOR: arm64
-- ARM detected
CMake Warning (dev) at vendor/llama.cpp/CMakeLists.txt:537 (install):
Target llama has RESOURCE files but no RESOURCE DESTINATION.
This warning is for project developers. Use -Wno-dev to suppress it.
-- Configuring done (0.7s)
-- Generating done (0.0s)
-- Build files have been written to: /private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-install-wxlzxe_w/llama-cpp-python_629eaf7a364e49acbdc88e39459050d5/_skbuild/macosx-13.0-arm64-3.10/cmake-build
[1/8] Building C object vendor/llama.cpp/CMakeFiles/ggml.dir/k_quants.c.o
[2/8] Building C object vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o
[3/8] Building CXX object vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o
[4/8] Building C object vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o
[5/8] Linking C static library vendor/llama.cpp/libggml_static.a
[6/8] Linking CXX shared library vendor/llama.cpp/libllama.dylib
FAILED: vendor/llama.cpp/libllama.dylib
: && /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -O3 -DNDEBUG -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -mmacosx-version-min=13.0 -dynamiclib -Wl,-headerpad_max_install_names -L/opt/homebrew/lib -o vendor/llama.cpp/libllama.dylib -install_name @rpath/libllama.dylib vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o vendor/llama.cpp/CMakeFiles/ggml.dir/k_quants.c.o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -Xlinker -framework -Xlinker Accelerate -Xlinker -framework -Xlinker Foundation -Xlinker -framework -Xlinker Metal -Xlinker -framework -Xlinker MetalKit -Xlinker -framework -Xlinker MetalPerformanceShaders && :
Undefined symbols for architecture arm64:
"std::__1::basic_stringbuf<char, std::__1::char_traits<char>, std::__1::allocator<char>>::str() const", referenced from:
_llama_copy_state_data in llama.cpp.o
"std::__1::basic_filebuf<char, std::__1::char_traits<char>>::open(char const*, unsigned int)", referenced from:
std::__1::basic_ifstream<char, std::__1::char_traits<char>>::basic_ifstream(char const*, unsigned int) in llama.cpp.o
"std::__1::basic_filebuf<char, std::__1::char_traits<char>>::basic_filebuf()", referenced from:
std::__1::basic_ifstream<char, std::__1::char_traits<char>>::basic_ifstream(char const*, unsigned int) in llama.cpp.o
"std::__1::basic_filebuf<char, std::__1::char_traits<char>>::~basic_filebuf()", referenced from:
llama_apply_lora_from_file_internal(llama_model const&, char const*, char const*, int) in llama.cpp.o
std::__1::basic_ifstream<char, std::__1::char_traits<char>>::basic_ifstream(char const*, unsigned int) in llama.cpp.o
std::__1::basic_ifstream<char, std::__1::char_traits<char>>::~basic_ifstream() in llama.cpp.o
"std::__1::basic_stringbuf<char, std::__1::char_traits<char>, std::__1::allocator<char>>::str(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&)", referenced from:
_llama_set_state_data in llama.cpp.o
"VTT for std::__1::basic_ifstream<char, std::__1::char_traits<char>>", referenced from:
llama_apply_lora_from_file_internal(llama_model const&, char const*, char const*, int) in llama.cpp.o
std::__1::basic_ifstream<char, std::__1::char_traits<char>>::basic_ifstream(char const*, unsigned int) in llama.cpp.o
std::__1::basic_ifstream<char, std::__1::char_traits<char>>::~basic_ifstream() in llama.cpp.o
"VTT for std::__1::basic_stringstream<char, std::__1::char_traits<char>, std::__1::allocator<char>>", referenced from:
_llama_copy_state_data in llama.cpp.o
std::__1::basic_stringstream<char, std::__1::char_traits<char>, std::__1::allocator<char>>::basic_stringstream[abi:v15006]() in llama.cpp.o
std::__1::basic_stringstream<char, std::__1::char_traits<char>, std::__1::allocator<char>>::~basic_stringstream() in llama.cpp.o
_llama_set_state_data in llama.cpp.o
"vtable for std::__1::basic_ifstream<char, std::__1::char_traits<char>>", referenced from:
std::__1::basic_ifstream<char, std::__1::char_traits<char>>::basic_ifstream(char const*, unsigned int) in llama.cpp.o
NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
"vtable for std::__1::basic_stringbuf<char, std::__1::char_traits<char>, std::__1::allocator<char>>", referenced from:
_llama_copy_state_data in llama.cpp.o
std::__1::basic_stringstream<char, std::__1::char_traits<char>, std::__1::allocator<char>>::basic_stringstream[abi:v15006]() in llama.cpp.o
std::__1::basic_stringstream<char, std::__1::char_traits<char>, std::__1::allocator<char>>::~basic_stringstream() in llama.cpp.o
_llama_set_state_data in llama.cpp.o
NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
"vtable for std::__1::basic_stringstream<char, std::__1::char_traits<char>, std::__1::allocator<char>>", referenced from:
std::__1::basic_stringstream<char, std::__1::char_traits<char>, std::__1::allocator<char>>::basic_stringstream[abi:v15006]() in llama.cpp.o
NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
[7/8] Linking C shared library vendor/llama.cpp/libggml_shared.dylib
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-build-env-2vej369l/overlay/lib/python3.10/site-packages/skbuild/setuptools_wrap.py", line 674, in setup
cmkr.make(make_args, install_target=cmake_install_target, env=env)
File "/private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-build-env-2vej369l/overlay/lib/python3.10/site-packages/skbuild/cmaker.py", line 697, in make
self.make_impl(clargs=clargs, config=config, source_dir=source_dir, install_target=install_target, env=env)
File "/private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-build-env-2vej369l/overlay/lib/python3.10/site-packages/skbuild/cmaker.py", line 742, in make_impl
raise SKBuildError(msg)
An error occurred while building with CMake.
Command:
/private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-build-env-2vej369l/overlay/lib/python3.10/site-packages/cmake/data/bin/cmake --build . --target install --config Release --
Install target:
install
Source directory:
/private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-install-wxlzxe_w/llama-cpp-python_629eaf7a364e49acbdc88e39459050d5
Working directory:
/private/var/folders/pk/5nxbk5c50fg_nqc136tw1ncc0000gn/T/pip-install-wxlzxe_w/llama-cpp-python_629eaf7a364e49acbdc88e39459050d5/_skbuild/macosx-13.0-arm64-3.10/cmake-build
Please check the install target is valid and see CMake's output for more information.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
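For anyone hitting the same wall: the failing step is the final link of libllama.dylib, and every missing symbol is a libc++ iostream symbol (for example the by-value basic_stringbuf::str() const, which only exists in recent libc++ releases). That pattern usually points at a toolchain or SDK mismatch rather than at llama.cpp itself. The exact fix for this machine was never recorded (see the comments below), so the following is only a sketch of plausible workarounds, not a confirmed solution; pinning MACOSX_DEPLOYMENT_TARGET and clearing LDFLAGS are assumptions about where the stale settings come from.

# Sketch only; none of these steps were confirmed on this machine.

# 1. Update the Apple toolchain, since the missing str()/filebuf symbols
#    are recent libc++ additions:
% softwareupdate --install -a        # or update Xcode / Command Line Tools
% sudo xcode-select --switch /Applications/Xcode.app

# 2. Pin the deployment target to the SDK shown in the log (MacOSX13.3.sdk)
#    instead of the 13.0 minimum the build defaulted to:
% MACOSX_DEPLOYMENT_TARGET=13.3 CMAKE_ARGS='-DLLAMA_METAL=on' FORCE_CMAKE=1 \
  llm install llama-cpp-python --force-reinstall --upgrade --no-cache-dir

# 3. Drop the Homebrew library path (the -L/opt/homebrew/lib in the failing
#    link command above) in case an older libc++ is being resolved from there:
% LDFLAGS='' MACOSX_DEPLOYMENT_TARGET=13.3 CMAKE_ARGS='-DLLAMA_METAL=on' FORCE_CMAKE=1 \
  llm install llama-cpp-python --force-reinstall --upgrade --no-cache-dir

If one of these works, the deciding factor is most likely which libc++ the linker resolves first, so checking the link command's -L paths is a cheap first diagnostic.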
@ayushkalani

Did you find the fix for this issue?

@imaurer (Author) commented Mar 28, 2024

Sorry, I don't remember at this point. I haven't been using llama-cpp at all recently.
