
@divchenko
Created October 20, 2023 06:56
Save divchenko/4bcd575954f3a6b5e3350fd2d2762002 to your computer and use it in GitHub Desktop.
[notice] A new release of pip is available: 23.2.1 -> 23.3
[notice] To update, run: python3 -m pip install --upgrade pip
-- The CXX compiler identification is GNU 11.4.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- NVTX is disabled
-- Importing batch manager
-- Building PyTorch
-- Building Google tests
-- Building benchmarks
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - /usr/local/cuda/bin/nvcc
-- CUDA compiler: /usr/local/cuda/bin/nvcc
-- GPU architectures: 80-real;90-real
-- The CUDA compiler identification is NVIDIA 12.2.128
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Found CUDAToolkit: /usr/local/cuda/include (found version "12.2.128")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- ========================= Importing and creating target nvinfer ==========================
-- Looking for library nvinfer
-- Library that was found /usr/local/tensorrt/targets/x86_64-linux-gnu/lib/libnvinfer.so
-- ==========================================================================================
-- ========================= Importing and creating target nvuffparser ==========================
-- Looking for library nvparsers
-- Library that was found nvparsers_LIB_PATH-NOTFOUND
-- ==========================================================================================
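The `nvparsers_LIB_PATH-NOTFOUND` line above means CMake searched the TensorRT library directory and did not find `libnvparsers` (the legacy UFF/Caffe parser library). A quick hedged check, reusing the `TRT_LIB_DIR` path that appears later in this log, is to list the directory directly; the echoed fallback message is my own wording, not CMake output:

```shell
# Check whether libnvparsers exists in the TensorRT lib dir from this log.
# If the directory is absent or contains no nvparsers library, print a note.
TRT_LIB_DIR=/usr/local/tensorrt/targets/x86_64-linux-gnu/lib
ls "$TRT_LIB_DIR" 2>/dev/null | grep -i nvparsers \
  || echo "libnvparsers not present in $TRT_LIB_DIR"
```

Note that this `NOTFOUND` is reported as a status message, not an error; the configure run continues past it and fails later on the PyTorch check instead.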
-- CUDAToolkit_VERSION 12.2 is greater or equal than 11.0, enable -DENABLE_BF16 flag
-- CUDAToolkit_VERSION 12.2 is greater or equal than 11.8, enable -DENABLE_FP8 flag
-- Found MPI_CXX: /opt/hpcx/ompi/lib/libmpi.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- COMMON_HEADER_DIRS: /code/tensorrt_llm/cpp;/usr/local/cuda/include
-- TORCH_CUDA_ARCH_LIST: 8.0;9.0
CMake Warning at CMakeLists.txt:248 (message):
  Ignoring environment variable TORCH_CUDA_ARCH_LIST=5.2 6.0 6.1 7.0 7.5 8.0
  8.6 9.0+PTX
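The warning above says this build's CMake discards the `TORCH_CUDA_ARCH_LIST` environment variable in favor of its own list (`8.0;9.0`, matching the `80-real;90-real` GPU architectures). A hedged way to silence the warning, assuming the variable really is unused here, is simply to unset it before configuring:

```shell
# Unset the ignored variable so CMake stops warning about it; this does not
# change the architecture list, which comes from the build configuration.
unset TORCH_CUDA_ARCH_LIST
echo "TORCH_CUDA_ARCH_LIST=${TORCH_CUDA_ARCH_LIST:-unset}"
# prints: TORCH_CUDA_ARCH_LIST=unset
```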
-- Found Python3: /usr/bin/python3.10 (found version "3.10.12") found components: Interpreter Development Development.Module Development.Embed
-- Found Python executable at /usr/bin/python3.10
-- Found Python libraries at /usr/lib/x86_64-linux-gnu
CMake Error at CMakeLists.txt:268 (message):
  PyTorch >= 1.5.0 is needed for TorchScript mode.
-- Configuring incomplete, errors occurred!
See also "/code/tensorrt_llm/cpp/build/CMakeFiles/CMakeOutput.log".
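This is the actual failure: the CMake check at `CMakeLists.txt:268` could not find a usable PyTorch >= 1.5.0. A hedged first diagnostic, assuming the interpreter in question is the `/usr/bin/python3.10` that CMake reported finding, is to test whether `torch` is importable from it at all and what version it reports. The message in the `except` branch is my own wording:

```shell
# Probe the interpreter CMake found: an absent torch (or one older than
# 1.5.0) would make the TorchScript-mode check above fail exactly like this.
python3 - <<'EOF'
try:
    import torch
    print("torch", torch.__version__)
except ImportError:
    print("torch is not importable from this interpreter")
EOF
```

If `torch` is missing, installing a CUDA-enabled PyTorch into that same interpreter before rerunning the build is the likely fix; the exact package source depends on the environment and is not shown in this log.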
Traceback (most recent call last):
  File "/code/tensorrt_llm/./scripts/build_wheel.py", line 248, in <module>
    main(**vars(args))
  File "/code/tensorrt_llm/./scripts/build_wheel.py", line 149, in main
    build_run(
  File "/usr/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'cmake -DCMAKE_BUILD_TYPE="Release" -DBUILD_PYT="ON" "-DCMAKE_CUDA_ARCHITECTURES=80-real;90-real" -DTRT_LIB_DIR=/usr/local/tensorrt/targets/x86_64-linux-gnu/lib -DTRT_INCLUDE_DIR=/usr/local/tensorrt/include -S "/code/tensorrt_llm/cpp"' returned non-zero exit status 1.