@dusty-nv
Last active October 21, 2021 01:28
Install procedure for pyTorch on NVIDIA Jetson TX1/TX2 with JetPack <= 3.2.1. For JetPack 4.2 and Xavier/Nano/TX2, see https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/
#!/bin/bash
#
# EDIT: this script is outdated, please see https://forums.developer.nvidia.com/t/pytorch-for-jetson-nano-version-1-6-0-now-available
#
sudo apt-get install python-pip
# upgrade pip
pip install -U pip
pip --version
# pip 9.0.1 from /home/ubuntu/.local/lib/python2.7/site-packages (python 2.7)
# clone pyTorch repo
git clone https://github.com/pytorch/pytorch
cd pytorch
git submodule update --init
# install prereqs
sudo pip install -U setuptools
sudo pip install -r requirements.txt
# Develop Mode:
python setup.py build_deps
sudo python setup.py develop
# Install Mode: (substitute for Develop Mode commands)
#sudo python setup.py install
# Verify CUDA (from python interactive terminal)
# import torch
# print(torch.__version__)
# print(torch.cuda.is_available())
# a = torch.cuda.FloatTensor(2)
# print(a)
# b = torch.randn(2).cuda()
# print(b)
# c = a + b
# print(c)
@JunhongXu

JunhongXu commented Oct 18, 2017

I am installing on Jetson TX1 with Ubuntu 16.04 and Jetpack 2.4.

Everything worked fine until sudo python setup.py develop. Running that command gave a "CuDNN version is 5, not 6" error, so I set WITH_CUDNN=False in setup.py, and it then installed without errors.

However, after installing, when I run import torch I get this error message:

import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/media/ubuntu/storage/pytorch/torch/__init__.py", line 53, in <module>
    from torch._C import *
ImportError: /media/ubuntu/storage/pytorch/torch/_C.so: undefined symbol: _ZN4gloo13EnforceNotMetC1EPKciS2_RKSs

I am importing pytorch outside of pytorch directory.

----------------------Edit-------------------------------------

It seems that the distributed package added to PyTorch cannot be found in _C.so. I further set
WITH_DISTRIBUTED_MW = False and WITH_DISTRIBUTED = False, and then I could import torch without error.

Is there any way to use the distributed package of PyTorch on the Jetson TX1? Or are there more packages I need to install?
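The flag edits described above can be scripted with sed. Here is a hedged sketch on a scratch file (the flag names come from the comment above; the real edit would target these assignments in pytorch's setup.py):

```shell
# Hypothetical demo on a scratch copy; apply the same substitution to
# pytorch's setup.py before building.
printf 'WITH_DISTRIBUTED = True\nWITH_DISTRIBUTED_MW = True\n' > /tmp/flags_demo.py
# Flip both WITH_DISTRIBUTED* assignments from True to False in place.
sed -i 's/^\(WITH_DISTRIBUTED\(_MW\)\? *= *\)True/\1False/' /tmp/flags_demo.py
cat /tmp/flags_demo.py
```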

@dthboyd

dthboyd commented Nov 13, 2017

@sauhaardac please share

@YogeshShitole

YogeshShitole commented Dec 1, 2017

Running pytorch_jetson_install.sh on a Jetson TX2 with Ubuntu 16.04 in Develop Mode gives the error below. Can someone help with this?
error: [Errno 2] No such file or directory: '/home/ubuntu/pytorch/torch/lib/tmp_install/THD_deps.txt'

When I run it in Install Mode, it throws this error:
torch/lib/build_libs.sh: line 124: cmake: command not found

I am also getting this warning:
The directory '/home/ubuntu/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag

@blink1747

@YogeshShitole,
There is an error at line 29 (python setup.py build_deps) of "pytorch_jetson_install.sh", which is responsible for your error message.

@blink1747

@YogeshShitole,
Running the commands below fixed the cmake issue for me:
sudo add-apt-repository ppa:george-edison55/cmake-3.x
sudo apt-get update
sudo apt-get install cmake

@blink1747

Has anyone solved the following issue:
`/home/ubuntu/pytorch/aten/src/ATen/cudnn/cudnn-wrapper.h:10:2: error: #error "CuDNN version not supported"
#error "CuDNN version not supported"
^
CMake Error at ATen_generated_NativeFunctionsCuda.cu.o.cmake:207 (message):
Error generating
/home/ubuntu/pytorch/torch/lib/build/aten/src/ATen/CMakeFiles/ATen.dir/native/cuda/./ATen_generated_NativeFunctionsCuda.cu.o

src/ATen/CMakeFiles/ATen.dir/build.make:71019: recipe for target 'src/ATen/CMakeFiles/ATen.dir/native/cuda/ATen_generated_NativeFunctionsCuda.cu.o' failed
make[2]: *** [src/ATen/CMakeFiles/ATen.dir/native/cuda/ATen_generated_NativeFunctionsCuda.cu.o] Error 1
CMakeFiles/Makefile2:226: recipe for target 'src/ATen/CMakeFiles/ATen.dir/all' failed
make[1]: *** [src/ATen/CMakeFiles/ATen.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2`

@justanotherminh

Anyone else getting this error at around the end of build_deps? /usr/local/cuda/lib64/libcudnn.so: error adding symbols: File in wrong format

@Hunterhal

Hello, I tried to install on a Jetson TX1 but the build stops with a segmentation error. I tracked it down: the build exceeds the available RAM. Any suggestions?

@derAtomkeks

@Hunterhal Hmm, for me the build succeeds on the TX2 but fails on the TX1 as well. It seems to be a memory error, as gcc exits with error code 4. There is a tutorial on jetsonhacks.com that explains how to set up a swap file; I hope that helps. I'll try it myself next week and post updates here.

@dusty-nv
Author

@Hunterhal @derAtomkeks, it is due to the TX1 having 4GB of memory (vs. 8GB on the TX2), so swap is needed. Alternatively, you can build a wheel on a TX2 and install it on a TX1 running the same JetPack.

@thatwist

thatwist commented Mar 20, 2018

When building on a TX2 with cuDNN 6, CUDA 8, and GCC 5.4, I got the following during the ATen build phase:

[ 47%] Building CXX object src/ATen/CMakeFiles/ATen.dir/__/TH/THVector.cpp.o
[ 48%] Building CXX object src/ATen/CMakeFiles/ATen.dir/__/THNN/init.cpp.o
[ 48%] Building CXX object src/ATen/CMakeFiles/ATen.dir/__/THS/THSTensor.cpp.o
[ 48%] Building CXX object src/ATen/CMakeFiles/ATen.dir/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp.o
c++: error: unrecognized command line option ‘-mavx2’
src/ATen/CMakeFiles/ATen.dir/build.make:81805: recipe for target 'src/ATen/CMakeFiles/ATen.dir/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp.o' failed
make[2]: *** [src/ATen/CMakeFiles/ATen.dir/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp.o] Error 1

although the GCC 5.4 docs say it supports -mavx2 (AVX is an x86 extension, so the ARM compiler rejects it).
UPD: I removed the -mavx and -mavx2 options from the build and it succeeded.

@Kowasaki

Kowasaki commented Mar 20, 2018

@thatwist I am facing the exact same problem. Can you show me how you removed it? Thanks!

EDIT: I found it. For those wondering: in pytorch/aten/src/ATen/CMakeLists.txt, change the line LIST(APPEND CPU_CAPABILITY_FLAGS "-O3" "-O3 -mavx" "-O3 -mavx2") to LIST(APPEND CPU_CAPABILITY_FLAGS "-O3" "-O3" "-O3")

@thatwist

thatwist commented Mar 22, 2018

@Kowasaki I just used grep and sed to remove all the -mavx2 and -mavx strings, something like
grep -rl "\-mavx2" * | xargs sed -i "s/-mavx2//g"
and then
grep -rl "\-mavx" * | xargs sed -i "s/-mavx//g"

@dusty-nv
Author

You may be interested in this script from the jetson-reinforcement repo, which is kept updated:

https://github.com/dusty-nv/jetson-reinforcement/blob/master/CMakePreBuild.sh

It contains more than just PyTorch, but the PyTorch install works on a TX2 with JetPack 3.2.

@felixendres

Like @Hunterhal and @derAtomkeks, I ran into memory issues on the TX1 during sudo python setup.py develop, with

aarch64-linux-gnu-gcc: internal compiler error: Killed (program cc1plus)

I worked around this by pausing the parallel compiler processes with
for pid in $(pidof cc1plus); do echo $pid; sudo kill -sigstop $pid; done
then resuming two of them right away with sudo kill -sigcont <printed-pid>, and the other two later, once the remaining compilations were done.
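The stop/continue trick above can be demonstrated on a harmless sleep process; on the Jetson, substitute $(pidof cc1plus) for the PID and prefix kill with sudo:

```shell
# Sketch of pausing and resuming a process with job-control signals.
sleep 60 &
pid=$!
kill -STOP "$pid"              # pause: the process keeps its memory but
                               # stops running while other jobs finish
sleep 1                        # give the kernel a moment to update the state
state=$(ps -o stat= -p "$pid")
echo "state: $state"           # a stopped process reports state T
kill -CONT "$pid"              # resume once memory pressure has dropped
kill "$pid"                    # clean up the demo process
```

Pausing does not free the memory cc1plus has already allocated; it just stops the paused compilers from growing while the running ones finish and release theirs.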

After compilation I got the message

WARNING: 'develop' is not building C++ code incrementally
because ninja is not installed. Run this to enable it:
pip install ninja

I tried that, but it failed with some other error. Maybe it would have allowed just re-triggering the compilation repeatedly?

@idavis

idavis commented Aug 3, 2018

I used python3 setup.py bdist_wheel and got the same cc1plus error. I solved it by allocating a 4GB swap file, which allowed the build to complete.
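A minimal sketch of setting up such a swap file: it writes the commands to a small script you can review and then run with sudo (the file path and the 4GB size are assumptions, not from this thread):

```shell
# Hypothetical helper: generate a swap-setup script for later review.
SWAP_FILE=/mnt/swapfile
cat > /tmp/make_swap.sh <<EOF
fallocate -l 4G ${SWAP_FILE}
chmod 600 ${SWAP_FILE}
mkswap ${SWAP_FILE}
swapon ${SWAP_FILE}
EOF
cat /tmp/make_swap.sh
```

Then run the generated script with sudo on the Jetson; the jetsonhacks.com tutorial mentioned earlier in the thread also covers making the swap entry permanent in /etc/fstab.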

@abhanjac

abhanjac commented Aug 8, 2018

I am trying to install PyTorch from source on an Odroid XU4 and hitting the following error. The build goes up to 97% and then breaks.
Can anyone tell me how to fix this?

[ 97%] Linking CXX executable ../../bin/test_jit
[ 97%] Linking CXX executable ../../bin/test_api
../../lib/libtorch.so.1: undefined reference to `dlclose'
../../lib/libtorch.so.1: undefined reference to `dlsym'
../../lib/libtorch.so.1: undefined reference to `dlopen'
../../lib/libtorch.so.1: undefined reference to `dlerror'
collect2: error: ld returned 1 exit status
caffe2/torch/CMakeFiles/test_jit.dir/build.make:97: recipe for target 'bin/test_jit' failed
make[2]: *** [bin/test_jit] Error 1
CMakeFiles/Makefile2:2493: recipe for target 'caffe2/torch/CMakeFiles/test_jit.dir/all' failed
make[1]: *** [caffe2/torch/CMakeFiles/test_jit.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
../../lib/libtorch.so.1: undefined reference to `dlclose'
../../lib/libtorch.so.1: undefined reference to `dlsym'
../../lib/libtorch.so.1: undefined reference to `dlopen'
../../lib/libtorch.so.1: undefined reference to `dlerror'
collect2: error: ld returned 1 exit status
caffe2/torch/CMakeFiles/test_api.dir/build.make:513: recipe for target 'bin/test_api' failed
make[2]: *** [bin/test_api] Error 1
CMakeFiles/Makefile2:2533: recipe for target 'caffe2/torch/CMakeFiles/test_api.dir/all' failed
make[1]: *** [caffe2/torch/CMakeFiles/test_api.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Failed to run 'bash tools/build_pytorch_libs.sh --use-nnpack caffe2 nanopb libshm gloo THD'
odroid@odroid:~/pytorch$

@dan9thsense

Worked like a charm on Jetson TX2 dev kit with Ubuntu 16.04.
Thanks for providing this script; it's outstanding!

@syedmohsinbukhari

Thanks !

@cshreyastech

I am trying this on a TX2 and hit the error below. Has anyone seen this?
running build_ext
-- NumPy not found
-- Detected cuDNN at /usr/lib/aarch64-linux-gnu/libcudnn.so.7, /usr/include/
-- Not using MIOpen
-- Detected CUDA at /usr/local/cuda
-- Not using MKLDNN
-- Building NCCL library
-- Building with THD distributed package
-- Building with c10d distributed package
Traceback (most recent call last):
  File "setup.py", line 1232, in <module>
    rel_site_packages + '/caffe2/**/*.py'
  File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
    dist.run_commands()
  File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "setup.py", line 523, in run
    setuptools.command.develop.develop.run(self)
  File "/usr/lib/python2.7/dist-packages/setuptools/command/develop.py", line 34, in run
    self.install_for_development()
  File "/usr/lib/python2.7/dist-packages/setuptools/command/develop.py", line 119, in install_for_development
    self.run_command('build_ext')
  File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "setup.py", line 619, in run
    generate_code(ninja_global)
  File "/home/nvidia/pytorch/tools/setup_helpers/generate_code.py", line 84, in generate_code
    from tools.autograd.gen_autograd import gen_autograd
  File "/home/nvidia/pytorch/tools/autograd/gen_autograd.py", line 16, in <module>
    from .utils import YamlLoader, split_name_params
  File "/home/nvidia/pytorch/tools/autograd/utils.py", line 14, in <module>
    from tools.shared.module_loader import import_module
  File "/home/nvidia/pytorch/tools/shared/__init__.py", line 2, in <module>
    from .cwrap_common import set_declaration_defaults, \
ImportError: No module named cwrap_common

@jreindel

jreindel commented Apr 2, 2019

Hi there,
I am trying this on Jetson TX2 and everything completes, but then when I run the test commands I get this:

sudo python3

Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.1.0a0+929258a
>>> print(torch.cuda.is_available())
True
>>> a = torch.cuda.FloatTensor(2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: CUDA error: unknown error

I am running with the newest edition of L4T just released last month:
L4T 32.1
Ubuntu 18.04
Cuda 10.0 (V10.0.166)
Python 3.6.7

I am wondering if this may be a versioning issue. I did some searching for that error but, since it carries no real information ("unknown error"), the results were unhelpful. Some others (Linux-wide, not Tegra specifically) experienced this issue after upgrading from CUDA 8 to CUDA 9 and had to recompile PyTorch with CUDA 9, and I noticed others above noting being on CUDA 9. So I wonder if the issue is that I am on CUDA 10, although PyTorch was compiled with the CUDA version on my Jetson.

@MaazJamal

Is it supposed to kill so many processes? I do not have swap on my Jetson TX2; should I add it? Also, the install is failing with the error "Error: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-2edf12aa/". I have upgraded setuptools and installed ezinstall, but it still gives this error.

@pantelismyr

Hi,
I am trying to install PyTorch v1 on a TX2 with JetPack 3.3 and I'm getting the following error after running the command $ python setup.py install:

Makefile:68: recipe for target '/home/nvidia/Desktop/pytorch/build/nccl/obj/collectives/device/devlink.o' failed
make[5]: *** [/home/nvidia/Desktop/pytorch/build/nccl/obj/collectives/device/devlink.o] Error 255
Makefile:44: recipe for target '/home/nvidia/Desktop/pytorch/build/nccl/obj/collectives/device/colldevice.a' failed
make[4]: *** [/home/nvidia/Desktop/pytorch/build/nccl/obj/collectives/device/colldevice.a] Error 2
Makefile:25: recipe for target 'src.build' failed
make[3]: *** [src.build] Error 2
CMakeFiles/nccl_external.dir/build.make:110: recipe for target 'nccl_external-prefix/src/nccl_external-stamp/nccl_external-build' failed
make[2]: *** [nccl_external-prefix/src/nccl_external-stamp/nccl_external-build] Error 2
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/nccl_external.dir/all' failed
make[1]: *** [CMakeFiles/nccl_external.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
  File "setup.py", line 749, in <module>
    build_deps()
  File "setup.py", line 323, in build_deps
    cmake=cmake)
  File "/home/nvidia/Desktop/pytorch/tools/build_pytorch_libs.py", line 64, in build_caffe2
    cmake.build(my_env)
  File "/home/nvidia/Desktop/pytorch/tools/setup_helpers/cmake.py", line 345, in build
    self.run(build_args, my_env)
  File "/home/nvidia/Desktop/pytorch/tools/setup_helpers/cmake.py", line 107, in run
    check_call(command, cwd=self.build_dir, env=env)
  File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '6']' returned non-zero exit status 2

I also tried to run python setup.py build_deps, but I get error: invalid command 'build_deps'.

Do you know how could I solve it?

Thanks a lot!

@saracloud

Hi,
Sara here.

On a Jetson TX2, I followed the steps listed in the script, but build_deps fails with the error pasted below. Any input would be appreciated, thanks.

python setup.py build_deps
Building wheel torch-1.2.0a0+ec57d92
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help

error: invalid command 'build_deps'

@dusty-nv
Author

dusty-nv commented Jul 17, 2019 via email

@attiladoor

attiladoor commented Nov 22, 2019

Hi,
I am trying to install PyTorch v1 on a TX2 with JetPack 3.3 and I'm getting the following error after running the command $ python setup.py install:
....
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '6']' returned non-zero exit status 2
I also tried to run python setup.py build_deps, but I get error: invalid command 'build_deps'.

Do you know how could I solve it?

Thanks a lot!

I found the following comment in setup.py and it solved my issue (I had the same one):

It is no longer necessary to use the 'build' or 'rebuild' targets

To install:
  $ python setup.py install
To develop locally:
  $ python setup.py develop
To force cmake to re-generate native build files (off by default):
  $ python setup.py develop --cmake

@dusty-nv
Author

dusty-nv commented Nov 22, 2019 via email

@Bfzanchetta

Hey @dusty-nv, it seems that the latest NCCL release, 2.6.4.1, recognizes ARM CPUs. I'm currently attempting to install it on my Jetson TX2, since I have been wanting this for some time. One warning, though: some scripts on the master branch of the nccl git are committed with messages from previous releases, which is a yellow flag. If I do get it working, I intend to release all my configuration and installation scripts for the community. I'll let you know.

Have a good one.

@snowcrumble

git submodule update --init --recursive

@pmushidi2

Hi everyone, I am having a similar issue and need some assistance. I am trying to install PyTorch on the Odroid XU4 board but get the following error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’. Below is the error output directly from the terminal.
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c: In function ‘xnn_qs8_gemm_minmax_fp32_ukernel_1x8c4__neondot’:
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:91:17: error: redefinition of ‘vproduct0x0123’
91 | float32x4_t vproduct0x0123 = vmulq_f32(vproduct0x0123, vscale);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:88:17: note: previous definition of ‘vproduct0x0123’ was here
88 | float32x4_t vproduct0x0123 = vcvtq_f32_s32(vacc0x0123);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:92:17: error: redefinition of ‘vproduct0x4567’
92 | float32x4_t vproduct0x4567 = vmulq_f32(vproduct0x4567, vscale);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:89:17: note: previous definition of ‘vproduct0x4567’ was here
89 | float32x4_t vproduct0x4567 = vcvtq_f32_s32(vacc0x4567);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:94:18: warning: implicit declaration of function ‘vcvtnq_s32_f32’; did you mean ‘vcvtq_s32_f32’? [-Wimplicit-function-declaration]
94 | vacc0x0123 = vcvtnq_s32_f32(vproduct0x0123);
| ^~~~~~~~~~~~~~
| vcvtq_s32_f32
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:94:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:95:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
95 | vacc0x4567 = vcvtnq_s32_f32(vproduct0x4567);
| ^~~~~~~~~~~~~~
[ 60%] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c.o
make[2]: *** [confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/build.make:15598: confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c.o] Error 1
make[2]: *** Waiting for unfinished jobs....
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c: In function ‘xnn_qs8_gemm_minmax_fp32_ukernel_1x16c4__neondot’:
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c:112:18: warning: implicit declaration of function ‘vcvtnq_s32_f32’; did you mean ‘vcvtq_s32_f32’? [-Wimplicit-function-declaration]
112 | vacc0x0123 = vcvtnq_s32_f32(vproduct0x0123);
| ^~~~~~~~~~~~~~
| vcvtq_s32_f32
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c:112:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c:113:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
113 | vacc0x4567 = vcvtnq_s32_f32(vproduct0x4567);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c:114:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
114 | vacc0x89AB = vcvtnq_s32_f32(vproduct0x89AB);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c:115:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
115 | vacc0xCDEF = vcvtnq_s32_f32(vproduct0xCDEF);
| ^~~~~~~~~~~~~~
make[2]: *** [confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/build.make:15624: confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c.o] Error 1
/tmp/cc3aAMfl.s: Assembler messages:
/tmp/cc3aAMfl.s:73: Error: selected processor does not support `vsdot.s8 q9,q10,d7[0]' in ARM mode
/tmp/cc3aAMfl.s:76: Error: selected processor does not support `vsdot.s8 q9,q10,d7[1]' in ARM mode
/tmp/cc3aAMfl.s:78: Error: selected processor does not support `vsdot.s8 q8,q10,d7[0]' in ARM mode
/tmp/cc3aAMfl.s:80: Error: selected processor does not support `vsdot.s8 q8,q10,d7[1]' in ARM mode
/tmp/cc3aAMfl.s:152: Error: selected processor does not support `vsdot.s8 q9,q10,d7[0]' in ARM mode
/tmp/cc3aAMfl.s:154: Error: selected processor does not support `vsdot.s8 q8,q10,d7[0]' in ARM mode
make[2]: *** [confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/build.make:15611: confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/qs8-gemm/gen/1x8c4-minmax-gemmlowp-neondot.c.o] Error 1
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c: In function ‘xnn_qs8_gemm_minmax_fp32_ukernel_4x8c4__neondot’:
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:154:18: warning: implicit declaration of function ‘vcvtnq_s32_f32’; did you mean ‘vcvtq_s32_f32’? [-Wimplicit-function-declaration]
154 | vacc0x0123 = vcvtnq_s32_f32(vproduct0x0123);
| ^~~~~~~~~~~~~~
| vcvtq_s32_f32
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:154:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:155:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
155 | vacc0x4567 = vcvtnq_s32_f32(vproduct0x4567);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:156:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
156 | vacc1x0123 = vcvtnq_s32_f32(vproduct1x0123);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:157:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
157 | vacc1x4567 = vcvtnq_s32_f32(vproduct1x4567);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:158:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
158 | vacc2x0123 = vcvtnq_s32_f32(vproduct2x0123);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:159:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
159 | vacc2x4567 = vcvtnq_s32_f32(vproduct2x4567);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:160:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
160 | vacc3x0123 = vcvtnq_s32_f32(vproduct3x0123);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:161:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
161 | vacc3x4567 = vcvtnq_s32_f32(vproduct3x4567);
| ^~~~~~~~~~~~~~
make[2]: *** [confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/build.make:15650: confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c.o] Error 1
/tmp/ccB1jg1m.s: Assembler messages:
/tmp/ccB1jg1m.s:80: Error: selected processor does not support `vsdot.s8 q12,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:83: Error: selected processor does not support `vsdot.s8 q12,q9,d7[1]' in ARM mode
/tmp/ccB1jg1m.s:86: Error: selected processor does not support `vsdot.s8 q11,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:89: Error: selected processor does not support `vsdot.s8 q11,q9,d7[1]' in ARM mode
/tmp/ccB1jg1m.s:92: Error: selected processor does not support `vsdot.s8 q10,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:94: Error: selected processor does not support `vsdot.s8 q10,q9,d7[1]' in ARM mode
/tmp/ccB1jg1m.s:96: Error: selected processor does not support `vsdot.s8 q8,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:98: Error: selected processor does not support `vsdot.s8 q8,q9,d7[1]' in ARM mode
/tmp/ccB1jg1m.s:196: Error: selected processor does not support `vsdot.s8 q12,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:198: Error: selected processor does not support `vsdot.s8 q10,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:200: Error: selected processor does not support `vsdot.s8 q11,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:202: Error: selected processor does not support `vsdot.s8 q8,q9,d7[0]' in ARM mode
make[2]: *** [confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/build.make:15637: confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/qs8-gemm/gen/1x16c4-minmax-gemmlowp-neondot.c.o] Error 1
make[1]: *** [CMakeFil
