@VladislavZavadskyy
Created November 21, 2017 09:39
running build
running build_deps
-- Configuring done
-- Generating done
-- Build files have been written to: /home/vladislav/Desktop/pytorch/torch/lib/build/nccl
[100%] Generating lib/libnccl.so
Compiling src/all_reduce.cu > /home/vladislav/Desktop/pytorch/torch/lib/build/nccl/obj/all_reduce.o
Compiling src/reduce.cu > /home/vladislav/Desktop/pytorch/torch/lib/build/nccl/obj/reduce.o
Compiling src/reduce_scatter.cu > /home/vladislav/Desktop/pytorch/torch/lib/build/nccl/obj/reduce_scatter.o
ptxas warning : Too big maxrregcount value specified 96, will be ignored
ptxas warning : Too big maxrregcount value specified 96, will be ignored
ptxas warning : Too big maxrregcount value specified 96, will be ignored
Linking libnccl.so.1.3.5 > /home/vladislav/Desktop/pytorch/torch/lib/build/nccl/lib/libnccl.so.1.3.5
Archiving libnccl_static.a > /home/vladislav/Desktop/pytorch/torch/lib/build/nccl/lib/libnccl_static.a
[100%] Built target nccl
Install the project...
-- Install configuration: "Release"
-- Installing: /home/vladislav/Desktop/pytorch/torch/lib/tmp_install/include/nccl.h
-- The C compiler identification is GNU 6.4.0
-- The CXX compiler identification is GNU 6.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found CUDA: /usr/local/cuda (found suitable version "9.0", minimum required is "5.5")
-- Autodetected CUDA architecture(s): 3.0 3.0 3.0
-- Found CUDA with FP16 support, compiling with torch.CudaHalfTensor
-- Removing -DNDEBUG from compile flags
CMake Warning (dev) at /home/vladislav/.conda/envs/ds/share/cmake-3.9/Modules/FindOpenMP.cmake:200 (if):
Policy CMP0054 is not set: Only interpret if() arguments as variables or
keywords when unquoted. Run "cmake --help-policy CMP0054" for policy
details. Use the cmake_policy command to set the policy and suppress this
warning.
Quoted variables like "c" will no longer be dereferenced when the policy is
set to NEW. Since the policy is not set the OLD behavior will be used.
Call Stack (most recent call first):
/home/vladislav/.conda/envs/ds/share/cmake-3.9/Modules/FindOpenMP.cmake:324 (_OPENMP_GET_FLAGS)
CMakeLists.txt:130 (FIND_PACKAGE)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Compiling with OpenMP support
-- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - True
-- Compiling with MAGMA support
-- MAGMA INCLUDE DIRECTORIES: /home/vladislav/.conda/envs/ds/include
-- MAGMA LIBRARIES: /home/vladislav/.conda/envs/ds/lib/libmagma.a
-- MAGMA V2 check: 1
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Looking for cpuid.h
-- Looking for cpuid.h - found
-- Performing Test HAVE_GCC_GET_CPUID
-- Performing Test HAVE_GCC_GET_CPUID - Success
-- Performing Test NO_GCC_EBX_FPIC_BUG
-- Performing Test NO_GCC_EBX_FPIC_BUG - Success
-- Performing Test C_HAS_SSE1_1
-- Performing Test C_HAS_SSE1_1 - Success
-- Performing Test C_HAS_SSE2_1
-- Performing Test C_HAS_SSE2_1 - Success
-- Performing Test C_HAS_SSE3_1
-- Performing Test C_HAS_SSE3_1 - Failed
-- Performing Test C_HAS_SSE3_2
-- Performing Test C_HAS_SSE3_2 - Success
-- Performing Test C_HAS_SSE4_1_1
-- Performing Test C_HAS_SSE4_1_1 - Failed
-- Performing Test C_HAS_SSE4_1_2
-- Performing Test C_HAS_SSE4_1_2 - Success
-- Performing Test C_HAS_SSE4_2_1
-- Performing Test C_HAS_SSE4_2_1 - Failed
-- Performing Test C_HAS_SSE4_2_2
-- Performing Test C_HAS_SSE4_2_2 - Success
-- Performing Test C_HAS_AVX_1
-- Performing Test C_HAS_AVX_1 - Failed
-- Performing Test C_HAS_AVX_2
-- Performing Test C_HAS_AVX_2 - Success
-- Performing Test C_HAS_AVX2_1
-- Performing Test C_HAS_AVX2_1 - Failed
-- Performing Test C_HAS_AVX2_2
-- Performing Test C_HAS_AVX2_2 - Failed
-- Performing Test C_HAS_AVX2_3
-- Performing Test C_HAS_AVX2_3 - Failed
-- Performing Test CXX_HAS_SSE1_1
-- Performing Test CXX_HAS_SSE1_1 - Success
-- Performing Test CXX_HAS_SSE2_1
-- Performing Test CXX_HAS_SSE2_1 - Success
-- Performing Test CXX_HAS_SSE3_1
-- Performing Test CXX_HAS_SSE3_1 - Failed
-- Performing Test CXX_HAS_SSE3_2
-- Performing Test CXX_HAS_SSE3_2 - Success
-- Performing Test CXX_HAS_SSE4_1_1
-- Performing Test CXX_HAS_SSE4_1_1 - Failed
-- Performing Test CXX_HAS_SSE4_1_2
-- Performing Test CXX_HAS_SSE4_1_2 - Success
-- Performing Test CXX_HAS_SSE4_2_1
-- Performing Test CXX_HAS_SSE4_2_1 - Failed
-- Performing Test CXX_HAS_SSE4_2_2
-- Performing Test CXX_HAS_SSE4_2_2 - Success
-- Performing Test CXX_HAS_AVX_1
-- Performing Test CXX_HAS_AVX_1 - Failed
-- Performing Test CXX_HAS_AVX_2
-- Performing Test CXX_HAS_AVX_2 - Success
-- Performing Test CXX_HAS_AVX2_1
-- Performing Test CXX_HAS_AVX2_1 - Failed
-- Performing Test CXX_HAS_AVX2_2
-- Performing Test CXX_HAS_AVX2_2 - Failed
-- Performing Test CXX_HAS_AVX2_3
-- Performing Test CXX_HAS_AVX2_3 - Failed
-- SSE2 Found
-- SSE3 Found
-- AVX Found
-- Performing Test HAS_C11_ATOMICS
-- Performing Test HAS_C11_ATOMICS - Failed
-- Performing Test HAS_MSC_ATOMICS
-- Performing Test HAS_MSC_ATOMICS - Failed
-- Performing Test HAS_GCC_ATOMICS
-- Performing Test HAS_GCC_ATOMICS - Success
-- Atomics: using GCC intrinsics
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Checking for [mkl_gf_lp64 - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_gf_lp64: /home/vladislav/.conda/envs/ds/lib/libmkl_gf_lp64.so
-- Library mkl_gnu_thread: /home/vladislav/.conda/envs/ds/lib/libmkl_gnu_thread.so
-- Library mkl_core: /home/vladislav/.conda/envs/ds/lib/libmkl_core.so
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Library gomp: -fopenmp
-- Library pthread: /usr/lib/x86_64-linux-gnu/libpthread.so
-- Library m: /usr/lib/x86_64-linux-gnu/libm.so
-- Library dl: /usr/lib/x86_64-linux-gnu/libdl.so
-- Looking for cblas_sgemm
-- Looking for cblas_sgemm - found
-- MKL library found
-- Performing Test BLAS_F2C_DOUBLE_WORKS
-- Performing Test BLAS_F2C_DOUBLE_WORKS - Failed
-- Performing Test BLAS_F2C_FLOAT_WORKS
-- Performing Test BLAS_F2C_FLOAT_WORKS - Success
-- Performing Test BLAS_USE_CBLAS_DOT
-- Performing Test BLAS_USE_CBLAS_DOT - Success
-- Found a library with BLAS API (mkl).
-- Found a library with LAPACK API. (mkl)
-- Found CUDNN: /usr/local/cuda/include
-- Found cuDNN: v7.0.3 (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
CMake Deprecation Warning at src/ATen/CMakeLists.txt:7 (CMAKE_POLICY):
The OLD behavior for policy CMP0026 will be removed from a future version
of CMake.
The cmake-policies(7) manual explains that the OLD behaviors of all
policies are deprecated and that a policy should be set to OLD only under
specific short-term circumstances. Projects should be ported to the NEW
behavior and not rely on setting a policy to OLD.
-- Using python found in /home/vladislav/.conda/envs/ds/bin/python
['/home/vladislav/Desktop/pytorch/aten/src/THNN/generic/THNN.h', '/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/THCUNN.h', '/home/vladislav/Desktop/pytorch/aten/src/ATen/nn.yaml']
ATen Excluded: {'bernoulli', 'bernoulli_'}
-- Looking for clock_gettime in rt
-- Looking for clock_gettime in rt - found
-- Looking for mmap
-- Looking for mmap - found
-- Looking for shm_open
-- Looking for shm_open - found
-- Looking for shm_unlink
-- Looking for shm_unlink - found
-- Looking for malloc_usable_size
-- Looking for malloc_usable_size - found
-- Performing Test C_HAS_THREAD
-- Performing Test C_HAS_THREAD - Success
disable contrib because ATEN_NO_CONTRIB is set
-- Configuring done
-- Generating done
-- Build files have been written to: /home/vladislav/Desktop/pytorch/torch/lib/build/aten
[ 0%] Generating ATen/CPUGenerator.h, ATen/CUDAGenerator.h, ATen/Declarations.yaml, ATen/CPUByteStorage.cpp, ATen/CPUByteStorage.h, ATen/CPUByteType.cpp, ATen/CPUByteType.h, ATen/CPUByteTensor.cpp, ATen/CPUByteTensor.h, ATen/CPUCharStorage.cpp, ATen/CPUCharStorage.h, ATen/CPUCharType.cpp, ATen/CPUCharType.h, ATen/CPUCharTensor.cpp, ATen/CPUCharTensor.h, ATen/CPUDoubleStorage.cpp, ATen/CPUDoubleStorage.h, ATen/CPUDoubleType.cpp, ATen/CPUDoubleType.h, ATen/CPUDoubleTensor.cpp, ATen/CPUDoubleTensor.h, ATen/CPUFloatStorage.cpp, ATen/CPUFloatStorage.h, ATen/CPUFloatType.cpp, ATen/CPUFloatType.h, ATen/CPUFloatTensor.cpp, ATen/CPUFloatTensor.h, ATen/CPUIntStorage.cpp, ATen/CPUIntStorage.h, ATen/CPUIntType.cpp, ATen/CPUIntType.h, ATen/CPUIntTensor.cpp, ATen/CPUIntTensor.h, ATen/CPULongStorage.cpp, ATen/CPULongStorage.h, ATen/CPULongType.cpp, ATen/CPULongType.h, ATen/CPULongTensor.cpp, ATen/CPULongTensor.h, ATen/CPUShortStorage.cpp, ATen/CPUShortStorage.h, ATen/CPUShortType.cpp, ATen/CPUShortType.h, ATen/CPUShortTensor.cpp, ATen/CPUShortTensor.h, ATen/CPUHalfStorage.cpp, ATen/CPUHalfStorage.h, ATen/CPUHalfType.cpp, ATen/CPUHalfType.h, ATen/CPUHalfTensor.cpp, ATen/CPUHalfTensor.h, ATen/SparseCPUByteType.cpp, ATen/SparseCPUByteType.h, ATen/SparseCPUByteTensor.cpp, ATen/SparseCPUByteTensor.h, ATen/SparseCPUCharType.cpp, ATen/SparseCPUCharType.h, ATen/SparseCPUCharTensor.cpp, ATen/SparseCPUCharTensor.h, ATen/SparseCPUDoubleType.cpp, ATen/SparseCPUDoubleType.h, ATen/SparseCPUDoubleTensor.cpp, ATen/SparseCPUDoubleTensor.h, ATen/SparseCPUFloatType.cpp, ATen/SparseCPUFloatType.h, ATen/SparseCPUFloatTensor.cpp, ATen/SparseCPUFloatTensor.h, ATen/SparseCPUIntType.cpp, ATen/SparseCPUIntType.h, ATen/SparseCPUIntTensor.cpp, ATen/SparseCPUIntTensor.h, ATen/SparseCPULongType.cpp, ATen/SparseCPULongType.h, ATen/SparseCPULongTensor.cpp, ATen/SparseCPULongTensor.h, ATen/SparseCPUShortType.cpp, ATen/SparseCPUShortType.h, ATen/SparseCPUShortTensor.cpp, ATen/SparseCPUShortTensor.h, ATen/CUDAByteStorage.cpp, ATen/CUDAByteStorage.h, ATen/CUDAByteType.cpp, ATen/CUDAByteType.h, ATen/CUDAByteTensor.cpp, ATen/CUDAByteTensor.h, ATen/CUDACharStorage.cpp, ATen/CUDACharStorage.h, ATen/CUDACharType.cpp, ATen/CUDACharType.h, ATen/CUDACharTensor.cpp, ATen/CUDACharTensor.h, ATen/CUDADoubleStorage.cpp, ATen/CUDADoubleStorage.h, ATen/CUDADoubleType.cpp, ATen/CUDADoubleType.h, ATen/CUDADoubleTensor.cpp, ATen/CUDADoubleTensor.h, ATen/CUDAFloatStorage.cpp, ATen/CUDAFloatStorage.h, ATen/CUDAFloatType.cpp, ATen/CUDAFloatType.h, ATen/CUDAFloatTensor.cpp, ATen/CUDAFloatTensor.h, ATen/CUDAIntStorage.cpp, ATen/CUDAIntStorage.h, ATen/CUDAIntType.cpp, ATen/CUDAIntType.h, ATen/CUDAIntTensor.cpp, ATen/CUDAIntTensor.h, ATen/CUDALongStorage.cpp, ATen/CUDALongStorage.h, ATen/CUDALongType.cpp, ATen/CUDALongType.h, ATen/CUDALongTensor.cpp, ATen/CUDALongTensor.h, ATen/CUDAShortStorage.cpp, ATen/CUDAShortStorage.h, ATen/CUDAShortType.cpp, ATen/CUDAShortType.h, ATen/CUDAShortTensor.cpp, ATen/CUDAShortTensor.h, ATen/CUDAHalfStorage.cpp, ATen/CUDAHalfStorage.h, ATen/CUDAHalfType.cpp, ATen/CUDAHalfType.h, ATen/CUDAHalfTensor.cpp, ATen/CUDAHalfTensor.h, ATen/SparseCUDAByteType.cpp, ATen/SparseCUDAByteType.h, ATen/SparseCUDAByteTensor.cpp, ATen/SparseCUDAByteTensor.h, ATen/SparseCUDACharType.cpp, ATen/SparseCUDACharType.h, ATen/SparseCUDACharTensor.cpp, ATen/SparseCUDACharTensor.h, ATen/SparseCUDADoubleType.cpp, ATen/SparseCUDADoubleType.h, ATen/SparseCUDADoubleTensor.cpp, ATen/SparseCUDADoubleTensor.h, ATen/SparseCUDAFloatType.cpp, ATen/SparseCUDAFloatType.h, ATen/SparseCUDAFloatTensor.cpp, ATen/SparseCUDAFloatTensor.h, ATen/SparseCUDAIntType.cpp, ATen/SparseCUDAIntType.h, ATen/SparseCUDAIntTensor.cpp, ATen/SparseCUDAIntTensor.h, ATen/SparseCUDALongType.cpp, ATen/SparseCUDALongType.h, ATen/SparseCUDALongTensor.cpp, ATen/SparseCUDALongTensor.h, ATen/SparseCUDAShortType.cpp, ATen/SparseCUDAShortType.h, ATen/SparseCUDAShortTensor.cpp, ATen/SparseCUDAShortTensor.h, ATen/Type.h, ATen/Type.cpp, ATen/Tensor.h, ATen/TensorMethods.h, ATen/Functions.h, ATen/Dispatch.h, ATen/Copy.cpp, ATen/NativeFunctions.h
[ 0%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCBlas.cu.o
[ 0%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCReduceApplyUtils.cu.o
[ 1%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCSleep.cu.o
[ 1%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCStorage.cu.o
[ 1%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCStorageCopy.cu.o
[ 2%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensor.cu.o
[ 2%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorCopy.cu.o
['/home/vladislav/Desktop/pytorch/aten/src/THNN/generic/THNN.h', '/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/THCUNN.h', '/home/vladislav/Desktop/pytorch/aten/src/ATen/nn.yaml']
[ 2%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorMath.cu.o
/home/vladislav/Desktop/pytorch/aten/src/THC/THCBlas.cu: In function ‘void THCudaBlas_Sgemv(THCState*, char, int64_t, int64_t, float, float*, int64_t, float*, int64_t, float, float*, int64_t)’:
/home/vladislav/Desktop/pytorch/aten/src/THC/THCBlas.cu:105:16: warning: ‘op’ may be used uninitialized in this function [-Wmaybe-uninitialized]
THCublasCheck(cublasSgemv(handle, op, i_m, i_n, &alpha, a, i_lda, x, i_incx, &beta, y, i_incy));
~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THC/THCBlas.cu: In function ‘void THCudaBlas_Dgemv(THCState*, char, int64_t, int64_t, double, double*, int64_t, double*, int64_t, double, double*, int64_t)’:
/home/vladislav/Desktop/pytorch/aten/src/THC/THCBlas.cu:135:16: warning: ‘op’ may be used uninitialized in this function [-Wmaybe-uninitialized]
THCublasCheck(cublasDgemv(handle, op, i_m, i_n, &alpha, a, i_lda, x, i_incx, &beta, y, i_incy));
~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ATen Excluded: {'bernoulli_', 'bernoulli'}
[ 3%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorMath2.cu.o
[ 3%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorMathBlas.cu.o
[ 3%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorMathMagma.cu.o
[ 4%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorMathPairwise.cu.o
[ 4%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorMathReduce.cu.o
[ 4%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorMathScan.cu.o
[ 5%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorIndex.cu.o
[ 5%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorConv.cu.o
[ 5%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorRandom.cu.o
[ 6%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorScatterGather.cu.o
[ 6%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorTopK.cu.o
[ 6%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorSort.cu.o
[ 7%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorTypeUtils.cu.o
[ 7%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCSortUtils.cu.o
[ 7%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorMode.cu.o
[ 8%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorSortByte.cu.o
[ 8%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareTByte.cu.o
[ 8%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathPointwiseByte.cu.o
[ 9%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareByte.cu.o
[ 9%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathReduceByte.cu.o
[ 9%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMaskedByte.cu.o
[ 10%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorSortChar.cu.o
[ 10%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareTChar.cu.o
[ 10%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathPointwiseChar.cu.o
[ 11%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareChar.cu.o
[ 11%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathReduceChar.cu.o
[ 11%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMaskedChar.cu.o
[ 12%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorSortShort.cu.o
[ 12%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareTShort.cu.o
[ 13%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathPointwiseShort.cu.o
[ 13%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareShort.cu.o
[ 13%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathReduceShort.cu.o
[ 14%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMaskedShort.cu.o
[ 14%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorSortInt.cu.o
[ 14%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareTInt.cu.o
[ 15%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathPointwiseInt.cu.o
[ 15%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareInt.cu.o
[ 15%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathReduceInt.cu.o
[ 16%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMaskedInt.cu.o
[ 16%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorSortLong.cu.o
[ 16%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareTLong.cu.o
[ 17%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathPointwiseLong.cu.o
[ 17%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareLong.cu.o
[ 17%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathReduceLong.cu.o
[ 18%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMaskedLong.cu.o
[ 18%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorSortHalf.cu.o
[ 18%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareTHalf.cu.o
[ 19%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathPointwiseHalf.cu.o
[ 19%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareHalf.cu.o
[ 19%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathReduceHalf.cu.o
[ 20%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMaskedHalf.cu.o
[ 20%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorSortFloat.cu.o
[ 20%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareTFloat.cu.o
[ 21%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathPointwiseFloat.cu.o
[ 21%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareFloat.cu.o
[ 21%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathReduceFloat.cu.o
[ 22%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMaskedFloat.cu.o
[ 22%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorSortDouble.cu.o
[ 22%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareTDouble.cu.o
[ 23%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathPointwiseDouble.cu.o
[ 23%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareDouble.cu.o
[ 23%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathReduceDouble.cu.o
[ 24%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMaskedDouble.cu.o
[ 24%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCHalf.cu.o
[ 25%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_AbsCriterion.cu.o
[ 25%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_Abs.cu.o
[ 25%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_BatchNormalization.cu.o
[ 26%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_BCECriterion.cu.o
[ 26%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_ClassNLLCriterion.cu.o
[ 26%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_DistKLDivCriterion.cu.o
[ 27%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_ELU.cu.o
[ 27%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_FeatureLPPooling.cu.o
[ 27%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_FusedRNNKernel.cu.o
[ 28%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_GatedLinearUnit.cu.o
[ 28%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_HardTanh.cu.o
[ 28%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_IndexLinear.cu.o
[ 29%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_L1Cost.cu.o
[ 29%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_LeakyReLU.cu.o
[ 29%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_LogSigmoid.cu.o
[ 30%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_LogSoftMax.cu.o
[ 30%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_LookupTableBag.cu.o
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu: In instantiation of ‘void THNN_CudaHalfLSTM_forw_ind_wrap(THCState*, THCudaHalfTensor*, THCudaHalfTensor*, THCudaHalfTensor*, THCudaHalfTensor*, THCudaHalfTensor*, THCudaHalfTensor*, THCudaHalfTensor*) [with INDTYPE = long unsigned int; THCState = THCState; THCudaHalfTensor = THCudaHalfTensor]’:
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:595:96: required from here
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:536:28: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
THAssertMsg( hid_size*4 == THCTensor_(nElement)(state, bias1) &&
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:536:91: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
THAssertMsg( hid_size*4 == THCTensor_(nElement)(state, bias1) &&
^
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu: In instantiation of ‘void THNN_CudaHalfGRU_forw_ind_wrap(THCState*, THCudaHalfTensor*, THCudaHalfTensor*, THCudaHalfTensor*, THCudaHalfTensor*, THCudaHalfTensor*, THCudaHalfTensor*, THCudaHalfTensor*) [with INDTYPE = long unsigned int; THCState = THCState; THCudaHalfTensor = THCudaHalfTensor]’:
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:795:100: required from here
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:731:28: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
THAssertMsg( hid_size*3 == THCTensor_(nElement)(state, bias1) &&
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:731:91: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
THAssertMsg( hid_size*3 == THCTensor_(nElement)(state, bias1) &&
^
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu: In instantiation of ‘void THNN_CudaLSTM_forw_ind_wrap(THCState*, THCudaTensor*, THCudaTensor*, THCudaTensor*, THCudaTensor*, THCudaTensor*, THCudaTensor*, THCudaTensor*) [with INDTYPE = long unsigned int; THCState = THCState; THCudaTensor = THCudaTensor]’:
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:595:92: required from here
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:536:28: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
THAssertMsg( hid_size*4 == THCTensor_(nElement)(state, bias1) &&
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:536:87: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
THAssertMsg( hid_size*4 == THCTensor_(nElement)(state, bias1) &&
^
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu: In instantiation of ‘void THNN_CudaGRU_forw_ind_wrap(THCState*, THCudaTensor*, THCudaTensor*, THCudaTensor*, THCudaTensor*, THCudaTensor*, THCudaTensor*, THCudaTensor*) [with INDTYPE = long unsigned int; THCState = THCState; THCudaTensor = THCudaTensor]’:
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:795:96: required from here
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:731:28: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
THAssertMsg( hid_size*3 == THCTensor_(nElement)(state, bias1) &&
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:731:87: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
THAssertMsg( hid_size*3 == THCTensor_(nElement)(state, bias1) &&
^
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu: In instantiation of ‘void THNN_CudaDoubleLSTM_forw_ind_wrap(THCState*, THCudaDoubleTensor*, THCudaDoubleTensor*, THCudaDoubleTensor*, THCudaDoubleTensor*, THCudaDoubleTensor*, THCudaDoubleTensor*, THCudaDoubleTensor*) [with INDTYPE = long unsigned int; THCState = THCState; THCudaDoubleTensor = THCudaDoubleTensor]’:
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:595:98: required from here
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:536:28: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
THAssertMsg( hid_size*4 == THCTensor_(nElement)(state, bias1) &&
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:536:93: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
THAssertMsg( hid_size*4 == THCTensor_(nElement)(state, bias1) &&
^
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu: In instantiation of ‘void THNN_CudaDoubleGRU_forw_ind_wrap(THCState*, THCudaDoubleTensor*, THCudaDoubleTensor*, THCudaDoubleTensor*, THCudaDoubleTensor*, THCudaDoubleTensor*, THCudaDoubleTensor*, THCudaDoubleTensor*) [with INDTYPE = long unsigned int; THCState = THCState; THCudaDoubleTensor = THCudaDoubleTensor]’:
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:795:102: required from here
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:731:28: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
THAssertMsg( hid_size*3 == THCTensor_(nElement)(state, bias1) &&
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/FusedRNNKernel.cu:731:93: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
THAssertMsg( hid_size*3 == THCTensor_(nElement)(state, bias1) &&
^
[ 30%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_LookupTable.cu.o
[ 31%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_MarginCriterion.cu.o
[ 31%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_MSECriterion.cu.o
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/LookupTable.cu(25): warning: function "__shfl(int, int, int)"
/usr/local/cuda/include/sm_30_intrinsics.hpp(152): here was declared deprecated ("__shfl() is deprecated in favor of __shfl_sync() and may be removed in a future release (Use -Wno-deprecated-declarations to suppress this warning).")
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/LookupTable.cu(42): warning: function "__any"
/usr/local/cuda/include/device_atomic_functions.h(180): here was declared deprecated ("__any() is deprecated in favor of __any_sync() and may be removed in a future release (Use -Wno-deprecated-declarations to suppress this warning).")
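Both deprecation warnings stem from CUDA 9, which replaces the implicit-mask warp intrinsics (`__shfl`, `__any`) with `*_sync` variants that take an explicit lane-participation mask. A sketch of the migration under that assumption; this is not the actual LookupTable.cu code:

```cuda
// Pre-CUDA-9 style (deprecated):
//   int v = __shfl(val, 0);
//   if (__any(pred)) { ... }
//
// CUDA 9 style: pass the mask of participating lanes explicitly.
__global__ void sketch(int* out, int val, int pred) {
    unsigned mask = __activemask();      // lanes currently active in the warp
    int v = __shfl_sync(mask, val, 0);   // broadcast from lane 0
    if (__any_sync(mask, pred))
        out[threadIdx.x] = v;
}
```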
[ 31%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_MultiLabelMarginCriterion.cu.o
[ 32%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_MultiMarginCriterion.cu.o
[ 32%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_PReLU.cu.o
[ 32%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_RReLU.cu.o
[ 33%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_Sigmoid.cu.o
[ 33%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SmoothL1Criterion.cu.o
[ 33%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SoftMarginCriterion.cu.o
[ 34%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SoftMax.cu.o
[ 34%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SoftPlus.cu.o
[ 34%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SoftShrink.cu.o
[ 35%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SparseLinear.cu.o
[ 35%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialAdaptiveAveragePooling.cu.o
[ 35%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialAdaptiveMaxPooling.cu.o
[ 36%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialAveragePooling.cu.o
[ 36%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialClassNLLCriterion.cu.o
[ 36%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialConvolutionLocal.cu.o
[ 37%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialConvolutionMM.cu.o
[ 37%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialCrossMapLRN.cu.o
[ 38%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialDepthwiseConvolution.cu.o
[ 38%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialDilatedConvolution.cu.o
[ 38%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialDilatedMaxPooling.cu.o
[ 39%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialFractionalMaxPooling.cu.o
[ 39%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialFullConvolution.cu.o
[ 39%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialFullDilatedConvolution.cu.o
[ 40%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialGridSamplerBilinear.cu.o
[ 40%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialMaxPooling.cu.o
[ 40%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialMaxUnpooling.cu.o
[ 41%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialReflectionPadding.cu.o
[ 41%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialReplicationPadding.cu.o
[ 41%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialSubSampling.cu.o
[ 42%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialUpSamplingBilinear.cu.o
[ 42%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_SpatialUpSamplingNearest.cu.o
[ 42%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_Sqrt.cu.o
[ 43%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_Square.cu.o
[ 43%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_Tanh.cu.o
[ 43%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_TemporalConvolution.cu.o
[ 44%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_TemporalMaxPooling.cu.o
[ 44%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_TemporalReflectionPadding.cu.o
[ 44%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_TemporalReplicationPadding.cu.o
[ 45%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_TemporalRowConvolution.cu.o
[ 45%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_TemporalUpSamplingLinear.cu.o
[ 45%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_TemporalUpSamplingNearest.cu.o
[ 46%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_Threshold.cu.o
[ 46%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricAdaptiveAveragePooling.cu.o
[ 46%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricAdaptiveMaxPooling.cu.o
[ 47%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricAveragePooling.cu.o
[ 47%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricConvolution.cu.o
[ 47%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricDilatedConvolution.cu.o
[ 48%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricDilatedMaxPooling.cu.o
[ 48%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricFractionalMaxPooling.cu.o
[ 48%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricFullConvolution.cu.o
[ 49%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricFullDilatedConvolution.cu.o
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu: In function ‘void THNN_CudaHalfVolumetricAveragePooling_updateGradInput(THCState*, THCudaHalfTensor*, THCudaHalfTensor*, THCudaHalfTensor*, int, int, int, int, int, int, int, int, int, bool, bool)’:
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:98:45: warning: ‘inputWidth’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputWidth - 1)*dW >= inputWidth + padW)
~~~~~~~~~~~~^~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:17:5: note: ‘inputWidth’ was declared here
int inputWidth;
^~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:96:47: warning: ‘inputHeight’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputHeight - 1)*dH >= inputHeight + padH)
~~~~~~~~~~~~~^~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:16:5: note: ‘inputHeight’ was declared here
int inputHeight;
^~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:94:43: warning: ‘inputTime’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputTime - 1)*dT >= inputTime + padT)
~~~~~~~~~~~^~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:15:5: note: ‘inputTime’ was declared here
int inputTime;
^~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:104:431: warning: ‘inputSlices’ may be used uninitialized in this function [-Wmaybe-uninitialized]
THCUNN_check_dim_size(state, gradOutput, ndim, dimN, inputSlices);
^
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:14:5: note: ‘inputSlices’ was declared here
int inputSlices;
^~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu: In function ‘void THNN_CudaVolumetricAveragePooling_updateGradInput(THCState*, THCudaTensor*, THCudaTensor*, THCudaTensor*, int, int, int, int, int, int, int, int, int, bool, bool)’:
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:98:45: warning: ‘inputWidth’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputWidth - 1)*dW >= inputWidth + padW)
~~~~~~~~~~~~^~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:17:5: note: ‘inputWidth’ was declared here
int inputWidth;
^~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:96:47: warning: ‘inputHeight’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputHeight - 1)*dH >= inputHeight + padH)
~~~~~~~~~~~~~^~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:16:5: note: ‘inputHeight’ was declared here
int inputHeight;
^~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:94:43: warning: ‘inputTime’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputTime - 1)*dT >= inputTime + padT)
~~~~~~~~~~~^~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:15:5: note: ‘inputTime’ was declared here
int inputTime;
^~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:104:419: warning: ‘inputSlices’ may be used uninitialized in this function [-Wmaybe-uninitialized]
THCUNN_check_dim_size(state, gradOutput, ndim, dimN, inputSlices);
^
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:14:5: note: ‘inputSlices’ was declared here
int inputSlices;
^~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu: In function ‘void THNN_CudaDoubleVolumetricAveragePooling_updateGradInput(THCState*, THCudaDoubleTensor*, THCudaDoubleTensor*, THCudaDoubleTensor*, int, int, int, int, int, int, int, int, int, bool, bool)’:
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:98:45: warning: ‘inputWidth’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputWidth - 1)*dW >= inputWidth + padW)
~~~~~~~~~~~~^~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:17:5: note: ‘inputWidth’ was declared here
int inputWidth;
^~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:96:47: warning: ‘inputHeight’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputHeight - 1)*dH >= inputHeight + padH)
~~~~~~~~~~~~~^~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:16:5: note: ‘inputHeight’ was declared here
int inputHeight;
^~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:94:43: warning: ‘inputTime’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputTime - 1)*dT >= inputTime + padT)
~~~~~~~~~~~^~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:15:5: note: ‘inputTime’ was declared here
int inputTime;
^~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:104:437: warning: ‘inputSlices’ may be used uninitialized in this function [-Wmaybe-uninitialized]
THCUNN_check_dim_size(state, gradOutput, ndim, dimN, inputSlices);
^
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:14:5: note: ‘inputSlices’ was declared here
int inputSlices;
^~~~~~~~~~~
[ 49%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricMaxPooling.cu.o
[ 50%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricMaxUnpooling.cu.o
[ 50%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricReplicationPadding.cu.o
[ 50%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricUpSamplingNearest.cu.o
[ 51%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCUNN/ATen_generated_VolumetricUpSamplingTrilinear.cu.o
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu: In function ‘void THNN_CudaHalfVolumetricDilatedMaxPooling_shapeCheck(THCState*, THCudaHalfTensor*, THCudaHalfTensor*, THCudaLongTensor*, int, int, int, int, int, int, int, int, int, int, int, int, bool)’:
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:98:45: warning: ‘inputWidth’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputWidth - 1)*dW >= inputWidth + padW)
~~~~~~~~~~~~^~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:26:5: note: ‘inputWidth’ was declared here
int inputWidth;
^~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:96:47: warning: ‘inputHeight’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputHeight - 1)*dH >= inputHeight + padH)
~~~~~~~~~~~~~^~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:25:5: note: ‘inputHeight’ was declared here
int inputHeight;
^~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:94:43: warning: ‘inputTime’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputTime - 1)*dT >= inputTime + padT)
~~~~~~~~~~~^~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:24:5: note: ‘inputTime’ was declared here
int inputTime;
^~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:113:416: warning: ‘inputSlices’ may be used uninitialized in this function [-Wmaybe-uninitialized]
THCUNN_check_dim_size_indices(state, indices, ndim, dimf, inputSlices);
^
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:23:5: note: ‘inputSlices’ was declared here
int inputSlices;
^~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu: In function ‘void THNN_CudaVolumetricDilatedMaxPooling_shapeCheck(THCState*, THCudaTensor*, THCudaTensor*, THCudaLongTensor*, int, int, int, int, int, int, int, int, int, int, int, int, bool)’:
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:98:45: warning: ‘inputWidth’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputWidth - 1)*dW >= inputWidth + padW)
~~~~~~~~~~~~^~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:26:5: note: ‘inputWidth’ was declared here
int inputWidth;
^~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:96:47: warning: ‘inputHeight’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputHeight - 1)*dH >= inputHeight + padH)
~~~~~~~~~~~~~^~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:25:5: note: ‘inputHeight’ was declared here
int inputHeight;
^~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:94:43: warning: ‘inputTime’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputTime - 1)*dT >= inputTime + padT)
~~~~~~~~~~~^~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:24:5: note: ‘inputTime’ was declared here
int inputTime;
^~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:113:416: warning: ‘inputSlices’ may be used uninitialized in this function [-Wmaybe-uninitialized]
THCUNN_check_dim_size_indices(state, indices, ndim, dimf, inputSlices);
^
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:23:5: note: ‘inputSlices’ was declared here
int inputSlices;
^~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu: In function ‘void THNN_CudaDoubleVolumetricDilatedMaxPooling_shapeCheck(THCState*, THCudaDoubleTensor*, THCudaDoubleTensor*, THCudaLongTensor*, int, int, int, int, int, int, int, int, int, int, int, int, bool)’:
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:98:45: warning: ‘inputWidth’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputWidth - 1)*dW >= inputWidth + padW)
~~~~~~~~~~~~^~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:26:5: note: ‘inputWidth’ was declared here
int inputWidth;
^~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:96:47: warning: ‘inputHeight’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputHeight - 1)*dH >= inputHeight + padH)
~~~~~~~~~~~~~^~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:25:5: note: ‘inputHeight’ was declared here
int inputHeight;
^~~~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:94:43: warning: ‘inputTime’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if ((outputTime - 1)*dT >= inputTime + padT)
~~~~~~~~~~~^~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:24:5: note: ‘inputTime’ was declared here
int inputTime;
^~~~~~~~~
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:113:416: warning: ‘inputSlices’ may be used uninitialized in this function [-Wmaybe-uninitialized]
THCUNN_check_dim_size_indices(state, indices, ndim, dimf, inputSlices);
^
/home/vladislav/Desktop/pytorch/aten/src/THCUNN/generic/VolumetricDilatedMaxPooling.cu:23:5: note: ‘inputSlices’ was declared here
int inputSlices;
^~~~~~~~~~~
[ 51%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCS/ATen_generated_THCSTensor.cu.o
[ 51%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THCS/ATen_generated_THCSparse.cu.o
[ 52%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/native/cuda/ATen_generated_NativeFunctionsCuda.cu.o
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:626:248: required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/vladislav/Desktop/pytorch/torch/lib/build/aten/src/ATen/ATen/Functions.h:1393:61: required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
return __and_<is_constructible<_Elements, _UElements&&>...>::value;
^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’ not a return-statement
}
^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:626:362: required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/vladislav/Desktop/pytorch/torch/lib/build/aten/src/ATen/ATen/Functions.h:1393:61: required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
return __and_<is_convertible<_UElements&&, _Elements>...>::value;
^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’ not a return-statement
}
^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:662:419: required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor&, at::Tensor&, at::Tensor&}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/vladislav/Desktop/pytorch/torch/lib/build/aten/src/ATen/ATen/Functions.h:1393:61: required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
return __and_<__not_<is_same<tuple<_Elements...>,
^
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
struct is_convertible
^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’ not a return-statement
}
^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&&; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:686:422: required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor&, at::Tensor&, at::Tensor&}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/vladislav/Desktop/pytorch/torch/lib/build/aten/src/ATen/ATen/Functions.h:1393:61: required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
return __and_<__not_<is_same<tuple<_Elements...>,
^
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
struct is_convertible
^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&&; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’ not a return-statement
}
^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:248: required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/vladislav/Desktop/pytorch/torch/lib/build/aten/src/ATen/ATen/Functions.h:1396:39: required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
return __and_<is_constructible<_Elements, _UElements&&>...>::value;
^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
}
^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:362: required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/vladislav/Desktop/pytorch/torch/lib/build/aten/src/ATen/ATen/Functions.h:1396:39: required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
return __and_<is_convertible<_UElements&&, _Elements>...>::value;
^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
}
^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:662:419: required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/vladislav/Desktop/pytorch/torch/lib/build/aten/src/ATen/ATen/Functions.h:1396:39: required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
return __and_<__not_<is_same<tuple<_Elements...>,
^
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
struct is_convertible
^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
}
^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:686:422: required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/vladislav/Desktop/pytorch/torch/lib/build/aten/src/ATen/ATen/Functions.h:1396:39: required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
return __and_<__not_<is_same<tuple<_Elements...>,
^
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
struct is_convertible
^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
}
^
CMake Error at ATen_generated_NativeFunctionsCuda.cu.o.cmake:267 (message):
Error generating file
/home/vladislav/Desktop/pytorch/torch/lib/build/aten/src/ATen/CMakeFiles/ATen.dir/native/cuda/./ATen_generated_NativeFunctionsCuda.cu.o
src/ATen/CMakeFiles/ATen.dir/build.make:1120: recipe for target 'src/ATen/CMakeFiles/ATen.dir/native/cuda/ATen_generated_NativeFunctionsCuda.cu.o' failed
make[2]: *** [src/ATen/CMakeFiles/ATen.dir/native/cuda/ATen_generated_NativeFunctionsCuda.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:193: recipe for target 'src/ATen/CMakeFiles/ATen.dir/all' failed
make[1]: *** [src/ATen/CMakeFiles/ATen.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
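
The "mismatched argument pack lengths" and "wrong number of template arguments (4, should be 2)" errors inside `/usr/include/c++/6/tuple` are a known incompatibility between nvcc (CUDA 8.x) and the libstdc++ headers shipped with GCC 6: nvcc mis-expands the variadic `std::tuple` constructors, so any CUDA translation unit that pulls in `<tuple>` (here ATen's `NativeFunctionsCuda.cu`) fails to compile. One common workaround, sketched below, is to point the build at an older host compiler such as GCC 5. The `gcc-5`/`g++-5` paths and the clean-rebuild steps are assumptions about this machine, not something shown in the log; whether `CUDAHOSTCXX` is honored depends on the CMake version (older FindCUDA-based builds take the nvcc host compiler from `CC` instead).

```shell
# Hedged sketch: have nvcc use GCC 5 as the host compiler instead of GCC 6.4.
# Assumes gcc-5/g++-5 are installed at these paths (not shown in the log).
export CC=/usr/bin/gcc-5
export CXX=/usr/bin/g++-5
# Newer CMake (native CUDA language support) also reads this variable:
export CUDAHOSTCXX=/usr/bin/g++-5

# Rebuild from a clean tree so CMake re-detects the compilers:
cd ~/Desktop/pytorch
rm -rf torch/lib/build
python setup.py clean
python setup.py build
```

The other standard fix is upgrading to a CUDA toolkit release whose nvcc officially supports GCC 6 as a host compiler.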