@dylanthomas
Created January 5, 2018 00:17
running install
running build_deps
-- Building for: Visual Studio 15 2017
-- The C compiler identification is MSVC 19.12.25830.2
-- The CXX compiler identification is MSVC 19.12.25830.2
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.12.25827/bin/Hostx86/x86/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.12.25827/bin/Hostx86/x86/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.12.25827/bin/Hostx86/x86/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.12.25827/bin/Hostx86/x86/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning (dev) at cmake/FindCUDA/FindCUDA.cmake:494 (if):
Policy CMP0054 is not set: Only interpret if() arguments as variables or
keywords when unquoted. Run "cmake --help-policy CMP0054" for policy
details. Use the cmake_policy command to set the policy and suppress this
warning.
Quoted variables like "MSVC" will no longer be dereferenced when the policy
is set to NEW. Since the policy is not set the OLD behavior will be used.
Call Stack (most recent call first):
CMakeLists.txt:42 (FIND_PACKAGE)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0 (found suitable version "9.0", minimum required is "5.5")
-- Automatic GPU detection failed. Building for common architectures.
-- Autodetected CUDA architecture(s): 3.0;3.5;5.0;5.2;6.0;6.1;6.1+PTX
-- Found CUDA with FP16 support, compiling with torch.CudaHalfTensor
-- Removing -DNDEBUG from compile flags
-- Found OpenMP_C: -openmp (found version "2.0")
-- Found OpenMP_CXX: -openmp (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
-- Compiling with OpenMP support
-- MAGMA not found. Compiling without MAGMA support
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Looking for cpuid.h
-- Looking for cpuid.h - not found
-- Performing Test NO_GCC_EBX_FPIC_BUG
-- Performing Test NO_GCC_EBX_FPIC_BUG - Failed
-- Performing Test C_HAS_SSE1_1
-- Performing Test C_HAS_SSE1_1 - Success
-- Performing Test C_HAS_SSE2_1
-- Performing Test C_HAS_SSE2_1 - Success
-- Performing Test C_HAS_SSE3_1
-- Performing Test C_HAS_SSE3_1 - Success
-- Performing Test C_HAS_SSE4_1_1
-- Performing Test C_HAS_SSE4_1_1 - Success
-- Performing Test C_HAS_SSE4_2_1
-- Performing Test C_HAS_SSE4_2_1 - Success
-- Performing Test C_HAS_AVX_1
-- Performing Test C_HAS_AVX_1 - Success
-- Performing Test C_HAS_AVX2_1
-- Performing Test C_HAS_AVX2_1 - Success
-- Performing Test CXX_HAS_SSE1_1
-- Performing Test CXX_HAS_SSE1_1 - Success
-- Performing Test CXX_HAS_SSE2_1
-- Performing Test CXX_HAS_SSE2_1 - Success
-- Performing Test CXX_HAS_SSE3_1
-- Performing Test CXX_HAS_SSE3_1 - Success
-- Performing Test CXX_HAS_SSE4_1_1
-- Performing Test CXX_HAS_SSE4_1_1 - Success
-- Performing Test CXX_HAS_SSE4_2_1
-- Performing Test CXX_HAS_SSE4_2_1 - Success
-- Performing Test CXX_HAS_AVX_1
-- Performing Test CXX_HAS_AVX_1 - Success
-- Performing Test CXX_HAS_AVX2_1
-- Performing Test CXX_HAS_AVX2_1 - Success
-- SSE2 Found
-- SSE3 Found
-- AVX Found
-- AVX2 Found
-- Performing Test HAS_C11_ATOMICS
-- Performing Test HAS_C11_ATOMICS - Failed
-- Performing Test HAS_MSC_ATOMICS
-- Performing Test HAS_MSC_ATOMICS - Success
-- Performing Test HAS_GCC_ATOMICS
-- Performing Test HAS_GCC_ATOMICS - Failed
-- Atomics: using MSVC intrinsics
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl - guide - pthread - m]
-- Library mkl: not found
-- MKL library not found
-- Checking for [openblas]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [openblas - pthread]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [libopenblas]
-- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran - pthread]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [acml - gfortran]
-- Library acml: BLAS_acml_LIBRARY-NOTFOUND
-- Checking for [Accelerate]
-- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND
-- Checking for [vecLib]
-- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND
-- Checking for [ptf77blas - atlas - gfortran]
-- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND
-- Checking for [blas]
-- Library blas: BLAS_blas_LIBRARY-NOTFOUND
-- Cannot find a library with BLAS API. Not using BLAS.
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl - guide - pthread - m]
-- Library mkl: not found
-- MKL library not found
-- Checking for [openblas]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [openblas - pthread]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [libopenblas]
-- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran - pthread]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [acml - gfortran]
-- Library acml: BLAS_acml_LIBRARY-NOTFOUND
-- Checking for [Accelerate]
-- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND
-- Checking for [vecLib]
-- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND
-- Checking for [ptf77blas - atlas - gfortran]
-- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND
-- Checking for [blas]
-- Library blas: BLAS_blas_LIBRARY-NOTFOUND
-- Cannot find a library with BLAS API. Not using BLAS.
-- LAPACK requires BLAS
-- Cannot find a library with LAPACK API. Not using LAPACK.
-- Found CUDNN: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0/include
-- Found cuDNN: v7.0.5 (include: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0/include, library: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0/lib/x64/cudnn.lib)
-- Could NOT find NNPACK (missing: NNPACK_INCLUDE_DIR NNPACK_LIBRARY CPUINFO_LIBRARY PTHREADPOOL_LIBRARY)
-- NNPACK not found. Compiling without NNPACK support
CMake Deprecation Warning at src/ATen/CMakeLists.txt:7 (CMAKE_POLICY):
The OLD behavior for policy CMP0026 will be removed from a future version
of CMake.
The cmake-policies(7) manual explains that the OLD behaviors of all
policies are deprecated and that a policy should be set to OLD only under
specific short-term circumstances. Projects should be ported to the NEW
behavior and not rely on setting a policy to OLD.
CMake Warning (dev) at src/ATen/CMakeLists.txt:39 (IF):
Policy CMP0054 is not set: Only interpret if() arguments as variables or
keywords when unquoted. Run "cmake --help-policy CMP0054" for policy
details. Use the cmake_policy command to set the policy and suppress this
warning.
Quoted variables like "MSVC" will no longer be dereferenced when the policy
is set to NEW. Since the policy is not set the OLD behavior will be used.
This warning is for project developers. Use -Wno-dev to suppress it.
-- Using python found in C:\Users\parkj\Anaconda3\envs\peter\python.exe
-- Performing Test C_HAS_THREAD
-- Performing Test C_HAS_THREAD - Success
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_cublas_LIBRARY (ADVANCED)
linked by target "ATen" in directory C:/Users/parkj/pytorch-scripts/pytorch/aten/src/ATen
CUDA_cublas_device_LIBRARY (ADVANCED)
linked by target "ATen" in directory C:/Users/parkj/pytorch-scripts/pytorch/aten/src/ATen
CUDA_curand_LIBRARY (ADVANCED)
linked by target "ATen" in directory C:/Users/parkj/pytorch-scripts/pytorch/aten/src/ATen
CUDA_cusparse_LIBRARY (ADVANCED)
linked by target "ATen" in directory C:/Users/parkj/pytorch-scripts/pytorch/aten/src/ATen
-- Configuring incomplete, errors occurred!
See also "C:/Users/parkj/pytorch-scripts/pytorch/torch/lib/build/ATen/CMakeFiles/CMakeOutput.log".
See also "C:/Users/parkj/pytorch-scripts/pytorch/torch/lib/build/ATen/CMakeFiles/CMakeError.log".
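A likely cause of the NOTFOUND errors above: CMake picked the 32-bit host compiler (note the Hostx86/x86 cl.exe paths earlier in the log), while the CUDA 9.0 toolkit only ships 64-bit cublas/curand/cusparse import libraries under lib/x64. A minimal sketch of a reconfigure that selects the 64-bit generator instead (the build directory and toolkit path here are taken from this log; treat the exact invocation as an assumption, not the pytorch-scripts command line):

```shell
REM Hypothetical reconfigure from a fresh build directory: the Win64 generator
REM makes CMake use the x64 cl.exe and search CUDA's lib/x64, where the
REM cublas/curand/cusparse .lib files actually live.
cmake .. -G "Visual Studio 15 2017 Win64" ^
  -DCUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0"
```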
'msbuild' is not recognized as an internal or external command,
operable program or batch file.
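The msbuild failure here (and repeated below) indicates the build was not launched from a Visual Studio developer environment, so msbuild is not on PATH. A hedged sketch of initializing the VS 2017 x64 environment before re-running the install (the vcvarsall.bat path assumes the Community edition install location shown in this log):

```shell
REM Hypothetical fix: load the VS 2017 x64 toolchain into the current shell
REM so msbuild and the 64-bit cl.exe are found on PATH.
call "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
where msbuild
```

Alternatively, the "x64 Native Tools Command Prompt for VS 2017" shortcut performs the same setup.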
-- Building for: Visual Studio 15 2017
-- The C compiler identification is MSVC 19.12.25830.2
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.12.25827/bin/Hostx86/x86/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.12.25827/bin/Hostx86/x86/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Configuring done
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:
ATEN_LIBRARIES
CMAKE_BUILD_TYPE
CMAKE_CXX_FLAGS
CUDA_NVCC_FLAGS
NO_CUDA
NO_NNPACK
THCS_LIBRARIES
THCUNN_LIBRARIES
THCUNN_SO_VERSION
THC_LIBRARIES
THC_SO_VERSION
THNN_LIBRARIES
THNN_SO_VERSION
THS_LIBRARIES
TH_INCLUDE_PATH
TH_LIBRARIES
TH_LIB_PATH
TH_SO_VERSION
Torch_FOUND
cwrap_files
-- Build files have been written to: C:/Users/parkj/pytorch-scripts/pytorch/torch/lib/build/nanopb
'msbuild' is not recognized as an internal or external command,
operable program or batch file.
-- Building for: Visual Studio 15 2017
-- The C compiler identification is MSVC 19.12.25830.2
-- The CXX compiler identification is MSVC 19.12.25830.2
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.12.25827/bin/Hostx86/x86/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.12.25827/bin/Hostx86/x86/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.12.25827/bin/Hostx86/x86/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.12.25827/bin/Hostx86/x86/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Configuring done
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:
ATEN_LIBRARIES
CMAKE_BUILD_TYPE
CUDA_NVCC_FLAGS
NO_CUDA
NO_NNPACK
THCS_LIBRARIES
THCUNN_LIBRARIES
THCUNN_SO_VERSION
THC_LIBRARIES
THC_SO_VERSION
THNN_LIBRARIES
THNN_SO_VERSION
THS_LIBRARIES
TH_INCLUDE_PATH
TH_LIB_PATH
TH_SO_VERSION
Torch_FOUND
cwrap_files
nanopb_BUILD_GENERATOR
-- Build files have been written to: C:/Users/parkj/pytorch-scripts/pytorch/torch/lib/build/libshm_windows
'msbuild' is not recognized as an internal or external command,
operable program or batch file.
The system cannot find the path specified.
File not found - *.*
0 File(s) copied
..\..\aten\src\THNN\generic\THNN.h
1 File(s) copied
..\..\aten\src\THCUNN\generic\THCUNN.h
1 File(s) copied
running build
running build_py
-- Building version 0.4.0a0+a3e9151
creating build
creating build\lib.win-amd64-3.6
creating build\lib.win-amd64-3.6\torch
copying torch\functional.py -> build\lib.win-amd64-3.6\torch
copying torch\random.py -> build\lib.win-amd64-3.6\torch
copying torch\serialization.py -> build\lib.win-amd64-3.6\torch
copying torch\storage.py -> build\lib.win-amd64-3.6\torch
copying torch\tensor.py -> build\lib.win-amd64-3.6\torch
copying torch\version.py -> build\lib.win-amd64-3.6\torch
copying torch\_six.py -> build\lib.win-amd64-3.6\torch
copying torch\_storage_docs.py -> build\lib.win-amd64-3.6\torch
copying torch\_tensor_docs.py -> build\lib.win-amd64-3.6\torch
copying torch\_tensor_str.py -> build\lib.win-amd64-3.6\torch
copying torch\_torch_docs.py -> build\lib.win-amd64-3.6\torch
copying torch\_utils.py -> build\lib.win-amd64-3.6\torch
copying torch\__init__.py -> build\lib.win-amd64-3.6\torch
creating build\lib.win-amd64-3.6\torch\autograd
copying torch\autograd\function.py -> build\lib.win-amd64-3.6\torch\autograd
copying torch\autograd\gradcheck.py -> build\lib.win-amd64-3.6\torch\autograd
copying torch\autograd\grad_mode.py -> build\lib.win-amd64-3.6\torch\autograd
copying torch\autograd\profiler.py -> build\lib.win-amd64-3.6\torch\autograd
copying torch\autograd\variable.py -> build\lib.win-amd64-3.6\torch\autograd
copying torch\autograd\__init__.py -> build\lib.win-amd64-3.6\torch\autograd
creating build\lib.win-amd64-3.6\torch\backends
copying torch\backends\__init__.py -> build\lib.win-amd64-3.6\torch\backends
creating build\lib.win-amd64-3.6\torch\contrib
copying torch\contrib\_graph_vis.py -> build\lib.win-amd64-3.6\torch\contrib
copying torch\contrib\__init__.py -> build\lib.win-amd64-3.6\torch\contrib
creating build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\comm.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\error.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\nccl.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\nvtx.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\profiler.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\random.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\sparse.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\streams.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\__init__.py -> build\lib.win-amd64-3.6\torch\cuda
creating build\lib.win-amd64-3.6\torch\distributed
copying torch\distributed\remote_types.py -> build\lib.win-amd64-3.6\torch\distributed
copying torch\distributed\__init__.py -> build\lib.win-amd64-3.6\torch\distributed
creating build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\bernoulli.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\beta.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\categorical.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\cauchy.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\chi2.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\constraints.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\dirichlet.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\distribution.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\exponential.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\gamma.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\laplace.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\normal.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\one_hot_categorical.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\pareto.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\uniform.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\utils.py -> build\lib.win-amd64-3.6\torch\distributions
copying torch\distributions\__init__.py -> build\lib.win-amd64-3.6\torch\distributions
creating build\lib.win-amd64-3.6\torch\for_onnx
copying torch\for_onnx\__init__.py -> build\lib.win-amd64-3.6\torch\for_onnx
creating build\lib.win-amd64-3.6\torch\jit
copying torch\jit\__init__.py -> build\lib.win-amd64-3.6\torch\jit
creating build\lib.win-amd64-3.6\torch\legacy
copying torch\legacy\__init__.py -> build\lib.win-amd64-3.6\torch\legacy
creating build\lib.win-amd64-3.6\torch\multiprocessing
copying torch\multiprocessing\pool.py -> build\lib.win-amd64-3.6\torch\multiprocessing
copying torch\multiprocessing\queue.py -> build\lib.win-amd64-3.6\torch\multiprocessing
copying torch\multiprocessing\reductions.py -> build\lib.win-amd64-3.6\torch\multiprocessing
copying torch\multiprocessing\__init__.py -> build\lib.win-amd64-3.6\torch\multiprocessing
creating build\lib.win-amd64-3.6\torch\nn
copying torch\nn\functional.py -> build\lib.win-amd64-3.6\torch\nn
copying torch\nn\init.py -> build\lib.win-amd64-3.6\torch\nn
copying torch\nn\parameter.py -> build\lib.win-amd64-3.6\torch\nn
copying torch\nn\__init__.py -> build\lib.win-amd64-3.6\torch\nn
creating build\lib.win-amd64-3.6\torch\onnx
copying torch\onnx\symbolic.py -> build\lib.win-amd64-3.6\torch\onnx
copying torch\onnx\__init__.py -> build\lib.win-amd64-3.6\torch\onnx
creating build\lib.win-amd64-3.6\torch\optim
copying torch\optim\adadelta.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\adagrad.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\adam.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\adamax.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\asgd.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\lbfgs.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\lr_scheduler.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\optimizer.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\rmsprop.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\rprop.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\sgd.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\sparse_adam.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\__init__.py -> build\lib.win-amd64-3.6\torch\optim
creating build\lib.win-amd64-3.6\torch\sparse
copying torch\sparse\__init__.py -> build\lib.win-amd64-3.6\torch\sparse
creating build\lib.win-amd64-3.6\torch\utils
copying torch\utils\dlpack.py -> build\lib.win-amd64-3.6\torch\utils
copying torch\utils\hooks.py -> build\lib.win-amd64-3.6\torch\utils
copying torch\utils\model_zoo.py -> build\lib.win-amd64-3.6\torch\utils
copying torch\utils\__init__.py -> build\lib.win-amd64-3.6\torch\utils
creating build\lib.win-amd64-3.6\torch\_thnn
copying torch\_thnn\utils.py -> build\lib.win-amd64-3.6\torch\_thnn
copying torch\_thnn\__init__.py -> build\lib.win-amd64-3.6\torch\_thnn
creating build\lib.win-amd64-3.6\torch\autograd\_functions
copying torch\autograd\_functions\basic_ops.py -> build\lib.win-amd64-3.6\torch\autograd\_functions
copying torch\autograd\_functions\tensor.py -> build\lib.win-amd64-3.6\torch\autograd\_functions
copying torch\autograd\_functions\utils.py -> build\lib.win-amd64-3.6\torch\autograd\_functions
copying torch\autograd\_functions\__init__.py -> build\lib.win-amd64-3.6\torch\autograd\_functions
creating build\lib.win-amd64-3.6\torch\backends\cudnn
copying torch\backends\cudnn\rnn.py -> build\lib.win-amd64-3.6\torch\backends\cudnn
copying torch\backends\cudnn\__init__.py -> build\lib.win-amd64-3.6\torch\backends\cudnn
creating build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Abs.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\AbsCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Add.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\AddConstant.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\BatchNormalization.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\BCECriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Bilinear.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CAddTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CDivTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Clamp.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ClassNLLCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ClassSimplexCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CMul.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CMulTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Concat.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ConcatTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Container.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Contiguous.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Copy.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Cosine.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CosineDistance.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CosineEmbeddingCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Criterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CriterionTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CrossEntropyCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CSubTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\DepthConcat.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\DistKLDivCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\DotProduct.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Dropout.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ELU.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Euclidean.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Exp.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\FlattenTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\GradientReversal.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\HardShrink.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\HardTanh.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\HingeEmbeddingCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Identity.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Index.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\JoinTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\L1Cost.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\L1HingeEmbeddingCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\L1Penalty.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\LeakyReLU.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Linear.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Log.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\LogSigmoid.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\LogSoftMax.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\LookupTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MarginCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MarginRankingCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MaskedSelect.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Max.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Mean.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Min.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MixtureTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MM.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Module.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MSECriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Mul.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MulConstant.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MultiCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MultiLabelMarginCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MultiLabelSoftMarginCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MultiMarginCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MV.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Narrow.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\NarrowTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Normalize.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Padding.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\PairwiseDistance.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Parallel.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ParallelCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ParallelTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\PartialLinear.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Power.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\PReLU.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ReLU.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ReLU6.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Replicate.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Reshape.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\RReLU.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Select.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SelectTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Sequential.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Sigmoid.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SmoothL1Criterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SoftMarginCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SoftMax.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SoftMin.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SoftPlus.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SoftShrink.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SoftSign.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialAdaptiveMaxPooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialAveragePooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialBatchNormalization.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialClassNLLCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialContrastiveNormalization.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialConvolution.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialConvolutionLocal.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialConvolutionMap.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialCrossMapLRN.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialDilatedConvolution.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialDivisiveNormalization.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialDropout.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialFractionalMaxPooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialFullConvolution.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialFullConvolutionMap.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialLPPooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialMaxPooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialMaxUnpooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialReflectionPadding.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialReplicationPadding.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialSoftMax.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialSubSampling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialSubtractiveNormalization.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialUpSamplingNearest.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialZeroPadding.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SplitTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Sqrt.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Square.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Squeeze.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Sum.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Tanh.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\TanhShrink.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\TemporalConvolution.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\TemporalMaxPooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\TemporalSubSampling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Threshold.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Transpose.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Unsqueeze.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\utils.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\View.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricAveragePooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricBatchNormalization.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricConvolution.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricDropout.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricFullConvolution.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricMaxPooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricMaxUnpooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricReplicationPadding.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\WeightedEuclidean.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\WeightedMSECriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\__init__.py -> build\lib.win-amd64-3.6\torch\legacy\nn
creating build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\adadelta.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\adagrad.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\adam.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\adamax.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\asgd.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\cg.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\lbfgs.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\nag.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\rmsprop.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\rprop.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\sgd.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\__init__.py -> build\lib.win-amd64-3.6\torch\legacy\optim
creating build\lib.win-amd64-3.6\torch\nn\backends
copying torch\nn\backends\backend.py -> build\lib.win-amd64-3.6\torch\nn\backends
copying torch\nn\backends\thnn.py -> build\lib.win-amd64-3.6\torch\nn\backends
copying torch\nn\backends\__init__.py -> build\lib.win-amd64-3.6\torch\nn\backends
creating build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\activation.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\batchnorm.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\container.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\conv.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\distance.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\dropout.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\instancenorm.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\linear.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\loss.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\module.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\normalization.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\padding.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\pixelshuffle.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\pooling.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\rnn.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\sparse.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\upsampling.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\utils.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\__init__.py -> build\lib.win-amd64-3.6\torch\nn\modules
creating build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\data_parallel.py -> build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\distributed.py -> build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\parallel_apply.py -> build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\replicate.py -> build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\scatter_gather.py -> build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\_functions.py -> build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\__init__.py -> build\lib.win-amd64-3.6\torch\nn\parallel
creating build\lib.win-amd64-3.6\torch\nn\utils
copying torch\nn\utils\clip_grad.py -> build\lib.win-amd64-3.6\torch\nn\utils
copying torch\nn\utils\convert_parameters.py -> build\lib.win-amd64-3.6\torch\nn\utils
copying torch\nn\utils\rnn.py -> build\lib.win-amd64-3.6\torch\nn\utils
copying torch\nn\utils\weight_norm.py -> build\lib.win-amd64-3.6\torch\nn\utils
copying torch\nn\utils\__init__.py -> build\lib.win-amd64-3.6\torch\nn\utils
creating build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\dropout.py -> build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\linear.py -> build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\loss.py -> build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\padding.py -> build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\rnn.py -> build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\vision.py -> build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\__init__.py -> build\lib.win-amd64-3.6\torch\nn\_functions
creating build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\auto.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\auto_double_backwards.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\auto_symbolic.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\normalization.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\rnnFusedPointwise.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\sparse.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\__init__.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
creating build\lib.win-amd64-3.6\torch\utils\backcompat
copying torch\utils\backcompat\__init__.py -> build\lib.win-amd64-3.6\torch\utils\backcompat
creating build\lib.win-amd64-3.6\torch\utils\data
copying torch\utils\data\dataloader.py -> build\lib.win-amd64-3.6\torch\utils\data
copying torch\utils\data\dataset.py -> build\lib.win-amd64-3.6\torch\utils\data
copying torch\utils\data\distributed.py -> build\lib.win-amd64-3.6\torch\utils\data
copying torch\utils\data\sampler.py -> build\lib.win-amd64-3.6\torch\utils\data
copying torch\utils\data\__init__.py -> build\lib.win-amd64-3.6\torch\utils\data
creating build\lib.win-amd64-3.6\torch\utils\ffi
copying torch\utils\ffi\__init__.py -> build\lib.win-amd64-3.6\torch\utils\ffi
creating build\lib.win-amd64-3.6\torch\utils\serialization
copying torch\utils\serialization\read_lua_file.py -> build\lib.win-amd64-3.6\torch\utils\serialization
copying torch\utils\serialization\__init__.py -> build\lib.win-amd64-3.6\torch\utils\serialization
creating build\lib.win-amd64-3.6\torch\utils\trainer
copying torch\utils\trainer\trainer.py -> build\lib.win-amd64-3.6\torch\utils\trainer
copying torch\utils\trainer\__init__.py -> build\lib.win-amd64-3.6\torch\utils\trainer
creating build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\accuracy.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\logger.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\loss.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\monitor.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\plugin.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\progress.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\time.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\__init__.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
creating build\lib.win-amd64-3.6\torch\lib
copying torch\lib\THCUNN.h -> build\lib.win-amd64-3.6\torch\lib
copying torch\lib\THNN.h -> build\lib.win-amd64-3.6\torch\lib
running build_ext
-- Building with NumPy bindings
-- Detected cuDNN at C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0\lib/x64, C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0\include
-- Detected CUDA at C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0
-- Not using NCCL
-- Building without distributed package
error: [Errno 2] No such file or directory: 'torch/lib/tmp_install/share/ATen/Declarations.yaml'
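This final error typically means the ATen sub-build failed (or was skipped) earlier in `build_deps`, so the generated `Declarations.yaml` was never written to `tmp_install`. A minimal sketch of a sanity check before rerunning `setup.py` — the path is taken from the error message above; whether the file exists on your machine depends on whether the ATen build step completed:

```python
# Hedged diagnostic sketch: check whether the ATen build produced the
# generated declarations file that build_ext expects. If it is missing,
# the fix is usually to scroll up for the first ATen/CMake failure and
# resolve that, then rerun the build from a clean tree.
from pathlib import Path

decl = Path("torch/lib/tmp_install/share/ATen/Declarations.yaml")

if decl.exists():
    print("present")   # ATen codegen ran; the failure lies elsewhere
else:
    print("missing")   # ATen sub-build did not complete; rerun build_deps
```

Run from the PyTorch source root; in a tree where the log above was produced, this prints `missing`.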