@magic282
Last active December 5, 2017 11:39
peterjc123/pytorch-scripts
D:\pytorch\pytorch-scripts>cuda9
D:\pytorch\pytorch-scripts>REM @echo off
D:\pytorch\pytorch-scripts>IF NOT EXIST "setup.py" IF NOT EXIST "pytorch" (
call internal\clone.bat
IF ERRORLEVEL 1 goto eof
)
D:\pytorch\pytorch-scripts>REM @echo off
D:\pytorch\pytorch-scripts>git clone --recursive https://github.com/pytorch/pytorch
Cloning into 'pytorch'...
remote: Counting objects: 46135, done.
remote: Compressing objects: 100% (28/28), done.
remote: Total 46135 (delta 13), reused 11 (delta 4), pack-reused 46103
Receiving objects: 100% (46135/46135), 18.17 MiB | 16.42 MiB/s, done.
Resolving deltas: 100% (34909/34909), done.
Submodule 'torch/lib/gloo' (https://github.com/facebookincubator/gloo) registered for path 'torch/lib/gloo'
Submodule 'torch/lib/nanopb' (https://github.com/nanopb/nanopb.git) registered for path 'torch/lib/nanopb'
Submodule 'torch/lib/pybind11' (https://github.com/pybind/pybind11) registered for path 'torch/lib/pybind11'
Cloning into 'D:/pytorch/pytorch-scripts/pytorch/torch/lib/gloo'...
remote: Counting objects: 2000, done.
remote: Total 2000 (delta 0), reused 0 (delta 0), pack-reused 2000
Receiving objects: 100% (2000/2000), 583.43 KiB | 15.35 MiB/s, done.
Resolving deltas: 100% (1505/1505), done.
Cloning into 'D:/pytorch/pytorch-scripts/pytorch/torch/lib/nanopb'...
remote: Counting objects: 4388, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 4388 (delta 0), reused 1 (delta 0), pack-reused 4384
Receiving objects: 100% (4388/4388), 1015.03 KiB | 13.01 MiB/s, done.
Resolving deltas: 100% (2873/2873), done.
Cloning into 'D:/pytorch/pytorch-scripts/pytorch/torch/lib/pybind11'...
remote: Counting objects: 9523, done.
remote: Compressing objects: 100% (25/25), done.
remote: Total 9523 (delta 8), reused 17 (delta 8), pack-reused 9487
Receiving objects: 100% (9523/9523), 3.37 MiB | 17.32 MiB/s, done.
Resolving deltas: 100% (6423/6423), done.
Submodule path 'torch/lib/gloo': checked out '05ad98aeb66fabc7c8126e6068d4a70134d4b80d'
Submodule 'third-party/googletest' (https://github.com/google/googletest.git) registered for path 'torch/lib/gloo/third-party/googletest'
Cloning into 'D:/pytorch/pytorch-scripts/pytorch/torch/lib/gloo/third-party/googletest'...
remote: Counting objects: 9186, done.
remote: Total 9186 (delta 0), reused 1 (delta 0), pack-reused 9185
Receiving objects: 100% (9186/9186), 2.81 MiB | 15.78 MiB/s, done.
Resolving deltas: 100% (6793/6793), done.
Submodule path 'torch/lib/gloo/third-party/googletest': checked out 'ec44c6c1675c25b9827aacd08c02433cccde7780'
Submodule path 'torch/lib/nanopb': checked out '14efb1a47a496652ab08b1ebcefb0ea24ae4a5e4'
Submodule path 'torch/lib/pybind11': checked out '9f6a636e547fc70a02fa48436449aad67080698f'
Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'torch/lib/pybind11/tools/clang'
Cloning into 'D:/pytorch/pytorch-scripts/pytorch/torch/lib/pybind11/tools/clang'...
remote: Counting objects: 353, done.
remote: Total 353 (delta 0), reused 0 (delta 0), pack-reused 353
Receiving objects: 100% (353/353), 119.74 KiB | 9.98 MiB/s, done.
Resolving deltas: 100% (149/149), done.
Submodule path 'torch/lib/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5'
D:\pytorch\pytorch-scripts>cd pytorch
D:\pytorch\pytorch-scripts\pytorch>xcopy /Y aten\src\ATen\common_with_cwrap.py tools\shared\cwrap_common.py
aten\src\ATen\common_with_cwrap.py
1 File(s) copied
D:\pytorch\pytorch-scripts\pytorch>REM Before the merge of PR 3757
D:\pytorch\pytorch-scripts\pytorch>mkdir torch\lib\build\ATen\src\ATen
D:\pytorch\pytorch-scripts\pytorch>cd torch\lib\build\ATen\src\ATen
D:\pytorch\pytorch-scripts\pytorch\torch\lib\build\ATen\src\ATen>mkdir ATen
D:\pytorch\pytorch-scripts\pytorch\torch\lib\build\ATen\src\ATen>python ../../../../../../aten/src/ATen/gen.py -s ../../../../../../aten/src/ATen ../../../../../../aten/src/ATen/Declarations.cwrap ../../../../../../aten/src/THNN/generic/THNN.h ../../../../../../aten/src/THCUNN/generic/THCUNN.h ../../../../../../aten/src/ATen/nn.yaml ../../../../../../aten/src/ATen/native/native_functions.yaml
ATen Excluded: {'bernoulli_', 'bernoulli'}
D:\pytorch\pytorch-scripts\pytorch\torch\lib\build\ATen\src\ATen>cd ../../../../../../..
D:\pytorch\pytorch-scripts>call internal\check_deps.bat
D:\pytorch\pytorch-scripts>REM @echo off
D:\pytorch\pytorch-scripts>REM Check for necessary components
D:\pytorch\pytorch-scripts>IF NOT "AMD64" == "AMD64" (
echo You should use 64 bits Windows to build and run PyTorch
exit /b 1
)
D:\pytorch\pytorch-scripts>where /q cmake.exe
D:\pytorch\pytorch-scripts>IF ERRORLEVEL 1 (
echo CMake is required to compile PyTorch on Windows
exit /b 1
)
D:\pytorch\pytorch-scripts>IF NOT EXIST "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" (
echo Visual Studio 2017 C++ BuildTools is required to compile PyTorch on Windows
exit /b 1
)
D:\pytorch\pytorch-scripts>for /F "usebackq tokens=*" %i in (`"C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -legacy -version [15,16) -property installationPath`) do (if exist "%i" if exist "%i\VC\Auxiliary\Build\vcvarsall.bat" (
set VS15INSTALLDIR=%i
set VS15VCVARSALL=%i\VC\Auxiliary\Build\vcvarsall.bat
goto vswhere
) )
D:\pytorch\pytorch-scripts>(if exist "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community" if exist "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat" (
set VS15INSTALLDIR=C:\Program Files (x86)\Microsoft Visual Studio\2017\Community
set VS15VCVARSALL=C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat
goto vswhere
) )
D:\pytorch\pytorch-scripts>IF "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat" == "" (
echo Visual Studio 2017 C++ BuildTools is required to compile PyTorch on Windows
exit /b 1
)
D:\pytorch\pytorch-scripts>IF NOT "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat" == "" IF NOT "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\" == "" (set DISTUTILS_USE_SDK=1 )
D:\pytorch\pytorch-scripts>where /q python.exe
D:\pytorch\pytorch-scripts>IF ERRORLEVEL 1 (
echo Python x64 3.5 or up is required to compile PyTorch on Windows
exit /b 1
)
D:\pytorch\pytorch-scripts>for /F "usebackq delims=" %i in (`python -c "import sys; print('{0[0]}{0[1]}'.format(sys.version_info))"`) do (set /a PYVER=%i )
D:\pytorch\pytorch-scripts>(set /a PYVER=36 )
D:\pytorch\pytorch-scripts>if 36 LSS 35 (
echo Python x64 3.5 or up is required to compile PyTorch on Windows
echo Maybe you can create a virtual environment if you have conda installed:
echo > conda create -n test python=3.6 pyyaml mkl numpy
echo > activate test
exit /b 1
)
D:\pytorch\pytorch-scripts>for /F "usebackq delims=" %i in (`python -c "import struct;print( 8 * struct.calcsize('P'))"`) do (set /a PYSIZE=%i )
D:\pytorch\pytorch-scripts>(set /a PYSIZE=64 )
D:\pytorch\pytorch-scripts>if 64 NEQ 64 (
echo Python x64 3.5 or up is required to compile PyTorch on Windows
exit /b 1
)
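The two interpreter probes above can be reproduced standalone; a rough POSIX-shell equivalent of what check_deps.bat computes (assuming a `python3` on PATH, where the batch file itself calls `python`):

```shell
# Mirror check_deps.bat's two probes: major+minor version as one number
# (e.g. 36 for Python 3.6), and pointer width in bits (64 for an x64 interpreter).
PYVER=$(python3 -c "import sys; print('{0[0]}{0[1]}'.format(sys.version_info))")
PYSIZE=$(python3 -c "import struct; print(8 * struct.calcsize('P'))")
echo "PYVER=$PYVER PYSIZE=$PYSIZE"
```

The batch file then fails fast if PYVER is below 35 or PYSIZE is not 64, which is exactly the branch logic echoed in the log above.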
D:\pytorch\pytorch-scripts>IF ERRORLEVEL 1 goto eof
D:\pytorch\pytorch-scripts>REM Check for optional components
D:\pytorch\pytorch-scripts>set NO_CUDA=
D:\pytorch\pytorch-scripts>set CMAKE_GENERATOR=Visual Studio 15 2017 Win64
not was unexpected at this time.
D:\pytorch\pytorch-scripts> echo NVTX (Visual Studio Extension for CUDA) not installed, disabling CUDA
D:\pytorch\pytorch-scripts>
**********************************************************************
** Visual Studio 2017 Developer Command Prompt v15.4.3
** Copyright (c) 2017 Microsoft Corporation
**********************************************************************
[vcvarsall.bat] Environment initialized for: 'x86_x64'
C:\Users\v-qizhou\source>d:
D:\>cd pytorch
D:\pytorch>git clone --recursive https://github.com/pytorch/pytorch
Cloning into 'pytorch'...
remote: Counting objects: 46135, done.
remote: Compressing objects: 100% (28/28), done.
remote: Total 46135 (delta 13), reused 11 (delta 4), pack-reused 46103
Receiving objects: 100% (46135/46135), 18.17 MiB | 20.54 MiB/s, done.
Resolving deltas: 100% (34909/34909), done.
Submodule 'torch/lib/gloo' (https://github.com/facebookincubator/gloo) registered for path 'torch/lib/gloo'
Submodule 'torch/lib/nanopb' (https://github.com/nanopb/nanopb.git) registered for path 'torch/lib/nanopb'
Submodule 'torch/lib/pybind11' (https://github.com/pybind/pybind11) registered for path 'torch/lib/pybind11'
Cloning into 'D:/pytorch/pytorch/torch/lib/gloo'...
remote: Counting objects: 2000, done.
remote: Total 2000 (delta 0), reused 0 (delta 0), pack-reused 2000
Receiving objects: 100% (2000/2000), 583.43 KiB | 15.77 MiB/s, done.
Resolving deltas: 100% (1505/1505), done.
Cloning into 'D:/pytorch/pytorch/torch/lib/nanopb'...
remote: Counting objects: 4388, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 4388 (delta 0), reused 1 (delta 0), pack-reused 4384
Receiving objects: 100% (4388/4388), 1015.03 KiB | 12.38 MiB/s, done.
Resolving deltas: 100% (2873/2873), done.
Cloning into 'D:/pytorch/pytorch/torch/lib/pybind11'...
remote: Counting objects: 9523, done.
remote: Compressing objects: 100% (25/25), done.
remote: Total 9523 (delta 8), reused 17 (delta 8), pack-reused 9487
Receiving objects: 100% (9523/9523), 3.37 MiB | 16.98 MiB/s, done.
Resolving deltas: 100% (6423/6423), done.
Submodule path 'torch/lib/gloo': checked out '05ad98aeb66fabc7c8126e6068d4a70134d4b80d'
Submodule 'third-party/googletest' (https://github.com/google/googletest.git) registered for path 'torch/lib/gloo/third-party/googletest'
Cloning into 'D:/pytorch/pytorch/torch/lib/gloo/third-party/googletest'...
remote: Counting objects: 9186, done.
remote: Total 9186 (delta 0), reused 1 (delta 0), pack-reused 9185
Receiving objects: 100% (9186/9186), 2.81 MiB | 15.78 MiB/s, done.
Resolving deltas: 100% (6793/6793), done.
Submodule path 'torch/lib/gloo/third-party/googletest': checked out 'ec44c6c1675c25b9827aacd08c02433cccde7780'
Submodule path 'torch/lib/nanopb': checked out '14efb1a47a496652ab08b1ebcefb0ea24ae4a5e4'
Submodule path 'torch/lib/pybind11': checked out '9f6a636e547fc70a02fa48436449aad67080698f'
Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'torch/lib/pybind11/tools/clang'
Cloning into 'D:/pytorch/pytorch/torch/lib/pybind11/tools/clang'...
remote: Counting objects: 353, done.
remote: Total 353 (delta 0), reused 0 (delta 0), pack-reused 353
Receiving objects: 100% (353/353), 119.74 KiB | 9.98 MiB/s, done.
Resolving deltas: 100% (149/149), done.
Submodule path 'torch/lib/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5'
D:\pytorch>cd pytorch
D:\pytorch\pytorch>xcopy /Y aten\src\ATen\common_with_cwrap.py tools\shared\cwrap_common.py
aten\src\ATen\common_with_cwrap.py
1 File(s) copied
D:\pytorch\pytorch>mkdir torch\lib\build\ATen\src\ATen
D:\pytorch\pytorch>cd torch\lib\build\ATen\src\ATen
D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen>mkdir ATen
D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen>python ../../../../../../aten/src/ATen/gen.py -s ../../../../../../aten/src/ATen ../../../../../../aten/src/ATen/Declarations.cwrap ../../../../../../aten/src/THNN/generic/THNN.h ../../../../../../aten/src/THCUNN/generic/THCUNN.h ../../../../../../aten/src/ATen/nn.yaml ../../../../../../aten/src/ATen/native/native_functions.yaml
ATen Excluded: {'bernoulli_', 'bernoulli'}
D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen>cd ../../../../../..
D:\pytorch\pytorch>set CMAKE_GENERATOR=Visual Studio 15 2017 Win64
D:\pytorch\pytorch>set DISTUTILS_USE_SDK=1
D:\pytorch\pytorch>python setup.py install
running install
running build_deps
A subdirectory or file build\ATen already exists.
-- Building for: Visual Studio 15 2017
-- The C compiler identification is MSVC 19.11.25547.0
-- The CXX compiler identification is MSVC 19.11.25547.0
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/bin/Hostx86/x86/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/bin/Hostx86/x86/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/bin/Hostx86/x86/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/bin/Hostx86/x86/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning (dev) at cmake/FindCUDA/FindCUDA.cmake:494 (if):
Policy CMP0054 is not set: Only interpret if() arguments as variables or
keywords when unquoted. Run "cmake --help-policy CMP0054" for policy
details. Use the cmake_policy command to set the policy and suppress this
warning.
Quoted variables like "MSVC" will no longer be dereferenced when the policy
is set to NEW. Since the policy is not set the OLD behavior will be used.
Call Stack (most recent call first):
CMakeLists.txt:42 (FIND_PACKAGE)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0 (found suitable version "9.0", minimum required is "5.5")
-- Automatic GPU detection failed. Building for common architectures.
-- Autodetected CUDA architecture(s): 3.0;3.5;5.0;5.2;6.0;6.1;6.1+PTX
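GPU auto-detection failed just above, so CMake falls back to generating device code for every common architecture, which multiplies nvcc compile time. When the target card is known, the list can be pinned before invoking the build; TORCH_CUDA_ARCH_LIST is the variable PyTorch's build scripts read for this, though whether this particular checkout honors it is an assumption. A hypothetical Windows-cmd sketch:

```shell
REM Assumption: this PyTorch revision reads TORCH_CUDA_ARCH_LIST.
REM Restrict device code to one generation (e.g. 6.1 for Pascal) to cut build time.
set TORCH_CUDA_ARCH_LIST=6.1
python setup.py install
```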
-- Found CUDA with FP16 support, compiling with torch.CudaHalfTensor
-- Removing -DNDEBUG from compile flags
CMake Warning (dev) at C:/Program Files/CMake/share/cmake-3.9/Modules/FindOpenMP.cmake:212 (if):
Policy CMP0054 is not set: Only interpret if() arguments as variables or
keywords when unquoted. Run "cmake --help-policy CMP0054" for policy
details. Use the cmake_policy command to set the policy and suppress this
warning.
Quoted variables like "MSVC" will no longer be dereferenced when the policy
is set to NEW. Since the policy is not set the OLD behavior will be used.
Call Stack (most recent call first):
C:/Program Files/CMake/share/cmake-3.9/Modules/FindOpenMP.cmake:324 (_OPENMP_GET_FLAGS)
CMakeLists.txt:127 (FIND_PACKAGE)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_C: -openmp (found version "2.0")
-- Found OpenMP_CXX: -openmp (found version "2.0")
-- Compiling with OpenMP support
-- MAGMA not found. Compiling without MAGMA support
-- Found CUDNN: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0/include
-- Found cuDNN: v7.0.3 (include: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0/include, library: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0/lib/x64/cudnn.lib)
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Looking for cpuid.h
-- Looking for cpuid.h - not found
-- Performing Test NO_GCC_EBX_FPIC_BUG
-- Performing Test NO_GCC_EBX_FPIC_BUG - Failed
-- Performing Test C_HAS_SSE1_1
-- Performing Test C_HAS_SSE1_1 - Success
-- Performing Test C_HAS_SSE2_1
-- Performing Test C_HAS_SSE2_1 - Success
-- Performing Test C_HAS_SSE3_1
-- Performing Test C_HAS_SSE3_1 - Success
-- Performing Test C_HAS_SSE4_1_1
-- Performing Test C_HAS_SSE4_1_1 - Success
-- Performing Test C_HAS_SSE4_2_1
-- Performing Test C_HAS_SSE4_2_1 - Success
-- Performing Test C_HAS_AVX_1
-- Performing Test C_HAS_AVX_1 - Success
-- Performing Test C_HAS_AVX2_1
-- Performing Test C_HAS_AVX2_1 - Failed
-- Performing Test C_HAS_AVX2_2
-- Performing Test C_HAS_AVX2_2 - Failed
-- Performing Test C_HAS_AVX2_3
-- Performing Test C_HAS_AVX2_3 - Failed
-- Performing Test CXX_HAS_SSE1_1
-- Performing Test CXX_HAS_SSE1_1 - Success
-- Performing Test CXX_HAS_SSE2_1
-- Performing Test CXX_HAS_SSE2_1 - Success
-- Performing Test CXX_HAS_SSE3_1
-- Performing Test CXX_HAS_SSE3_1 - Success
-- Performing Test CXX_HAS_SSE4_1_1
-- Performing Test CXX_HAS_SSE4_1_1 - Success
-- Performing Test CXX_HAS_SSE4_2_1
-- Performing Test CXX_HAS_SSE4_2_1 - Success
-- Performing Test CXX_HAS_AVX_1
-- Performing Test CXX_HAS_AVX_1 - Success
-- Performing Test CXX_HAS_AVX2_1
-- Performing Test CXX_HAS_AVX2_1 - Failed
-- Performing Test CXX_HAS_AVX2_2
-- Performing Test CXX_HAS_AVX2_2 - Failed
-- Performing Test CXX_HAS_AVX2_3
-- Performing Test CXX_HAS_AVX2_3 - Failed
-- SSE2 Found
-- SSE3 Found
-- AVX Found
-- Performing Test HAS_C11_ATOMICS
-- Performing Test HAS_C11_ATOMICS - Failed
-- Performing Test HAS_MSC_ATOMICS
-- Performing Test HAS_MSC_ATOMICS - Success
-- Performing Test HAS_GCC_ATOMICS
-- Performing Test HAS_GCC_ATOMICS - Failed
-- Atomics: using MSVC intrinsics
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl - guide - pthread - m]
-- Library mkl: not found
-- MKL library not found
-- Checking for [openblas]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [openblas - pthread]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [libopenblas]
-- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran - pthread]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [acml - gfortran]
-- Library acml: BLAS_acml_LIBRARY-NOTFOUND
-- Checking for [Accelerate]
-- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND
-- Checking for [vecLib]
-- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND
-- Checking for [ptf77blas - atlas - gfortran]
-- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND
-- Checking for [blas]
-- Library blas: BLAS_blas_LIBRARY-NOTFOUND
-- Cannot find a library with BLAS API. Not using BLAS.
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl - guide - pthread - m]
-- Library mkl: not found
-- MKL library not found
-- Checking for [openblas]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [openblas - pthread]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [libopenblas]
-- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran - pthread]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [acml - gfortran]
-- Library acml: BLAS_acml_LIBRARY-NOTFOUND
-- Checking for [Accelerate]
-- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND
-- Checking for [vecLib]
-- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND
-- Checking for [ptf77blas - atlas - gfortran]
-- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND
-- Checking for [blas]
-- Library blas: BLAS_blas_LIBRARY-NOTFOUND
-- Cannot find a library with BLAS API. Not using BLAS.
-- LAPACK requires BLAS
-- Cannot find a library with LAPACK API. Not using LAPACK.
CMake Deprecation Warning at src/ATen/CMakeLists.txt:7 (CMAKE_POLICY):
The OLD behavior for policy CMP0026 will be removed from a future version
of CMake.
The cmake-policies(7) manual explains that the OLD behaviors of all
policies are deprecated and that a policy should be set to OLD only under
specific short-term circumstances. Projects should be ported to the NEW
behavior and not rely on setting a policy to OLD.
-- Using python found in C:\Anaconda3\python.exe
-- Performing Test C_HAS_THREAD
-- Performing Test C_HAS_THREAD - Success
-- Configuring done
-- Generating done
-- Build files have been written to: D:/pytorch/pytorch/torch/lib/build/ATen
Microsoft (R) Build Engine version 15.4.8.50001 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.
Build started 12/5/2017 3:37:51 AM.
Project "D:\pytorch\pytorch\torch\lib\build\ATen\INSTALL.vcxproj" on node 1 (default targets).
Project "D:\pytorch\pytorch\torch\lib\build\ATen\INSTALL.vcxproj" (1) is building "D:\pytorch\pytorch\torch\lib\build\ATen\ZERO_CHECK.vcxproj" (2) on node 1 (default targets).
PrepareForBuild:
Creating directory "Win32\Release\ZERO_CHECK\".
Creating directory "D:\pytorch\pytorch\torch\lib\build\ATen\Release\".
Creating directory "Win32\Release\ZERO_CHECK\ZERO_CHECK.tlog\".
InitializeBuildStatus:
Creating "Win32\Release\ZERO_CHECK\ZERO_CHECK.tlog\unsuccessfulbuild" because "AlwaysCreate" was specified.
CustomBuild:
Checking Build System
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/ATen/CMakeFiles/generate.stamp is up-to-date.
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/ATen/src/TH/CMakeFiles/generate.stamp is up-to-date.
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/ATen/src/THNN/CMakeFiles/generate.stamp is up-to-date.
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/ATen/src/THS/CMakeFiles/generate.stamp is up-to-date.
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/ATen/src/THC/CMakeFiles/generate.stamp is up-to-date.
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/ATen/src/THCUNN/CMakeFiles/generate.stamp is up-to-date.
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/ATen/src/THCS/CMakeFiles/generate.stamp is up-to-date.
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/ATen/src/ATen/CMakeFiles/generate.stamp is up-to-date.
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/ATen/src/ATen/test/CMakeFiles/generate.stamp is up-to-date.
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/ATen/contrib/data/CMakeFiles/generate.stamp is up-to-date.
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/ATen/contrib/meter/CMakeFiles/generate.stamp is up-to-date.
FinalizeBuildStatus:
Deleting file "Win32\Release\ZERO_CHECK\ZERO_CHECK.tlog\unsuccessfulbuild".
Touching "Win32\Release\ZERO_CHECK\ZERO_CHECK.tlog\ZERO_CHECK.lastbuildstate".
Done Building Project "D:\pytorch\pytorch\torch\lib\build\ATen\ZERO_CHECK.vcxproj" (default targets).
Project "D:\pytorch\pytorch\torch\lib\build\ATen\INSTALL.vcxproj" (1) is building "D:\pytorch\pytorch\torch\lib\build\ATen\ALL_BUILD.vcxproj" (3) on node 1 (default targets).
Project "D:\pytorch\pytorch\torch\lib\build\ATen\ALL_BUILD.vcxproj" (3) is building "D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\ATen.vcxproj" (4) on node 1 (default targets).
Project "D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\ATen.vcxproj" (4) is building "D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\aten_files_are_generated.vcxproj" (5) on node 1 (default targets).
PrepareForBuild:
Creating directory "Win32\Release\aten_files_are_generated\".
Creating directory "D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\Release\".
Creating directory "Win32\Release\aten_files_are_generated\aten_fil.DA4C4DD4.tlog\".
InitializeBuildStatus:
Creating "Win32\Release\aten_files_are_generated\aten_fil.DA4C4DD4.tlog\unsuccessfulbuild" because "AlwaysCreate" was specified.
CustomBuild:
Building Custom Rule D:/pytorch/pytorch/aten/src/ATen/CMakeLists.txt
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/ATen/src/ATen/CMakeFiles/generate.stamp is up-to-date.
Generating ATen/CPUByteStorage.cpp, ATen/CPUByteStorage.h, ATen/CPUByteTensor.cpp, ATen/CPUByteTensor.h, ATen/CPUByteType.cpp, ATen/CPUByteType.h,
ATen/CPUCharStorage.cpp, ATen/CPUCharStorage.h, ATen/CPUCharTensor.cpp, ATen/CPUCharTensor.h, ATen/CPUCharType.cpp, ATen/CPUCharType.h,
ATen/CPUDoubleStorage.cpp, ATen/CPUDoubleStorage.h, ATen/CPUDoubleTensor.cpp, ATen/CPUDoubleTensor.h, ATen/CPUDoubleType.cpp, ATen/CPUDoubleType.h,
ATen/CPUFloatStorage.cpp, ATen/CPUFloatStorage.h, ATen/CPUFloatTensor.cpp, ATen/CPUFloatTensor.h, ATen/CPUFloatType.cpp, ATen/CPUFloatType.h, ATen/CPUGenerator.h,
ATen/CPUHalfStorage.cpp, ATen/CPUHalfStorage.h, ATen/CPUHalfTensor.cpp, ATen/CPUHalfTensor.h, ATen/CPUHalfType.cpp, ATen/CPUHalfType.h,
ATen/CPUIntStorage.cpp, ATen/CPUIntStorage.h, ATen/CPUIntTensor.cpp, ATen/CPUIntTensor.h, ATen/CPUIntType.cpp, ATen/CPUIntType.h,
ATen/CPULongStorage.cpp, ATen/CPULongStorage.h, ATen/CPULongTensor.cpp, ATen/CPULongTensor.h, ATen/CPULongType.cpp, ATen/CPULongType.h,
ATen/CPUShortStorage.cpp, ATen/CPUShortStorage.h, ATen/CPUShortTensor.cpp, ATen/CPUShortTensor.h, ATen/CPUShortType.cpp, ATen/CPUShortType.h,
ATen/CUDAByteStorage.cpp, ATen/CUDAByteStorage.h, ATen/CUDAByteTensor.cpp, ATen/CUDAByteTensor.h, ATen/CUDAByteType.cpp, ATen/CUDAByteType.h,
ATen/CUDACharStorage.cpp, ATen/CUDACharStorage.h, ATen/CUDACharTensor.cpp, ATen/CUDACharTensor.h, ATen/CUDACharType.cpp, ATen/CUDACharType.h,
ATen/CUDADoubleStorage.cpp, ATen/CUDADoubleStorage.h, ATen/CUDADoubleTensor.cpp, ATen/CUDADoubleTensor.h, ATen/CUDADoubleType.cpp, ATen/CUDADoubleType.h,
ATen/CUDAFloatStorage.cpp, ATen/CUDAFloatStorage.h, ATen/CUDAFloatTensor.cpp, ATen/CUDAFloatTensor.h, ATen/CUDAFloatType.cpp, ATen/CUDAFloatType.h, ATen/CUDAGenerator.h,
ATen/CUDAHalfStorage.cpp, ATen/CUDAHalfStorage.h, ATen/CUDAHalfTensor.cpp, ATen/CUDAHalfTensor.h, ATen/CUDAHalfType.cpp, ATen/CUDAHalfType.h,
ATen/CUDAIntStorage.cpp, ATen/CUDAIntStorage.h, ATen/CUDAIntTensor.cpp, ATen/CUDAIntTensor.h, ATen/CUDAIntType.cpp, ATen/CUDAIntType.h,
ATen/CUDALongStorage.cpp, ATen/CUDALongStorage.h, ATen/CUDALongTensor.cpp, ATen/CUDALongTensor.h, ATen/CUDALongType.cpp, ATen/CUDALongType.h,
ATen/CUDAShortStorage.cpp, ATen/CUDAShortStorage.h, ATen/CUDAShortTensor.cpp, ATen/CUDAShortTensor.h, ATen/CUDAShortType.cpp, ATen/CUDAShortType.h,
ATen/Copy.cpp, ATen/Declarations.yaml, ATen/Dispatch.h, ATen/Functions.h, ATen/NativeFunctions.h,
ATen/SparseCPUByteTensor.cpp, ATen/SparseCPUByteTensor.h, ATen/SparseCPUByteType.cpp, ATen/SparseCPUByteType.h,
ATen/SparseCPUCharTensor.cpp, ATen/SparseCPUCharTensor.h, ATen/SparseCPUCharType.cpp, ATen/SparseCPUCharType.h,
ATen/SparseCPUDoubleTensor.cpp, ATen/SparseCPUDoubleTensor.h, ATen/SparseCPUDoubleType.cpp, ATen/SparseCPUDoubleType.h,
ATen/SparseCPUFloatTensor.cpp, ATen/SparseCPUFloatTensor.h, ATen/SparseCPUFloatType.cpp, ATen/SparseCPUFloatType.h,
ATen/SparseCPUIntTensor.cpp, ATen/SparseCPUIntTensor.h, ATen/SparseCPUIntType.cpp, ATen/SparseCPUIntType.h,
ATen/SparseCPULongTensor.cpp, ATen/SparseCPULongTensor.h, ATen/SparseCPULongType.cpp, ATen/SparseCPULongType.h,
ATen/SparseCPUShortTensor.cpp, ATen/SparseCPUShortTensor.h, ATen/SparseCPUShortType.cpp, ATen/SparseCPUShortType.h,
ATen/SparseCUDAByteTensor.cpp, ATen/SparseCUDAByteTensor.h, ATen/SparseCUDAByteType.cpp, ATen/SparseCUDAByteType.h,
ATen/SparseCUDACharTensor.cpp, ATen/SparseCUDACharTensor.h, ATen/SparseCUDACharType.cpp, ATen/SparseCUDACharType.h,
ATen/SparseCUDADoubleTensor.cpp, ATen/SparseCUDADoubleTensor.h, ATen/SparseCUDADoubleType.cpp, ATen/SparseCUDADoubleType.h,
ATen/SparseCUDAFloatTensor.cpp, ATen/SparseCUDAFloatTensor.h, ATen/SparseCUDAFloatType.cpp, ATen/SparseCUDAFloatType.h,
ATen/SparseCUDAIntTensor.cpp, ATen/SparseCUDAIntTensor.h, ATen/SparseCUDAIntType.cpp, ATen/SparseCUDAIntType.h,
ATen/SparseCUDALongTensor.cpp, ATen/SparseCUDALongTensor.h, ATen/SparseCUDALongType.cpp, ATen/SparseCUDALongType.h,
ATen/SparseCUDAShortTensor.cpp, ATen/SparseCUDAShortTensor.h, ATen/SparseCUDAShortType.cpp, ATen/SparseCUDAShortType.h,
ATen/Tensor.h, ATen/TensorMethods.h, ATen/Type.cpp, ATen/Type.h
ATen Excluded: {'bernoulli_', 'bernoulli'}
FinalizeBuildStatus:
Deleting file "Win32\Release\aten_files_are_generated\aten_fil.DA4C4DD4.tlog\unsuccessfulbuild".
Touching "Win32\Release\aten_files_are_generated\aten_fil.DA4C4DD4.tlog\aten_files_are_generated.lastbuildstate".
Done Building Project "D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\aten_files_are_generated.vcxproj" (default targets).
PrepareForBuild:
Creating directory "ATen.dir\Release\".
Creating directory "ATen.dir\Release\ATen.tlog\".
InitializeBuildStatus:
Creating "ATen.dir\Release\ATen.tlog\unsuccessfulbuild" because "AlwaysCreate" was specified.
ComputeCustomBuildOutput:
Creating directory "D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\CMakeFiles\ATen.dir\__\THC\Release\".
Creating directory "D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\CMakeFiles\ATen.dir\__\THC\generated\Release\".
Creating directory "D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\CMakeFiles\ATen.dir\__\THCUNN\Release\".
Creating directory "D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\CMakeFiles\ATen.dir\__\THCS\Release\".
Creating directory "D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\CMakeFiles\ATen.dir\native\cuda\Release\".
CustomBuild:
Building Custom Rule D:/pytorch/pytorch/aten/src/ATen/CMakeLists.txt
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/ATen/src/ATen/CMakeFiles/generate.stamp is up-to-date.
Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/Release/ATen_generated_THCReduceApplyUtils.cu.obj
nvcc fatal : 32 bit compilation is only supported for Microsoft Visual Studio 2013 and earlier
CMake Error at ATen_generated_THCReduceApplyUtils.cu.obj.cmake:207 (message):
Error generating
D:/pytorch/pytorch/torch/lib/build/ATen/src/ATen/CMakeFiles/ATen.dir/__/THC/Release/ATen_generated_THCReduceApplyUtils.cu.obj
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\VC\VCTargets\Microsoft.CppCommon.targets(171,5): error MSB6006: "cmd.exe" exited with code 1. [D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\ATen.vcxproj]
Done Building Project "D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\ATen.vcxproj" (default targets) -- FAILED.
Done Building Project "D:\pytorch\pytorch\torch\lib\build\ATen\ALL_BUILD.vcxproj" (default targets) -- FAILED.
Done Building Project "D:\pytorch\pytorch\torch\lib\build\ATen\INSTALL.vcxproj" (default targets) -- FAILED.
Build FAILED.
"D:\pytorch\pytorch\torch\lib\build\ATen\INSTALL.vcxproj" (default target) (1) ->
"D:\pytorch\pytorch\torch\lib\build\ATen\ALL_BUILD.vcxproj" (default target) (3) ->
"D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\ATen.vcxproj" (default target) (4) ->
(CustomBuild target) ->
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\VC\VCTargets\Microsoft.CppCommon.targets(171,5): error MSB6006: "cmd.exe" exited with code 1. [D:\pytorch\pytorch\torch\lib\build\ATen\src\ATen\ATen.vcxproj]
0 Warning(s)
1 Error(s)
Time Elapsed 00:00:05.32
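[Editor's note] The `nvcc fatal : 32 bit compilation is only supported for Microsoft Visual Studio 2013 and earlier` failure above means CMake configured the 32-bit host toolchain (note the `Hostx86\x86\cl.exe` and `/machine:X86` paths later in this log). A minimal sketch of one possible fix, assuming a default VS 2017 Community install path (not a verified remedy for this exact setup):

```shell
REM Sketch: initialize the 64-bit MSVC environment before re-running the
REM build scripts, so CMake picks Hostx64\x64\cl.exe and nvcc gets a
REM 64-bit host compiler. The vcvarsall.bat location below assumes a
REM default VS 2017 Community install.
call "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
```

Alternatively, running the scripts from the "x64 Native Tools Command Prompt for VS 2017" should have the same effect.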
-- Building for: Visual Studio 15 2017
-- The C compiler identification is MSVC 19.11.25547.0
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/bin/Hostx86/x86/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/bin/Hostx86/x86/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Configuring done
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:
ATEN_LIBRARIES
CMAKE_BUILD_TYPE
CMAKE_CXX_FLAGS
CUDA_NVCC_FLAGS
NO_CUDA
THCS_LIBRARIES
THCUNN_LIBRARIES
THCUNN_SO_VERSION
THC_LIBRARIES
THC_SO_VERSION
THNN_LIBRARIES
THNN_SO_VERSION
THS_LIBRARIES
TH_INCLUDE_PATH
TH_LIBRARIES
TH_LIB_PATH
TH_SO_VERSION
Torch_FOUND
cwrap_files
-- Build files have been written to: D:/pytorch/pytorch/torch/lib/build/nanopb
Microsoft (R) Build Engine version 15.4.8.50001 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.
Build started 12/5/2017 3:38:00 AM.
Project "D:\pytorch\pytorch\torch\lib\build\nanopb\INSTALL.vcxproj" on node 1 (default targets).
Project "D:\pytorch\pytorch\torch\lib\build\nanopb\INSTALL.vcxproj" (1) is building "D:\pytorch\pytorch\torch\lib\build\nanopb\ZERO_CHECK.vcxproj" (2) on node 1 (default targets).
PrepareForBuild:
Creating directory "Win32\Release\ZERO_CHECK\".
Creating directory "D:\pytorch\pytorch\torch\lib\build\nanopb\Release\".
Creating directory "Win32\Release\ZERO_CHECK\ZERO_CHECK.tlog\".
InitializeBuildStatus:
Creating "Win32\Release\ZERO_CHECK\ZERO_CHECK.tlog\unsuccessfulbuild" because "AlwaysCreate" was specified.
CustomBuild:
Checking Build System
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/nanopb/CMakeFiles/generate.stamp is up-to-date.
FinalizeBuildStatus:
Deleting file "Win32\Release\ZERO_CHECK\ZERO_CHECK.tlog\unsuccessfulbuild".
Touching "Win32\Release\ZERO_CHECK\ZERO_CHECK.tlog\ZERO_CHECK.lastbuildstate".
Done Building Project "D:\pytorch\pytorch\torch\lib\build\nanopb\ZERO_CHECK.vcxproj" (default targets).
Project "D:\pytorch\pytorch\torch\lib\build\nanopb\INSTALL.vcxproj" (1) is building "D:\pytorch\pytorch\torch\lib\build\nanopb\ALL_BUILD.vcxproj" (3) on node 1 (default targets).
Project "D:\pytorch\pytorch\torch\lib\build\nanopb\ALL_BUILD.vcxproj" (3) is building "D:\pytorch\pytorch\torch\lib\build\nanopb\protobuf-nanopb.vcxproj" (4) on node 1 (default targets).
PrepareForBuild:
Creating directory "protobuf-nanopb.dir\Release\".
Creating directory "protobuf-nanopb.dir\Release\protobuf-nanopb.tlog\".
InitializeBuildStatus:
Creating "protobuf-nanopb.dir\Release\protobuf-nanopb.tlog\unsuccessfulbuild" because "AlwaysCreate" was specified.
CustomBuild:
Building Custom Rule D:/pytorch/pytorch/torch/lib/nanopb/CMakeLists.txt
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/nanopb/CMakeFiles/generate.stamp is up-to-date.
ClCompile:
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.11.25503\bin\HostX86\x86\CL.exe /c /ID:/pytorch/pytorch/torch/lib/tmp_install/include /ID:/pytorch/pytorch/torch/lib/tmp_install/include/TH /ID:/pytorch/pytorch/torch/lib/tmp_install/include/THC /ID:/pytorch/pytorch/torch/lib/tmp_install/include/THS /I/include/THCS /I/include/THPP /I/include/THNN /I/include/THCUNN /Z7 /nologo /W1 /WX- /diagnostics:classic /O2 /Ob2 /Oy- /D TH_INDEX_BASE=0 /D _WIN32 /D NOMINMAX /D NDEBUG /D "CMAKE_INTDIR=\"Release\"" /D _MBCS /Gm- /EHa /MT /GS /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /Fo"protobuf-nanopb.dir\Release\\" /Fd"protobuf-nanopb.dir\Release\protobuf-nanopb.pdb" /Gd /TC /analyze- /errorReport:queue D:\pytorch\pytorch\torch\lib\nanopb\pb_common.c D:\pytorch\pytorch\torch\lib\nanopb\pb_encode.c D:\pytorch\pytorch\torch\lib\nanopb\pb_decode.c
pb_common.c
pb_encode.c
pb_decode.c
Generating Code...
Lib:
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.11.25503\bin\HostX86\x86\Lib.exe /OUT:"D:\pytorch\pytorch\torch\lib\build\nanopb\Release\protobuf-nanopb.lib" /NOLOGO /machine:X86 "protobuf-nanopb.dir\Release\pb_common.obj"
"protobuf-nanopb.dir\Release\pb_encode.obj"
"protobuf-nanopb.dir\Release\pb_decode.obj"
protobuf-nanopb.vcxproj -> D:\pytorch\pytorch\torch\lib\build\nanopb\Release\protobuf-nanopb.lib
FinalizeBuildStatus:
Deleting file "protobuf-nanopb.dir\Release\protobuf-nanopb.tlog\unsuccessfulbuild".
Touching "protobuf-nanopb.dir\Release\protobuf-nanopb.tlog\protobuf-nanopb.lastbuildstate".
Done Building Project "D:\pytorch\pytorch\torch\lib\build\nanopb\protobuf-nanopb.vcxproj" (default targets).
PrepareForBuild:
Creating directory "Win32\Release\ALL_BUILD\".
Creating directory "Win32\Release\ALL_BUILD\ALL_BUILD.tlog\".
InitializeBuildStatus:
Creating "Win32\Release\ALL_BUILD\ALL_BUILD.tlog\unsuccessfulbuild" because "AlwaysCreate" was specified.
CustomBuild:
Building Custom Rule D:/pytorch/pytorch/torch/lib/nanopb/CMakeLists.txt
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/nanopb/CMakeFiles/generate.stamp is up-to-date.
FinalizeBuildStatus:
Deleting file "Win32\Release\ALL_BUILD\ALL_BUILD.tlog\unsuccessfulbuild".
Touching "Win32\Release\ALL_BUILD\ALL_BUILD.tlog\ALL_BUILD.lastbuildstate".
Done Building Project "D:\pytorch\pytorch\torch\lib\build\nanopb\ALL_BUILD.vcxproj" (default targets).
PrepareForBuild:
Creating directory "Win32\Release\INSTALL\".
Creating directory "Win32\Release\INSTALL\INSTALL.tlog\".
InitializeBuildStatus:
Creating "Win32\Release\INSTALL\INSTALL.tlog\unsuccessfulbuild" because "AlwaysCreate" was specified.
PostBuildEvent:
setlocal
"C:\Program Files\CMake\bin\cmake.exe" -DBUILD_TYPE=Release -P cmake_install.cmake
if %errorlevel% neq 0 goto :cmEnd
:cmEnd
endlocal & call :cmErrorLevel %errorlevel% & goto :cmDone
:cmErrorLevel
exit /b %1
:cmDone
if %errorlevel% neq 0 goto :VCEnd
:VCEnd
-- Install configuration: "Release"
-- Installing: D:/pytorch/pytorch/torch/lib/tmp_install/lib/protobuf-nanopb.lib
-- Installing: D:/pytorch/pytorch/torch/lib/tmp_install/lib/cmake/nanopb/nanopb-targets.cmake
-- Installing: D:/pytorch/pytorch/torch/lib/tmp_install/lib/cmake/nanopb/nanopb-targets-release.cmake
-- Installing: D:/pytorch/pytorch/torch/lib/tmp_install/lib/cmake/nanopb/nanopb-config.cmake
-- Installing: D:/pytorch/pytorch/torch/lib/tmp_install/lib/cmake/nanopb/nanopb-config-version.cmake
-- Installing: D:/pytorch/pytorch/torch/lib/tmp_install/include/pb.h
-- Installing: D:/pytorch/pytorch/torch/lib/tmp_install/include/pb_common.h
-- Installing: D:/pytorch/pytorch/torch/lib/tmp_install/include/pb_encode.h
-- Installing: D:/pytorch/pytorch/torch/lib/tmp_install/include/pb_decode.h
FinalizeBuildStatus:
Deleting file "Win32\Release\INSTALL\INSTALL.tlog\unsuccessfulbuild".
Touching "Win32\Release\INSTALL\INSTALL.tlog\INSTALL.lastbuildstate".
Done Building Project "D:\pytorch\pytorch\torch\lib\build\nanopb\INSTALL.vcxproj" (default targets).
Build succeeded.
0 Warning(s)
0 Error(s)
Time Elapsed 00:00:01.14
-- Building for: Visual Studio 15 2017
-- The C compiler identification is MSVC 19.11.25547.0
-- The CXX compiler identification is MSVC 19.11.25547.0
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/bin/Hostx86/x86/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/bin/Hostx86/x86/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/bin/Hostx86/x86/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/bin/Hostx86/x86/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Configuring done
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:
ATEN_LIBRARIES
CMAKE_BUILD_TYPE
CUDA_NVCC_FLAGS
NO_CUDA
THCS_LIBRARIES
THCUNN_LIBRARIES
THCUNN_SO_VERSION
THC_LIBRARIES
THC_SO_VERSION
THNN_LIBRARIES
THNN_SO_VERSION
THS_LIBRARIES
TH_INCLUDE_PATH
TH_LIB_PATH
TH_SO_VERSION
Torch_FOUND
cwrap_files
nanopb_BUILD_GENERATOR
-- Build files have been written to: D:/pytorch/pytorch/torch/lib/build/libshm_windows
Microsoft (R) Build Engine version 15.4.8.50001 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.
Build started 12/5/2017 3:38:10 AM.
Project "D:\pytorch\pytorch\torch\lib\build\libshm_windows\INSTALL.vcxproj" on node 1 (default targets).
Project "D:\pytorch\pytorch\torch\lib\build\libshm_windows\INSTALL.vcxproj" (1) is building "D:\pytorch\pytorch\torch\lib\build\libshm_windows\ZERO_CHECK.vcxproj" (2) on node 1 (default targets).
PrepareForBuild:
Creating directory "Win32\Release\ZERO_CHECK\".
Creating directory "D:\pytorch\pytorch\torch\lib\build\libshm_windows\Release\".
Creating directory "Win32\Release\ZERO_CHECK\ZERO_CHECK.tlog\".
InitializeBuildStatus:
Creating "Win32\Release\ZERO_CHECK\ZERO_CHECK.tlog\unsuccessfulbuild" because "AlwaysCreate" was specified.
CustomBuild:
Checking Build System
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/libshm_windows/CMakeFiles/generate.stamp is up-to-date.
FinalizeBuildStatus:
Deleting file "Win32\Release\ZERO_CHECK\ZERO_CHECK.tlog\unsuccessfulbuild".
Touching "Win32\Release\ZERO_CHECK\ZERO_CHECK.tlog\ZERO_CHECK.lastbuildstate".
Done Building Project "D:\pytorch\pytorch\torch\lib\build\libshm_windows\ZERO_CHECK.vcxproj" (default targets).
Project "D:\pytorch\pytorch\torch\lib\build\libshm_windows\INSTALL.vcxproj" (1) is building "D:\pytorch\pytorch\torch\lib\build\libshm_windows\ALL_BUILD.vcxproj" (3) on node 1 (default targets).
Project "D:\pytorch\pytorch\torch\lib\build\libshm_windows\ALL_BUILD.vcxproj" (3) is building "D:\pytorch\pytorch\torch\lib\build\libshm_windows\shm.vcxproj" (4) on node 1 (default targets).
PrepareForBuild:
Creating directory "shm.dir\Release\".
Creating directory "shm.dir\Release\shm.tlog\".
InitializeBuildStatus:
Creating "shm.dir\Release\shm.tlog\unsuccessfulbuild" because "AlwaysCreate" was specified.
CustomBuild:
Building Custom Rule D:/pytorch/pytorch/torch/lib/libshm_windows/CMakeLists.txt
CMake does not need to re-run because D:/pytorch/pytorch/torch/lib/build/libshm_windows/CMakeFiles/generate.stamp is up-to-date.
ClCompile:
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.11.25503\bin\HostX86\x86\CL.exe /c /ID:/pytorch/pytorch/torch/lib/tmp_install/include /ID:/pytorch/pytorch/torch/lib/tmp_install/include/TH /ID:/pytorch/pytorch/torch/lib/tmp_install/include/THC /ID:/pytorch/pytorch/torch/lib/tmp_install/include/THS /I/include/THCS /I/include/THPP /I/include/THNN /I/include/THCUNN /ID:\pytorch\pytorch\torch\lib\libshm_windows /Z7 /nologo /W1 /WX- /diagnostics:classic /O2 /Ob2 /Oy- /D TH_INDEX_BASE=0 /D _WIN32 /D NOMINMAX /D NDEBUG /D _CRT_SECURE_NO_DEPRECATE=1 /D SHM_EXPORTS /D "CMAKE_INTDIR=\"Release\"" /D shm_EXPORTS /D _WINDLL /D _MBCS /Gm- /EHa /MD /GS /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /Fo"shm.dir\Release\\" /Fd"shm.dir\Release\vc141.pdb" /Gd /TP /analyze- /errorReport:queue D:\pytorch\pytorch\torch\lib\libshm_windows\core.cpp
core.cpp
D:\pytorch\pytorch\torch\lib\libshm_windows\core.cpp(5): fatal error C1083: Cannot open include file: 'TH/TH.h': No such file or directory [D:\pytorch\pytorch\torch\lib\build\libshm_windows\shm.vcxproj]
Done Building Project "D:\pytorch\pytorch\torch\lib\build\libshm_windows\shm.vcxproj" (default targets) -- FAILED.
Done Building Project "D:\pytorch\pytorch\torch\lib\build\libshm_windows\ALL_BUILD.vcxproj" (default targets) -- FAILED.
Done Building Project "D:\pytorch\pytorch\torch\lib\build\libshm_windows\INSTALL.vcxproj" (default targets) -- FAILED.
Build FAILED.
"D:\pytorch\pytorch\torch\lib\build\libshm_windows\INSTALL.vcxproj" (default target) (1) ->
"D:\pytorch\pytorch\torch\lib\build\libshm_windows\ALL_BUILD.vcxproj" (default target) (3) ->
"D:\pytorch\pytorch\torch\lib\build\libshm_windows\shm.vcxproj" (default target) (4) ->
(ClCompile target) ->
D:\pytorch\pytorch\torch\lib\libshm_windows\core.cpp(5): fatal error C1083: Cannot open include file: 'TH/TH.h': No such file or directory [D:\pytorch\pytorch\torch\lib\build\libshm_windows\shm.vcxproj]
0 Warning(s)
1 Error(s)
Time Elapsed 00:00:01.11
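[Editor's note] The C1083 `TH/TH.h` error above is a knock-on failure, not an independent bug: the TH headers are only copied into `torch\lib\tmp_install\include` by the ATen/TH step, which already failed earlier with the nvcc 32-bit error. A quick diagnostic sketch before retrying the libshm_windows step (the path is taken from this log; adjust for your checkout):

```shell
REM Sketch: verify that the ATen step actually installed the TH headers.
REM If this reports them missing, fix the ATen build failure first --
REM rebuilding libshm_windows alone cannot succeed without these headers.
if not exist "D:\pytorch\pytorch\torch\lib\tmp_install\include\TH\TH.h" (
    echo TH\TH.h is missing from tmp_install -- the ATen build must succeed first
)
```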
tmp_install\lib\protobuf-nanopb.lib
1 file(s) copied.
tmp_install\include\pb.h
tmp_install\include\pb_common.h
tmp_install\include\pb_decode.h
tmp_install\include\pb_encode.h
4 File(s) copied
..\..\aten\src\THNN\generic\THNN.h
1 File(s) copied
..\..\aten\src\THCUNN\generic\THCUNN.h
1 File(s) copied
running build
running build_py
-- Building version 0.4.0a0+84d8e81
creating build
creating build\lib.win-amd64-3.6
creating build\lib.win-amd64-3.6\torch
copying torch\distributions.py -> build\lib.win-amd64-3.6\torch
copying torch\functional.py -> build\lib.win-amd64-3.6\torch
copying torch\random.py -> build\lib.win-amd64-3.6\torch
copying torch\serialization.py -> build\lib.win-amd64-3.6\torch
copying torch\storage.py -> build\lib.win-amd64-3.6\torch
copying torch\tensor.py -> build\lib.win-amd64-3.6\torch
copying torch\version.py -> build\lib.win-amd64-3.6\torch
copying torch\_six.py -> build\lib.win-amd64-3.6\torch
copying torch\_storage_docs.py -> build\lib.win-amd64-3.6\torch
copying torch\_tensor_docs.py -> build\lib.win-amd64-3.6\torch
copying torch\_tensor_str.py -> build\lib.win-amd64-3.6\torch
copying torch\_torch_docs.py -> build\lib.win-amd64-3.6\torch
copying torch\_utils.py -> build\lib.win-amd64-3.6\torch
copying torch\__init__.py -> build\lib.win-amd64-3.6\torch
creating build\lib.win-amd64-3.6\torch\autograd
copying torch\autograd\function.py -> build\lib.win-amd64-3.6\torch\autograd
copying torch\autograd\gradcheck.py -> build\lib.win-amd64-3.6\torch\autograd
copying torch\autograd\profiler.py -> build\lib.win-amd64-3.6\torch\autograd
copying torch\autograd\variable.py -> build\lib.win-amd64-3.6\torch\autograd
copying torch\autograd\__init__.py -> build\lib.win-amd64-3.6\torch\autograd
creating build\lib.win-amd64-3.6\torch\backends
copying torch\backends\__init__.py -> build\lib.win-amd64-3.6\torch\backends
creating build\lib.win-amd64-3.6\torch\contrib
copying torch\contrib\_graph_vis.py -> build\lib.win-amd64-3.6\torch\contrib
copying torch\contrib\__init__.py -> build\lib.win-amd64-3.6\torch\contrib
creating build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\comm.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\error.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\nccl.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\nvtx.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\profiler.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\random.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\sparse.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\streams.py -> build\lib.win-amd64-3.6\torch\cuda
copying torch\cuda\__init__.py -> build\lib.win-amd64-3.6\torch\cuda
creating build\lib.win-amd64-3.6\torch\distributed
copying torch\distributed\remote_types.py -> build\lib.win-amd64-3.6\torch\distributed
copying torch\distributed\__init__.py -> build\lib.win-amd64-3.6\torch\distributed
creating build\lib.win-amd64-3.6\torch\for_onnx
copying torch\for_onnx\__init__.py -> build\lib.win-amd64-3.6\torch\for_onnx
creating build\lib.win-amd64-3.6\torch\jit
copying torch\jit\__init__.py -> build\lib.win-amd64-3.6\torch\jit
creating build\lib.win-amd64-3.6\torch\legacy
copying torch\legacy\__init__.py -> build\lib.win-amd64-3.6\torch\legacy
creating build\lib.win-amd64-3.6\torch\multiprocessing
copying torch\multiprocessing\pool.py -> build\lib.win-amd64-3.6\torch\multiprocessing
copying torch\multiprocessing\queue.py -> build\lib.win-amd64-3.6\torch\multiprocessing
copying torch\multiprocessing\reductions.py -> build\lib.win-amd64-3.6\torch\multiprocessing
copying torch\multiprocessing\__init__.py -> build\lib.win-amd64-3.6\torch\multiprocessing
creating build\lib.win-amd64-3.6\torch\nn
copying torch\nn\functional.py -> build\lib.win-amd64-3.6\torch\nn
copying torch\nn\init.py -> build\lib.win-amd64-3.6\torch\nn
copying torch\nn\parameter.py -> build\lib.win-amd64-3.6\torch\nn
copying torch\nn\__init__.py -> build\lib.win-amd64-3.6\torch\nn
creating build\lib.win-amd64-3.6\torch\onnx
copying torch\onnx\symbolic.py -> build\lib.win-amd64-3.6\torch\onnx
copying torch\onnx\__init__.py -> build\lib.win-amd64-3.6\torch\onnx
creating build\lib.win-amd64-3.6\torch\optim
copying torch\optim\adadelta.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\adagrad.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\adam.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\adamax.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\asgd.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\lbfgs.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\lr_scheduler.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\optimizer.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\rmsprop.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\rprop.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\sgd.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\sparse_adam.py -> build\lib.win-amd64-3.6\torch\optim
copying torch\optim\__init__.py -> build\lib.win-amd64-3.6\torch\optim
creating build\lib.win-amd64-3.6\torch\sparse
copying torch\sparse\__init__.py -> build\lib.win-amd64-3.6\torch\sparse
creating build\lib.win-amd64-3.6\torch\utils
copying torch\utils\dlpack.py -> build\lib.win-amd64-3.6\torch\utils
copying torch\utils\hooks.py -> build\lib.win-amd64-3.6\torch\utils
copying torch\utils\model_zoo.py -> build\lib.win-amd64-3.6\torch\utils
copying torch\utils\__init__.py -> build\lib.win-amd64-3.6\torch\utils
creating build\lib.win-amd64-3.6\torch\_thnn
copying torch\_thnn\utils.py -> build\lib.win-amd64-3.6\torch\_thnn
copying torch\_thnn\__init__.py -> build\lib.win-amd64-3.6\torch\_thnn
creating build\lib.win-amd64-3.6\torch\autograd\_functions
copying torch\autograd\_functions\basic_ops.py -> build\lib.win-amd64-3.6\torch\autograd\_functions
copying torch\autograd\_functions\tensor.py -> build\lib.win-amd64-3.6\torch\autograd\_functions
copying torch\autograd\_functions\utils.py -> build\lib.win-amd64-3.6\torch\autograd\_functions
copying torch\autograd\_functions\__init__.py -> build\lib.win-amd64-3.6\torch\autograd\_functions
creating build\lib.win-amd64-3.6\torch\backends\cudnn
copying torch\backends\cudnn\rnn.py -> build\lib.win-amd64-3.6\torch\backends\cudnn
copying torch\backends\cudnn\__init__.py -> build\lib.win-amd64-3.6\torch\backends\cudnn
creating build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Abs.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\AbsCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Add.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\AddConstant.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\BatchNormalization.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\BCECriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Bilinear.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CAddTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CDivTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Clamp.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ClassNLLCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ClassSimplexCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CMul.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CMulTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Concat.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ConcatTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Container.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Contiguous.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Copy.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Cosine.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CosineDistance.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CosineEmbeddingCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Criterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CriterionTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CrossEntropyCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\CSubTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\DepthConcat.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\DistKLDivCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\DotProduct.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Dropout.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ELU.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Euclidean.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Exp.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\FlattenTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\GradientReversal.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\HardShrink.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\HardTanh.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\HingeEmbeddingCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Identity.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Index.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\JoinTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\L1Cost.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\L1HingeEmbeddingCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\L1Penalty.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\LeakyReLU.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Linear.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Log.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\LogSigmoid.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\LogSoftMax.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\LookupTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MarginCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MarginRankingCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MaskedSelect.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Max.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Mean.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Min.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MixtureTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MM.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Module.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MSECriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Mul.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MulConstant.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MultiCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MultiLabelMarginCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MultiLabelSoftMarginCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MultiMarginCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\MV.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Narrow.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\NarrowTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Normalize.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Padding.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\PairwiseDistance.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Parallel.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ParallelCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ParallelTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\PartialLinear.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Power.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\PReLU.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ReLU.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\ReLU6.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Replicate.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Reshape.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\RReLU.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Select.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SelectTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Sequential.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Sigmoid.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SmoothL1Criterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SoftMarginCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SoftMax.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SoftMin.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SoftPlus.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SoftShrink.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SoftSign.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialAdaptiveMaxPooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialAveragePooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialBatchNormalization.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialClassNLLCriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialContrastiveNormalization.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialConvolution.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialConvolutionLocal.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialConvolutionMap.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialCrossMapLRN.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialDilatedConvolution.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialDivisiveNormalization.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialDropout.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialFractionalMaxPooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialFullConvolution.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialFullConvolutionMap.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialLPPooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialMaxPooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialMaxUnpooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialReflectionPadding.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialReplicationPadding.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialSoftMax.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialSubSampling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialSubtractiveNormalization.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialUpSamplingNearest.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SpatialZeroPadding.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\SplitTable.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Sqrt.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Square.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Squeeze.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Sum.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Tanh.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\TanhShrink.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\TemporalConvolution.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\TemporalMaxPooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\TemporalSubSampling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Threshold.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Transpose.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\Unsqueeze.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\utils.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\View.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricAveragePooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricBatchNormalization.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricConvolution.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricDropout.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricFullConvolution.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricMaxPooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricMaxUnpooling.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\VolumetricReplicationPadding.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\WeightedEuclidean.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\WeightedMSECriterion.py -> build\lib.win-amd64-3.6\torch\legacy\nn
copying torch\legacy\nn\__init__.py -> build\lib.win-amd64-3.6\torch\legacy\nn
creating build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\adadelta.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\adagrad.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\adam.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\adamax.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\asgd.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\cg.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\lbfgs.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\nag.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\rmsprop.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\rprop.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\sgd.py -> build\lib.win-amd64-3.6\torch\legacy\optim
copying torch\legacy\optim\__init__.py -> build\lib.win-amd64-3.6\torch\legacy\optim
creating build\lib.win-amd64-3.6\torch\nn\backends
copying torch\nn\backends\backend.py -> build\lib.win-amd64-3.6\torch\nn\backends
copying torch\nn\backends\thnn.py -> build\lib.win-amd64-3.6\torch\nn\backends
copying torch\nn\backends\__init__.py -> build\lib.win-amd64-3.6\torch\nn\backends
creating build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\activation.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\batchnorm.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\container.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\conv.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\distance.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\dropout.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\instancenorm.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\linear.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\loss.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\module.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\normalization.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\padding.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\pixelshuffle.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\pooling.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\rnn.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\sparse.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\upsampling.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\utils.py -> build\lib.win-amd64-3.6\torch\nn\modules
copying torch\nn\modules\__init__.py -> build\lib.win-amd64-3.6\torch\nn\modules
creating build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\data_parallel.py -> build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\distributed.py -> build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\parallel_apply.py -> build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\replicate.py -> build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\scatter_gather.py -> build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\_functions.py -> build\lib.win-amd64-3.6\torch\nn\parallel
copying torch\nn\parallel\__init__.py -> build\lib.win-amd64-3.6\torch\nn\parallel
creating build\lib.win-amd64-3.6\torch\nn\utils
copying torch\nn\utils\clip_grad.py -> build\lib.win-amd64-3.6\torch\nn\utils
copying torch\nn\utils\convert_parameters.py -> build\lib.win-amd64-3.6\torch\nn\utils
copying torch\nn\utils\rnn.py -> build\lib.win-amd64-3.6\torch\nn\utils
copying torch\nn\utils\weight_norm.py -> build\lib.win-amd64-3.6\torch\nn\utils
copying torch\nn\utils\__init__.py -> build\lib.win-amd64-3.6\torch\nn\utils
creating build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\dropout.py -> build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\linear.py -> build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\loss.py -> build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\padding.py -> build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\rnn.py -> build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\vision.py -> build\lib.win-amd64-3.6\torch\nn\_functions
copying torch\nn\_functions\__init__.py -> build\lib.win-amd64-3.6\torch\nn\_functions
creating build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\activation.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\auto.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\auto_double_backwards.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\auto_symbolic.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\batchnorm_double_backwards.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\loss.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\normalization.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\pooling.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\rnnFusedPointwise.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\sparse.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\upsampling.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
copying torch\nn\_functions\thnn\__init__.py -> build\lib.win-amd64-3.6\torch\nn\_functions\thnn
creating build\lib.win-amd64-3.6\torch\utils\backcompat
copying torch\utils\backcompat\__init__.py -> build\lib.win-amd64-3.6\torch\utils\backcompat
creating build\lib.win-amd64-3.6\torch\utils\data
copying torch\utils\data\dataloader.py -> build\lib.win-amd64-3.6\torch\utils\data
copying torch\utils\data\dataset.py -> build\lib.win-amd64-3.6\torch\utils\data
copying torch\utils\data\distributed.py -> build\lib.win-amd64-3.6\torch\utils\data
copying torch\utils\data\sampler.py -> build\lib.win-amd64-3.6\torch\utils\data
copying torch\utils\data\__init__.py -> build\lib.win-amd64-3.6\torch\utils\data
creating build\lib.win-amd64-3.6\torch\utils\ffi
copying torch\utils\ffi\__init__.py -> build\lib.win-amd64-3.6\torch\utils\ffi
creating build\lib.win-amd64-3.6\torch\utils\serialization
copying torch\utils\serialization\read_lua_file.py -> build\lib.win-amd64-3.6\torch\utils\serialization
copying torch\utils\serialization\__init__.py -> build\lib.win-amd64-3.6\torch\utils\serialization
creating build\lib.win-amd64-3.6\torch\utils\trainer
copying torch\utils\trainer\trainer.py -> build\lib.win-amd64-3.6\torch\utils\trainer
copying torch\utils\trainer\__init__.py -> build\lib.win-amd64-3.6\torch\utils\trainer
creating build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\accuracy.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\logger.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\loss.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\monitor.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\plugin.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\progress.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\time.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
copying torch\utils\trainer\plugins\__init__.py -> build\lib.win-amd64-3.6\torch\utils\trainer\plugins
creating build\lib.win-amd64-3.6\torch\lib
copying torch\lib\THCUNN.h -> build\lib.win-amd64-3.6\torch\lib
copying torch\lib\THNN.h -> build\lib.win-amd64-3.6\torch\lib
running build_ext
-- Building with NumPy bindings
-- Detected cuDNN at C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0\lib/x64, C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0\include
-- Detected CUDA at C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0
-- Not using NCCL
-- Building without distributed package
-- Not using NNPACK
error: [Errno 2] No such file or directory: 'torch/lib/tmp_install/share/ATen/Declarations.yaml'
D:\pytorch\pytorch>
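A note on the failure above: `Declarations.yaml` is generated by the native ATen/cmake build that runs before `build_ext`, so its absence usually means that earlier native build step failed or was skipped, and the Python extension build then aborts when it tries to read the file. As a rough sketch (the function name and error message here are illustrative, not part of PyTorch), one could verify the file exists before re-running `setup.py`:

```python
import os

# Hypothetical sanity check: setup.py's build_ext step reads
# torch/lib/tmp_install/share/ATen/Declarations.yaml, which the
# earlier ATen/cmake build is expected to generate. If that build
# failed, the file is missing and build_ext dies with Errno 2.
def check_aten_declarations(root="torch"):
    path = os.path.join(root, "lib", "tmp_install", "share",
                        "ATen", "Declarations.yaml")
    if not os.path.exists(path):
        raise FileNotFoundError(
            path + " is missing -- scroll up in the log for the first "
            "error from the native (cmake/ATen) build and fix that "
            "before re-running setup.py")
    return path
```

In practice the fix is to find the first error in the native build output (often a missing CUDA/cuDNN component or an MSVC toolchain mismatch on Windows), not to touch the Python step.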