
MrLebovsky / outputs.txt
Created March 23, 2020 01:28
Torch build outputs
[2212/3130] Linking CXX shared library bin\torch_cpu.dll
LINK : The 32-bit linker (C:\PROGRA~2\MICROS~1\2017\COMMUN~1\VC\Tools\MSVC\1411~1.255\bin\HostX86\x64\link.exe) does not have enough heap space; the build is restarted with the 64-bit linker (C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.11.25503\bin\HostX64\x64\link.exe)
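(A note on the restart above: the mid-build switch to the 64-bit linker can usually be avoided by initializing the build shell for the x64-hosted toolchain before configuring. A sketch, assuming a default VS 2017 Community install; the path is taken from the log line above:)

```shell
:: Sketch: start the build from a shell initialized for the x64-hosted MSVC
:: toolchain, so ninja invokes HostX64\x64\link.exe directly instead of the
:: 32-bit linker that runs out of heap space on large links like torch_cpu.dll.
call "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
```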
[2213/3130] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCSleep.cu.obj
THCSleep.cu
THCSleep.cu
[2214/3130] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCBlas.cu.obj
THCBlas.cu
THCBlas.cu
[2215/3130] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/THC/torch_cuda_generated_THCReduceApplyUtils.cu.obj
THCReduceApplyUtils.cu
[2332/3127] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_MaxUnpooling.cu.obj
MaxUnpooling.cu
MaxUnpooling.cu
[2333/3127] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_Indexing.cu.obj
Indexing.cu
Indexing.cu
[2334/3127] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_Loss.cu.obj
FAILED: caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_Loss.cu.obj
cmd.exe /C "cd /D E:\PyTorch\pytorch\build\caffe2\CMakeFiles\torch_cuda.dir\__\aten\src\ATen\native\cuda && "C:\Program Files\CMake\bin\cmake.exe" -E make_directory E:/PyTorch/pytorch/build/caffe2/CMakeFiles/torch_cuda.
dir/__/aten/src/ATen/native/cuda/. && "C:\Program Files\CMake\bin\cmake.exe" -D verbose:BOOL=OFF -D build_configuration:STRING=Release -D generated_file:STRING=E:/PyTorch/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/_
Ninja found, using it to speed up builds
MAGMA_HOME is set. MAGMA will be included in build.
The flags after configuration:
NO_CUDA=
CMAKE_GENERATOR=Ninja
CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0
DISTUTILS_USE_SDK=1
Do you wish to continue? (Y/N)
Type input: Y
**********************************************************************
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/linkage.h>
#include <linux/moduleloader.h>
#include <linux/fs.h>
#include <linux/dcache.h>
#include <linux/atomic.h>
#include <linux/slab.h>
#include <linux/scatterlist.h>

/* Minimal entry points added so the fragment builds as a loadable module.
 * The names demo_init/demo_exit are placeholders, not from the original. */
static int __init demo_init(void) { pr_info("demo: loaded\n"); return 0; }
static void __exit demo_exit(void) { pr_info("demo: unloaded\n"); }

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
MrLebovsky / min-char-rnn.py
Created November 23, 2018 14:16
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
import multiprocessing
from time import sleep
from threading import Lock, Thread
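(The "minimal character-level Vanilla RNN" the docstring describes boils down to one recurrence: h_t = tanh(Wxh·x_t + Whh·h_{t-1} + bh), y_t = Why·h_t + by, softmaxed over the vocabulary. A hedged sketch of that single forward step; the weight names follow Karpathy's script, but the sizes and random values here are stand-ins:)

```python
import numpy as np

# Vanilla RNN forward step, as in min-char-rnn.py:
#   h_t = tanh(Wxh @ x_t + Whh @ h_{t-1} + bh);  y_t = Why @ h_t + by
# vocab_size/hidden_size are illustrative; the script learns these weights.
vocab_size, hidden_size = 4, 8
rng = np.random.default_rng(0)
Wxh = rng.standard_normal((hidden_size, vocab_size)) * 0.01  # input -> hidden
Whh = rng.standard_normal((hidden_size, hidden_size)) * 0.01 # hidden -> hidden
Why = rng.standard_normal((vocab_size, hidden_size)) * 0.01  # hidden -> output
bh = np.zeros((hidden_size, 1))
by = np.zeros((vocab_size, 1))

def rnn_step(x, h_prev):
    """One forward step: one-hot input x, previous hidden state h_prev."""
    h = np.tanh(Wxh @ x + Whh @ h_prev + bh)
    y = Why @ h + by                    # unnormalized log-probabilities
    p = np.exp(y) / np.sum(np.exp(y))   # softmax over next characters
    return h, p

x = np.zeros((vocab_size, 1)); x[2] = 1.0   # one-hot encode character index 2
h, p = rnn_step(x, np.zeros((hidden_size, 1)))
```

At generation time the script samples the next character from `p`, feeds it back in as the next one-hot `x`, and carries `h` forward.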