
@kingtaurus
Created November 30, 2016 03:39
MNIST Error (autoencoder)
In [10]: %run CNN_autoencoder.py
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cublas64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:119] Couldn't open CUDA library cudnn64_5.dll
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:3459] Unable to load cuDNN DSO
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cufft64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library nvcuda.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library curand64_80.dll locally
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 965M
major: 5 minor: 2 memoryClockRate (GHz) 0.9495
pciBusID 0000:01:00.0
Total memory: 2.00GiB
Free memory: 1.86GiB
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:906] DMA: 0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:916] 0: Y
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 965M, pci bus id: 0000:01:00.0)
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:586] Could not identify NUMA node of /job:localhost/replica:0/task:0/gpu:0, defaulting to 0. Your kernel may not have been built with NUMA support.
WARNING:tensorflow:From C:\Users\Gregoty\Programming\cs231n\repo\project\tensorflow\autoencoder\CNN_autoencoder.py:188 in <module>.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
number of test = 10000
number of train = 55000
number_of validation = 5000
Done splitting up test data set;
Starting training loop.
Epoch: 0
Shuffling the training data;
F c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:221] Check failed: s.ok() could not find cudnnCreate in cudnn DSO; dlerror: cudnnCreate not found
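
The fatal check in cuda_dnn.cc means TensorFlow loaded cuBLAS, cuFFT and cuRAND but could not load cudnn64_5.dll, so the first cuDNN call aborts the run. Separately, the deprecation warning above points at the initializer call inside CNN_autoencoder.py. The script itself is not part of this gist, so here is only a minimal sketch of the replacement on the r0.12-era API, using a made-up toy graph:

import tensorflow as tf

# Hypothetical toy graph standing in for the autoencoder; only the
# initializer call is the point of this sketch.
x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
w = tf.Variable(tf.truncated_normal([784, 784], stddev=0.1), name="w")
recon = tf.matmul(x, w)

# tf.initialize_all_variables() is deprecated after 2017-03-02;
# tf.global_variables_initializer() is the drop-in replacement.
init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init_op)

The rest of the training loop should be unaffected; only the initializer call needs to be swapped.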

qunash commented Dec 15, 2016

Getting the same error. Have you resolved this problem?

EDIT: Solved this problem by downloading CUDNN from here
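
A quick way to confirm that the fix actually took is to force a small convolution onto the GPU, since conv2d on /gpu:0 goes through cuDNN. A rough sketch on the same r0.12-era API as the log (shapes and names here are made up for illustration):

import tensorflow as tf

# A conv2d placed explicitly on the GPU exercises cuDNN, so this fails
# fast if cudnn64_5.dll still cannot be loaded.
with tf.device("/gpu:0"):
    images = tf.random_normal([1, 28, 28, 1])
    kernel = tf.Variable(tf.truncated_normal([3, 3, 1, 8], stddev=0.1))
    conv = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding="SAME")

config = tf.ConfigProto(allow_soft_placement=False, log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(conv).shape)  # expect (1, 28, 28, 8) if cuDNN loads

With allow_soft_placement off, a still-missing cuDNN should fail in the same way rather than silently falling back to the CPU.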

@sachag678

I downloaded cuDNN from NVIDIA but am still getting the same error. I added the bin folder to the PATH environment variable, but the issue persists.
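
One thing worth checking at that point is whether the DLL is actually resolvable from the Python process itself; a PATH edit only takes effect in consoles opened after the change. A small sketch, assuming the cudnn64_5.dll name from the log above:

import ctypes
import os

# Print the directories this Python process will search for DLLs.
for entry in os.environ["PATH"].split(os.pathsep):
    print(entry)

# Try to load the cuDNN DLL directly; an OSError here means the
# cuDNN bin folder is not visible to this process.
try:
    ctypes.WinDLL("cudnn64_5.dll")
    print("cudnn64_5.dll loaded successfully")
except OSError as exc:
    print("could not load cudnn64_5.dll:", exc)

If the DLL loads here but TensorFlow still fails, a version mismatch is the usual suspect: the log's cudnn64_5.dll and *_80.dll names indicate a cuDNN 5.x build for CUDA 8.0 is expected.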
