Lee Zamparo (lzamparo)

lzamparo / gist:5225270
Created March 22, 2013 22:31
Theano GPU test script on SciNet; computes exp.
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
from datetime import datetime
from optparse import OptionParser
import os
parser = OptionParser()
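
The preview above stops right after the OptionParser setup. The body of the script presumably follows the canonical GPU test from the Theano documentation; a minimal sketch of that test, in Python 2 to match the era (the vector length and iteration count are the documentation's defaults, not necessarily the gist's):

from theano import function, config, shared
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # vector length used in the Theano docs' example
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))

t0 = time.time()
for i in xrange(iters):
    r = f()
t1 = time.time()
print 'Looping %d times took %s seconds' % (iters, t1 - t0)
print 'Result is', r
# if any op in the compiled graph is a host-side Elemwise, the GPU was not used
if numpy.any([isinstance(node.op, T.Elemwise) for node in f.maker.fgraph.toposort()]):
    print 'Used the cpu'
else:
    print 'Used the gpu'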
lzamparo / gpu_from_host.py
Created March 22, 2013 22:35
Theano test script, but computing exp on the GPU using gpu_from_host to wrap T.exp()
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
from datetime import datetime
from optparse import OptionParser
import os
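
This second preview is also truncated at the imports. Per the description, the only change from the previous script is wrapping T.exp in gpu_from_host so the result stays on the device as a CudaNdarray instead of being copied back to the host on every call. A hedged sketch of that variant, again patterned on the Theano docs:

from theano import function, config, shared, sandbox
import theano.sandbox.cuda.basic_ops
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
# gpu_from_host tells Theano to leave the result on the GPU
f = function([], sandbox.cuda.basic_ops.gpu_from_host(T.exp(x)))

t0 = time.time()
for i in xrange(iters):
    r = f()
t1 = time.time()
print 'Looping %d times took %s seconds' % (iters, t1 - t0)
print 'Result is', r                       # a CudaNdarray handle
print 'Numpy result is', numpy.asarray(r)  # explicit copy back to the host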
lzamparo / crash_output.txt
Created March 22, 2013 22:46
Error output for my first Theano exp test script, when submitted to a compute node on SciNet.
[support code lies above, omitted here]
608 //////////////////////
609 //// Functions
610 //////////////////////
611 static PyObject * instantiate(PyObject * self, PyObject *argtuple) {
612 assert(PyTuple_Check(argtuple));
613 if (3 != PyTuple_Size(argtuple)){
614 PyErr_Format(PyExc_TypeError, "Wrong number of arguments, expected 3, got %i", (int)PyTuple_Size(argtuple));
615 return NULL;
lzamparo / exp_test_interactive_debug.txt
Last active December 15, 2015 09:39
Output of the theano_exp_test.py script run on the head node interactively. Altered so that the graph is printed using printing.debugprint(f, file=output_file, print_type=True)
Run on 2013-03-25 12:46:29.868096
HostFromGpu [@A] <TensorType(float32, vector)> '' 1
|GpuElemwise{exp,no_inplace} [@B] <CudaNdarrayType(float32, vector)> '' 0
|<CudaNdarrayType(float32, vector)> [@C] <CudaNdarrayType(float32, vector)>
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 4.84187316895 seconds
Result is [ 1.23178029 1.61879349 1.52278066 ..., 1.1295259 2.35500026
2.58820248]
Used the gpu
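
For context, a dump like the one above is presumably produced with the debugprint call quoted in the description; a minimal sketch, assuming output_file is an already-open file handle and f is the compiled function from the test script:

import theano.printing
# print_type=True annotates each node with its type, as in the dump above
theano.printing.debugprint(f, file=output_file, print_type=True)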
lzamparo / exp_test_compute_job_debug.txt
Created March 25, 2013 19:26
Output of theano_exp_test_debug.py when submitted to the queue and run on a compute node.
Run on 2013-03-25 12:50:32.578073
HostFromGpu [@A] <TensorType(float32, vector)> '' 1
|GpuElemwise{exp,no_inplace} [@B] <CudaNdarrayType(float32, vector)> '' 0
|<CudaNdarrayType(float32, vector)> [@C] <CudaNdarrayType(float32, vector)>
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 4.55282902718 seconds
Result is [ 1.23178029 1.61879349 1.52278066 ..., 1.1295259 2.35500026
2.58820248]
Used the gpu
lzamparo / gpu_from_host_interactive.txt
Created March 25, 2013 19:38
gpu_from_host_test.py debugging output run interactively on the head node
Run on 2013-03-25 12:46:57.465917
GpuElemwise{exp,no_inplace} [@A] <CudaNdarrayType(float32, vector)> '' 0
|<CudaNdarrayType(float32, vector)> [@B] <CudaNdarrayType(float32, vector)>
Looping 1000 times took 1.00501489639 seconds
Result is <CudaNdarray object at 0x61c8df0>
Numpy result is [ 1.23178029 1.61879349 1.52278066 ..., 1.1295259 2.35500026
2.58820248]
Used the gpu
lzamparo / gpu_from_host_compute_job.txt
Created March 25, 2013 20:30
Output of gpu_from_host_test.py, with debugging info to print the function graph, when run as a job on a compute node
Run on 2013-03-25 12:50:38.972060
GpuElemwise{exp,no_inplace} [@A] <CudaNdarrayType(float32, vector)> '' 0
|<CudaNdarrayType(float32, vector)> [@B] <CudaNdarrayType(float32, vector)>
Looping 1000 times took 0.970016956329 seconds
Result is <CudaNdarray object at 0x55b8bf0>
Numpy result is [ 1.23178029 1.61879349 1.52278066 ..., 1.1295259 2.35500026
2.58820248]
Used the gpu
lzamparo / theano_segfault.txt
Created April 3, 2013 16:08
Rop example segfault in call to theano.function on arc01
In [1]: import theano.tensor as T
Using gpu device 0: Tesla M2070
In [2]: W = T.dmatrix('W')
In [3]: V = T.dmatrix('V')
In [4]: x = T.dvector('x')
In [5]: y = T.dot(x,W)
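
The session preview cuts off at In [5]. Per the description, it presumably continued with the standard Rop example from the Theano tutorial, with the segfault reported at the theano.function call:

In [6]: JV = T.Rop(y, W, V)

In [7]: import theano

In [8]: f = theano.function([W, V, x], JV)   # segfault reportedly occurs here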
lzamparo / theano_Rop_exception.py
Created April 9, 2013 17:10
Still trying to get the Rop problem fixed, which might just be masking a linking error. Now I cannot compile the expression graph f = function([W,V,x], JV); I get a cryptic import error (see the tail of this gist).
## my link line: ldflags = -L/scinet/gpc/intel/ics/composer_xe_2011_sp1.9.293/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_scalapack_lp64 -lpthread -lm
In [1]: import theano.tensor as T
Using gpu device 0: Tesla M2070
In [2]: W = T.dmatrix("W")
In [3]: V = T.dmatrix("V")
In [4]: x = T.dvector('x')
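
The commented link line above is Theano's BLAS configuration; it would normally live in the [blas] section of ~/.theanorc. A sketch of that config excerpt, with the ldflags value copied verbatim from the comment above:

[blas]
ldflags = -L/scinet/gpc/intel/ics/composer_xe_2011_sp1.9.293/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_scalapack_lp64 -lpthread -lm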
lzamparo / sda_pretraining_test.txt
Created May 22, 2013 18:44
SdA pickling test script output. The first set of pre-training statements shows a consistent reduction in the reconstruction error. The second set (after unpickling) shows that un-pickling introduces a problem that manifests in the upper layers.
Run on 2013-05-21 18:38:36.893592
Pre-training layer 0, epoch 0, cost 469.74511106
Pre-training layer 0, epoch 1, cost 444.779148771
Pre-training layer 0, epoch 2, cost 439.117536989
Pre-training layer 0, epoch 3, cost 435.99338236
Pre-training layer 0, epoch 4, cost 433.863444521
Pre-training layer 1, epoch 0, cost 388.678039978
Pre-training layer 1, epoch 1, cost 319.358514327
Pre-training layer 1, epoch 2, cost 302.30478766
Pre-training layer 1, epoch 3, cost 293.561204112
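
For context, a minimal sketch of the pickle round-trip such a test presumably performs between the two pre-training runs (cPickle matches the Python 2 stack of the era; the function names and path here are hypothetical, not from the gist):

import cPickle

def checkpoint(model, path):
    # serialize the partially pre-trained SdA model to disk
    with open(path, 'wb') as f:
        cPickle.dump(model, f, protocol=cPickle.HIGHEST_PROTOCOL)

def restore(path):
    # reload the model; per the gist's description, upper-layer
    # reconstruction costs degrade after this step
    with open(path, 'rb') as f:
        return cPickle.load(f)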