from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
from datetime import datetime
from optparse import OptionParser
import os

parser = OptionParser()
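The script above sets up a timing benchmark for an elementwise exp over a large float32 vector. A minimal sketch of the same timing pattern in plain numpy (vector size, seed, and iteration count are illustrative, not the script's actual values):

```python
import time
import numpy

# Time a fixed number of iterations of an elementwise exp, as the
# Theano benchmark above does. Sizes here are hypothetical.
iters = 1000
x = numpy.random.RandomState(22).rand(400 * 30).astype('float32')

t0 = time.time()
for _ in range(iters):
    r = numpy.exp(x)
print('Looping %d times took %f seconds' % (iters, time.time() - t0))
```

The Theano version replaces the loop body with a compiled `function` whose graph can be placed on the GPU; the surrounding timing scaffold is the same.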
[support code lies above, omitted here]
//////////////////////
//// Functions
//////////////////////
static PyObject * instantiate(PyObject * self, PyObject *argtuple) {
  assert(PyTuple_Check(argtuple));
  if (3 != PyTuple_Size(argtuple)){
    PyErr_Format(PyExc_TypeError, "Wrong number of arguments, expected 3, got %i", (int)PyTuple_Size(argtuple));
    return NULL;
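The generated C `instantiate` function above guards the call by rejecting any argument tuple whose size is not exactly 3. A hypothetical Python sketch of the same guard, just to make the check's behavior concrete (the name and message mirror the C code; this is not part of the generated module):

```python
def instantiate(argtuple):
    # Mirror the generated C guard: the call must carry exactly 3 arguments.
    if not isinstance(argtuple, tuple):
        raise TypeError("expected a tuple")
    if len(argtuple) != 3:
        raise TypeError(
            "Wrong number of arguments, expected 3, got %i" % len(argtuple))
    return argtuple
```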
Run on 2013-03-25 12:46:29.868096
HostFromGpu [@A] <TensorType(float32, vector)> '' 1
 |GpuElemwise{exp,no_inplace} [@B] <CudaNdarrayType(float32, vector)> '' 0
   |<CudaNdarrayType(float32, vector)> [@C] <CudaNdarrayType(float32, vector)>
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 4.84187316895 seconds
Result is [ 1.23178029  1.61879349  1.52278066 ...,  1.1295259   2.35500026
  2.58820248]
Used the gpu
Run on 2013-03-25 12:50:32.578073
HostFromGpu [@A] <TensorType(float32, vector)> '' 1
 |GpuElemwise{exp,no_inplace} [@B] <CudaNdarrayType(float32, vector)> '' 0
   |<CudaNdarrayType(float32, vector)> [@C] <CudaNdarrayType(float32, vector)>
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 4.55282902718 seconds
Result is [ 1.23178029  1.61879349  1.52278066 ...,  1.1295259   2.35500026
  2.58820248]
Used the gpu
Run on 2013-03-25 12:46:57.465917
GpuElemwise{exp,no_inplace} [@A] <CudaNdarrayType(float32, vector)> '' 0
 |<CudaNdarrayType(float32, vector)> [@B] <CudaNdarrayType(float32, vector)>
Looping 1000 times took 1.00501489639 seconds
Result is <CudaNdarray object at 0x61c8df0>
Numpy result is [ 1.23178029  1.61879349  1.52278066 ...,  1.1295259   2.35500026
  2.58820248]
Used the gpu
Run on 2013-03-25 12:50:38.972060
GpuElemwise{exp,no_inplace} [@A] <CudaNdarrayType(float32, vector)> '' 0
 |<CudaNdarrayType(float32, vector)> [@B] <CudaNdarrayType(float32, vector)>
Looping 1000 times took 0.970016956329 seconds
Result is <CudaNdarray object at 0x55b8bf0>
Numpy result is [ 1.23178029  1.61879349  1.52278066 ...,  1.1295259   2.35500026
  2.58820248]
Used the gpu
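The logs make the cost of the `HostFromGpu` transfer visible: the runs whose graph ends in `HostFromGpu` take roughly 4.5-4.8 seconds, while the runs whose result stays on the GPU as a `CudaNdarray` take about 1 second for the same loop. A quick sanity check of the speedup from the logged timings:

```python
# Timings copied from the runs above (first slow run vs first fast run).
with_transfer = 4.84187316895     # graph ends in HostFromGpu
without_transfer = 1.00501489639  # result left on the GPU

speedup = with_transfer / without_transfer
print(round(speedup, 2))  # roughly 4.8x
```

So for this tiny elementwise graph, most of the measured time is the device-to-host copy, not the computation itself.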
In [1]: import theano.tensor as T
Using gpu device 0: Tesla M2070
In [2]: W = T.dmatrix('W')
In [3]: V = T.dmatrix('V')
In [4]: x = T.dvector('x')
In [5]: y = T.dot(x,W)
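The session above builds a symbolic graph for a vector-matrix product `y = T.dot(x, W)`. The numeric operation it stands for is ordinary `numpy.dot`; a small concrete sketch with hypothetical shapes:

```python
import numpy as np

# Numeric counterpart of the symbolic y = T.dot(x, W) above.
# Shapes are illustrative: x is a length-2 vector, W is 2x3.
W = np.arange(6, dtype='float64').reshape(2, 3)  # [[0,1,2],[3,4,5]]
x = np.array([1.0, 2.0])
y = x.dot(W)  # shape (3,): [0*1+3*2, 1*1+4*2, 2*1+5*2]
print(y)
```

In Theano the `dmatrix`/`dvector` declarations only fix dtype and rank; the actual shapes are supplied when the compiled function is called.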
## my link line: ldflags = -L/scinet/gpc/intel/ics/composer_xe_2011_sp1.9.293/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_scalapack_lp64 -lpthread -lm
In [1]: import theano.tensor as T
Using gpu device 0: Tesla M2070
In [2]: W = T.dmatrix("W")
In [3]: V = T.dmatrix("V")
In [4]: x = T.dvector('x')
Run on 2013-05-21 18:38:36.893592
Pre-training layer 0, epoch 0, cost 469.74511106
Pre-training layer 0, epoch 1, cost 444.779148771
Pre-training layer 0, epoch 2, cost 439.117536989
Pre-training layer 0, epoch 3, cost 435.99338236
Pre-training layer 0, epoch 4, cost 433.863444521
Pre-training layer 1, epoch 0, cost 388.678039978
Pre-training layer 1, epoch 1, cost 319.358514327
Pre-training layer 1, epoch 2, cost 302.30478766
Pre-training layer 1, epoch 3, cost 293.561204112
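A healthy layer-wise pre-training run should show the cost falling within each layer from epoch to epoch, as it does in the log above. A small sketch that checks this property on the logged values:

```python
# Costs copied from the pre-training log above.
layer0_costs = [469.74511106, 444.779148771, 439.117536989,
                435.99338236, 433.863444521]
layer1_costs = [388.678039978, 319.358514327, 302.30478766, 293.561204112]

def strictly_decreasing(costs):
    # Each epoch should lower the reconstruction cost within a layer.
    return all(a > b for a, b in zip(costs, costs[1:]))

print(strictly_decreasing(layer0_costs), strictly_decreasing(layer1_costs))
```

Note that the cost resets between layers (layer 1 starts near 389 after layer 0 ends near 434): each layer is trained on its own reconstruction objective, so costs are only comparable within a layer.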