
@benanne
benanne / gist:ae2a7adaab133c61a059
Created January 28, 2015 13:28
Inception module in Lasagne (without 3x3s1 pooling)
import lasagne as nn

Conv2DLayer = nn.layers.Conv2DDNNLayer

def inception_module(l_in, num_1x1, reduce_3x3, num_3x3, reduce_5x5, num_5x5, gain=1.0, bias=0.1):
    """
    inception module (without the 3x3s1 pooling and projection because that's difficult in Theano right now)
    """
    shape = l_in.get_output_shape()
    out_layers = []
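The preview above cuts off before the branches are built. At shape level, the defining step of an inception module is running parallel paths and concatenating their outputs along the channel axis; a NumPy sketch of just that step (all sizes are hypothetical, not taken from the gist):

```python
import numpy as np

def inception_concat(branches):
    """Concatenate parallel branch outputs of shape (N, C_i, H, W)
    along the channel axis, as an inception module does."""
    return np.concatenate(branches, axis=1)

# hypothetical branch widths for a batch of 2 with 8x8 feature maps
n, h, w = 2, 8, 8
num_1x1, num_3x3, num_5x5 = 64, 128, 32
branches = [np.zeros((n, c, h, w)) for c in (num_1x1, num_3x3, num_5x5)]
out = inception_concat(branches)  # 64 + 128 + 32 = 224 output channels
```

The branches must agree on batch size and spatial dimensions for the concatenation to be valid, which is why the real module pads its 3x3 and 5x5 convolutions to preserve H and W.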
benanne / gist:2678b37f5befe2773f3b
Created January 10, 2015 10:42
test_dnn.py output (R1)
sedielem@koe:~/git/Theano/theano/sandbox/cuda/tests$ THEANO_FLAGS=warn_float64=ignore nosetests test_dnn.py
Using gpu device 1: Tesla K40c
..........
----------------------------------------------------------------------
Ran 10 tests in 218.306s
OK
benanne / gist:0b87a97d39dd552ed30a
Last active August 29, 2015 14:13
test_dnn.py output
sedielem@koe:~/git/Theano/theano/sandbox/cuda/tests$ nosetests test_dnn.py
Using gpu device 1: Tesla K40c
EEEEEESS.S
======================================================================
ERROR: test_conv (theano.sandbox.cuda.tests.test_dnn.TestDnnInferShapes)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/sedielem/git/Theano/theano/sandbox/cuda/tests/test_dnn.py", line 259, in test_conv
    dnn.GpuDnnConv
  File "/home/sedielem/git/Theano/theano/tests/unittest_tools.py", line 237, in _compile_and_check
benanne / gist:348c77b112c79f0ce864
Created January 4, 2015 17:35
Theano flipping wtf
In [1]: import theano
Using gpu device 0: GeForce GT 540M
In [2]: import theano.tensor as T
In [3]: y = T.tensor4('y')
In [4]: f = theano.function([y], y)
In [5]: theano.printing.debugprint(y)
benanne / gist:44e05db6537d0b6b9ba2
Created December 4, 2014 15:18
galaxy challenge constraints layer
class GalaxyDivNormLayer(nntools.layers.Layer):
    """
    rectification + divisive normalization
    """
    def __init__(self, input_layer):
        super(GalaxyDivNormLayer, self).__init__(input_layer)
        self.question_slices = [slice(0, 3), slice(3, 5), slice(5, 7), slice(7, 9), slice(9, 13), slice(13, 15),
                                slice(15, 18), slice(18, 25), slice(25, 28), slice(28, 31), slice(31, 37)]
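The preview stops before the forward pass, but the docstring ("rectification + divisive normalization") suggests each question's answer slice is rectified and then divided by its per-question sum. A NumPy sketch of that idea (the `div_norm` name and the `eps` guard are assumptions, not from the gist):

```python
import numpy as np

# per-question answer slices, as listed in the gist
QUESTION_SLICES = [slice(0, 3), slice(3, 5), slice(5, 7), slice(7, 9),
                   slice(9, 13), slice(13, 15), slice(15, 18), slice(18, 25),
                   slice(25, 28), slice(28, 31), slice(31, 37)]

def div_norm(x, eps=1e-12):
    """Rectify, then normalize each question's answers to sum to 1."""
    out = np.maximum(x, 0.0)  # rectification
    for s in QUESTION_SLICES:
        # divisive normalization within each question's slice
        out[:, s] /= out[:, s].sum(axis=1, keepdims=True) + eps
    return out
```

After this, each slice of the 37-dimensional output behaves like a probability distribution over that question's answers, which matches the Galaxy Zoo challenge's decision-tree output format.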
benanne / gist:02c1dbafe966d2736cf4
Created September 26, 2014 20:58
Running a (slow) generator in a separate process
import multiprocessing as mp

def buffered_gen_mp(source_gen, buffer_size=2):
    """
    Generator that runs a slow source generator in a separate process.

    buffer_size: the maximal number of items to pre-generate (length of the buffer)
    """
    if buffer_size < 2:
        raise RuntimeError("Minimal buffer size is 2!")
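The preview ends at the buffer-size check. One plausible completion of the idea, sketched with a `multiprocessing.Queue` and a `None` sentinel (the full gist may differ; this assumes a fork-style start method so the child process inherits `source_gen`, and picklable items):

```python
import multiprocessing as mp

def buffered_gen_mp(source_gen, buffer_size=2):
    """Run a slow source generator in a separate process, keeping up to
    buffer_size items pre-generated so the consumer rarely waits."""
    if buffer_size < 2:
        raise RuntimeError("Minimal buffer size is 2!")
    # one item is always 'in flight' between processes, so the queue
    # itself only needs to hold buffer_size - 1 items
    buffer = mp.Queue(maxsize=buffer_size - 1)

    def _produce():
        for item in source_gen:
            buffer.put(item, block=True)  # blocks when the buffer is full
        buffer.put(None)  # sentinel: source exhausted

    producer = mp.Process(target=_produce)
    producer.daemon = True  # don't keep the interpreter alive on exit
    producer.start()

    for item in iter(buffer.get, None):
        yield item
```

The bounded queue is what makes this a buffer rather than an unbounded prefetch: the producer stalls once `buffer_size` items are pending, capping memory use while still hiding the source's latency.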
benanne / gist:34176e4d1abd56933bbf
Created September 24, 2014 13:59
Theano error CUDA 6.5 + GeForce GTX 480
In [1]: import theano
Using gpu device 0: GeForce GTX 480
In [2]: import theano.tensor as T
In [3]: x, y = T.matrices('x', 'y')
In [4]: prod = T.dot(x, y)
In [5]: f = theano.function([x,y], prod)
benanne / gist:2f3b90a8eb2649082541
Created August 27, 2014 15:15
cublasSgemm bug, CUDA 6.5
* GeForce GTX 780Ti "Superclocked"
* drivers 340.24
* CUDA 6.5
sander@sander-precision:~/tmp/schluter/Theano$ for x in full valid subsample grads; do cuda-memcheck nosetests theano/sandbox/cuda/tests/test_conv_cuda_ndarray.py:test_gemm_$x; done
========= CUDA-MEMCHECK
Using gpu device 0: GeForce GTX 780 Ti
.
----------------------------------------------------------------------
benanne / gist:4128e5998122295f992b
Created August 27, 2014 15:13
cublasSgemm bug, CUDA 6.0
* GeForce GTX 780Ti "Superclocked"
* drivers 340.24
* CUDA 6.0
sander@sander-precision:~/tmp/schluter/Theano$ for x in full valid subsample grads; do cuda-memcheck nosetests theano/sandbox/cuda/tests/test_conv_cuda_ndarray.py:test_gemm_$x; done
========= CUDA-MEMCHECK
Using gpu device 0: GeForce GTX 780 Ti
========= Invalid __global__ read of size 4
========= at 0x000000e0 in sgemm_sm_heavy_nt_ldg
benanne / convolutional_mlp_fft.py
Created June 6, 2014 14:40
convnet from the deep learning tutorials with conv2d_fft
"""This tutorial introduces the LeNet5 neural network architecture
using Theano. LeNet5 is a convolutional neural network, good for
classifying images. This tutorial shows how to build the architecture,
and comes with all the hyper-parameters you need to reproduce the
paper's MNIST results.
This implementation simplifies the model in the following ways:
- LeNetConvPool doesn't implement location-specific gain and bias parameters
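For reference, a naive NumPy sketch of the "valid"-mode convolution that `conv2d_fft` accelerates (the `conv2d_valid` helper is illustrative, not the tutorial's code):

```python
import numpy as np

def conv2d_valid(img, kern):
    """Naive 'valid'-mode 2D convolution with kernel flipping,
    the operation Theano's conv2d computes for a single channel."""
    kh, kw = kern.shape
    ih, iw = img.shape
    k = kern[::-1, ::-1]  # true convolution flips the kernel
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # dot each image patch with the flipped kernel
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out
```

`conv2d_fft` produces the same result by pointwise multiplication in the frequency domain, which is why it can be swapped into the tutorial's LeNetConvPool layers without changing the model.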