desc: (none)
cmd: ./target/release/awc-mem-leak
time_unit: i
#-----------
snapshot=0
#-----------
time=0
mem_heap_B=0
mem_heap_extra_B=0
mem_stacks_B=0
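The snapshot above is in Valgrind massif format; the "time_unit: i" header means time is counted in instructions executed. A plausible way to reproduce this kind of output, taking the binary name from the "cmd:" line above (the flags are standard massif options, but the exact invocation used here is not shown):

$ valgrind --tool=massif --time-unit=i ./target/release/awc-mem-leak
$ ms_print massif.out.<pid>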
### Keybase proof
I hereby claim:
* I am ajsyp on github.
* I am ajsyp (https://keybase.io/ajsyp) on keybase.
* I have a public key ASB-CygiByvEokE-xOP4FwUVwEO1_EUnzVedfqRDfCk7pAo
To claim this, I am signing this object:
ajsyp / more-traceback.txt
Created March 9, 2017 15:02
More output for multi-GPU error on Kur
$ python parallel_bug.py
Using TensorFlow backend.
get_output_shape_for((None, 32))
self.name: dense_1
self.input_dim: 32
self.output_dim: 100
id(self): 140287380209560
get_output_shape_for((None, 32))
self.name: dense_1
self.input_dim: 32
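The repeated get_output_shape_for calls above, hitting the same dense_1 layer, suggest the same Dense instance is traversed more than once while the parallelized graph is built. A hypothetical sketch of instrumentation that would produce output in this shape under Keras 1.x (the actual patch is not included in the gist):

import keras.layers as L

_orig = L.Dense.get_output_shape_for

def _traced_get_output_shape_for(self, input_shape):
    # Log every shape-inference call so duplicate traversals are visible.
    print('get_output_shape_for(%s)' % (input_shape,))
    print('self.name: %s' % self.name)
    print('self.input_dim: %s' % self.input_dim)
    print('self.output_dim: %s' % self.output_dim)
    print('id(self): %s' % id(self))
    return _orig(self, input_shape)

L.Dense.get_output_shape_for = _traced_get_output_shape_for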
ajsyp / traceback.txt
Created March 9, 2017 14:59
Error with multi-GPU TensorFlow on Kur
$ python parallel_bug.py
Using TensorFlow backend.
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
input_1 (InputLayer)             (None, 32, 32)        0
____________________________________________________________________________________________________
timedistributed_1 (TimeDistribut (None, 32, 100)       3300        input_1[0][0]
____________________________________________________________________________________________________
lstm_1 (LSTM)                    (None, 32, 50)        30200       timedistributed_1[0][0]
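As a sanity check, the parameter counts in the summary are consistent with a TimeDistributed Dense layer of 100 units applied to 32 input features, followed by an LSTM with 50 units:

timedistributed_1: 32 * 100 weights + 100 biases     = 3300
lstm_1:            4 * (100 * 50 + 50 * 50 + 50)     = 30200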
ajsyp / parallel_bug.py
Created March 9, 2017 14:56
Recreates the bug with multi-GPU TensorFlow in Kur
import keras.models as M
import keras.layers as L
from kur.utils.parallelism import make_parallel
# Pretend to have some 32 x 32 images.
input = x = L.Input(shape=(32, 32))
# Shape: (samples, 32, 32)
x = L.TimeDistributed(
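	# (The gist preview truncates here; the rest below is a hedged
	# reconstruction from the layer shapes and parameter counts in
	# traceback.txt above, not the author's original code.)
	L.Dense(100)
)(x)
# Shape: (samples, 32, 100)
x = L.LSTM(50, return_sequences=True)(x)
# Shape: (samples, 32, 50)
model = M.Model(input, x)
# Assumed call; make_parallel's actual signature is not shown in the gist.
model = make_parallel(model, gpu_count=2)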