@ajsyp
Created March 9, 2017 15:02
More output for multi-GPU error on Kur
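For context: the get_output_shape_for / id(self) lines below appear to come from debug prints added to Dense.get_output_shape_for in a locally checked-out Keras (the traceback paths point at /home/ubuntu/projects/mgpu/keras). Here is a minimal sketch of what parallel_bug.py presumably builds, reconstructed from the model summary and traceback: the layer sizes are read off the summary, the make_parallel import path off the traceback, and everything else (variable names, the Keras 1.x functional style) is an assumption.

    # Hypothetical reconstruction of parallel_bug.py
    from keras.layers import Input, Dense, LSTM, TimeDistributed
    from keras.models import Model
    from kur.utils.parallelism import make_parallel

    inputs = Input(shape=(32, 32))             # (batch, 32 timesteps, 32 features)
    x = TimeDistributed(Dense(100))(inputs)    # -> (None, 32, 100), 3300 params
    x = LSTM(50, return_sequences=True)(x)     # -> (None, 32, 50), 30200 params
    outputs = TimeDistributed(Dense(20))(x)    # -> (None, 32, 20), 1020 params

    model = Model(input=inputs, output=outputs)
    model.summary()

    # Fails inside make_parallel even with a single device (line 31 in the traceback)
    model = make_parallel(model, 1)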
$ python parallel_bug.py
Using TensorFlow backend.
get_output_shape_for((None, 32))
self.name: dense_1
self.input_dim: 32
self.output_dim: 100
id(self): 140287380209560
get_output_shape_for((None, 32))
self.name: dense_1
self.input_dim: 32
self.output_dim: 100
id(self): 140287380209560
get_output_shape_for((None, 50))
self.name: dense_2
self.input_dim: 50
self.output_dim: 20
id(self): 140287379155432
get_output_shape_for((None, 50))
self.name: dense_2
self.input_dim: 50
self.output_dim: 20
id(self): 140287379155432
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
input_1 (InputLayer)             (None, 32, 32)        0
____________________________________________________________________________________________________
timedistributed_1 (TimeDistribut (None, 32, 100)       3300        input_1[0][0]
____________________________________________________________________________________________________
lstm_1 (LSTM)                    (None, 32, 50)        30200       timedistributed_1[0][0]
____________________________________________________________________________________________________
timedistributed_2 (TimeDistribut (None, 32, 20)        1020        lstm_1[0][0]
====================================================================================================
Total params: 34,520
Trainable params: 34,520
Non-trainable params: 0
____________________________________________________________________________________________________
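(Sanity check: the summary's parameter counts are self-consistent. TimeDistributed(Dense(100)) on 32 features is 32*100 + 100 = 3300; LSTM(50) on 100 features is 4*((100 + 50)*50 + 50) = 30200; TimeDistributed(Dense(20)) on 50 features is 50*20 + 20 = 1020; total 34,520. So the model itself builds and summarizes fine; the crash below only happens once make_parallel re-invokes it.)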
get_output_shape_for((None, 32))
self.name: dense_1
self.input_dim: 32
self.output_dim: 100
id(self): 140287380209560
get_output_shape_for((32,))
self.name: dense_1
self.input_dim: 32
self.output_dim: 100
id(self): 140287380209560
Traceback (most recent call last):
File "parallel_bug.py", line 31, in <module>
model = make_parallel(model, 1)
File "/home/ubuntu/projects/mgpu/kur/kur/utils/parallelism.py", line 66, in make_parallel
outputs = model(inputs)
File "/home/ubuntu/projects/mgpu/keras/keras/engine/topology.py", line 572, in __call__
self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
File "/home/ubuntu/projects/mgpu/keras/keras/engine/topology.py", line 635, in add_inbound_node
Node.create_node(self, inbound_layers, node_indices, tensor_indices)
File "/home/ubuntu/projects/mgpu/keras/keras/engine/topology.py", line 166, in create_node
output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
File "/home/ubuntu/projects/mgpu/keras/keras/engine/topology.py", line 2247, in call
output_tensors, output_masks, output_shapes = self.run_internal_graph(inputs, masks)
File "/home/ubuntu/projects/mgpu/keras/keras/engine/topology.py", line 2420, in run_internal_graph
shapes = to_list(layer.get_output_shape_for(computed_tensors[0]._keras_shape))
File "/home/ubuntu/projects/mgpu/keras/keras/layers/wrappers.py", line 100, in get_output_shape_for
child_output_shape = self.layer.get_output_shape_for(child_input_shape)
File "/home/ubuntu/projects/mgpu/keras/keras/layers/core.py", line 825, in get_output_shape_for
assert input_shape and len(input_shape) >= 2
AssertionError
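Reading the log: during the original build, Dense.get_output_shape_for sees rank-2 shapes ((None, 32) and (None, 50)) and everything checks out. The crash comes when make_parallel calls model(inputs) on its per-device tensors: dense_1 is suddenly handed (32,), a rank-1 shape with no batch dimension, and Dense's assert input_shape and len(input_shape) >= 2 fires from inside TimeDistributed.get_output_shape_for. Since TimeDistributed computes child_input_shape = (input_shape[0],) + input_shape[2:] just above wrappers.py line 100, getting (32,) out means the wrapper itself received (32, 32): the sliced tensor's _keras_shape has lost its leading None batch axis, so the wrapper keeps the 32 it thinks is the batch and drops the 32 it thinks is the timestep axis, leaving Dense with no feature dimension at all.

A hedged sketch of the usual workaround (an assumption, not necessarily what Kur ended up doing): perform the per-device batch slicing inside a Lambda layer, so the sliced tensors go through Keras's shape inference and keep a (None, ...) _keras_shape.

    import tensorflow as tf
    from keras.layers import Lambda, merge
    from keras.models import Model

    def slice_batch(x, n_gpus, part):
        # Take the part-th 1/n_gpus slice along the batch axis.
        # (Assumes the batch size is divisible by n_gpus.)
        size = tf.shape(x)[0] // n_gpus
        return x[part * size:(part + 1) * size]

    def make_parallel_sketch(model, n_gpus):
        # Single-output models only; a sketch, not Kur's implementation.
        towers = []
        for g in range(n_gpus):
            with tf.device('/gpu:%d' % g):
                sliced = [Lambda(slice_batch,
                                 output_shape=lambda s: s,  # batch dim stays None
                                 arguments={'n_gpus': n_gpus, 'part': g})(x)
                          for x in model.inputs]
                towers.append(model(sliced))
        with tf.device('/cpu:0'):
            out = towers[0] if n_gpus == 1 else merge(
                towers, mode='concat', concat_axis=0)
        return Model(input=model.inputs, output=out)

The Lambda indirection is the point of the sketch: slicing the Keras tensor directly (tf.split, plain __getitem__, and so on) yields tensors whose _keras_shape is missing or wrong, which would be consistent with the (32, 32) shape inferred above.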