
Dzmitry Bahdanau rizar

  • ServiceNow Research
rizar / alphas.py
Created September 16, 2018 17:42
the code I used to plot alphas
def load_alpha_data(log_file):
    data = [[], [], []]
    with open(log_file) as log:
        for line in log:
            if line.startswith('data'):
                num = int(line[5])
                row = [float(x) for x in line[7:].split()]
                data[num].append(row)
    return data
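A minimal sketch of how this parser could be exercised, assuming a log line format like `data 0: 0.1 0.9` (the exact format is an assumption; only the index positions `line[5]` and `line[7:]` are taken from the code above):

```python
import os
import tempfile

def load_alpha_data(log_file):
    # same parser as above: assumed line format "data <n>: <floats...>"
    data = [[], [], []]
    with open(log_file) as log:
        for line in log:
            if line.startswith('data'):
                num = int(line[5])        # the digit after "data "
                row = [float(x) for x in line[7:].split()]
                data[num].append(row)
    return data

# write a tiny hypothetical log and parse it back
with tempfile.NamedTemporaryFile('w', suffix='.log', delete=False) as f:
    f.write("data 0: 0.1 0.9\n")
    f.write("data 1: 0.5 0.5\n")
    f.write("unrelated line\n")
    path = f.name
alphas = load_alpha_data(path)
os.unlink(path)
print(alphas[0])  # [[0.1, 0.9]]
```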
rizar / error.py
Created April 2, 2018 02:55
when I install pydmrs with pip
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-2-84c8d19d2bb4> in <module>()
----> 1 from shapeworld import Dataset
      2
      3 dataset = Dataset.create(dtype='agreement', name='multishape')
      4 generated = dataset.generate(n=128, mode='train', include_model=True)
      5

~/Dist/ShapeWorld/shapeworld/__init__.py in <module>()
----> 1 from shapeworld.dataset import Dataset
rizar / error.py
Created April 2, 2018 02:53
with github version
Traceback (most recent call last):
  File "/u/bahdanau/.conda/envs/py36clone/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2910, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-2-84c8d19d2bb4>", line 1, in <module>
    from shapeworld import Dataset
  File "/u/bahdanau/Dist/ShapeWorld/shapeworld/__init__.py", line 1, in <module>
    from shapeworld.dataset import Dataset
rizar / predict.txt
Last active March 30, 2018 17:17
Predict question words from FiLM coefficients
Question: "Is there a cyan shiny thing of the same size as the block?"
No
Is there a cyan shiny thing of the same size as the block? no
FiLM layer 0
(size, 6.938) (the, 5.774) (same, 3.317) (there, 3.068) (as, 3.067) (cyan, 2.567) (Is, 0.406)
FiLM layer 1
(the, 6.826) (same, 4.392) (as, 4.130) (size, 3.484) (cyan, 2.280) (there, 2.038) (block, 0.408) (that, 0.262) (a, 0.135) (any, 0.112)
FiLM layer 2
(the, 9.060) (size, 4.601) (as, 3.907) (same, 2.995) (cyan, 1.927) (there, 1.653) (of, 0.022)
FiLM layer 3
rizar / log.txt
Created May 7, 2017 18:01
Log of my code when run with float16
Using cuDNN version 5105 on context None
Preallocating 14677/16308 Mb (0.900000) on cuda
Mapped name None to device cuda: Tesla P100-SXM2-16GB (0000:0A:00.0)
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
/Tmp/lisa/os_v5/anaconda/lib/python2.7/site-packages/matplotlib/__init__.py:913: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter.
warnings.warn(self.msg_depr % (key, alt_key))
rizar / profiling_is_slow.py
Created November 30, 2016 15:12
Huge overhead of Tensorflow profiling
import tensorflow as tf
import numpy
import time
# The computation graph (in fact just an LSTM language model)
batch_size = 100
vocab_size = 50000
dim = 512
inputs = tf.placeholder(tf.int32, [batch_size, None],
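The overhead this gist demonstrates can be quantified with an ordinary warm-up-then-average timing harness. A generic sketch in plain Python (not the gist's original TensorFlow benchmark; the workload here is a stand-in):

```python
import time

def mean_runtime(fn, n_iter=50):
    # warm up once so one-time setup costs are excluded, then average
    fn()
    start = time.perf_counter()
    for _ in range(n_iter):
        fn()
    return (time.perf_counter() - start) / n_iter

# time a plain run; with profiling enabled one would compare
# mean_runtime(profiled_run) / mean_runtime(plain_run)
plain = mean_runtime(lambda: sum(x * x for x in range(10000)))
```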
rizar / powers_of_2.py
Created November 29, 2016 23:07
Accessing previous states with Tensorflow
# n_steps is assumed to be defined elsewhere
init_step = tf.constant(1)
init_states = tf.TensorArray(tf.float32, size=n_steps, clear_after_read=False)
init_states = init_states.write(0, tf.ones((128, 128)))

def should_continue(step, states):
    return step < n_steps

def iteration(step, states):
    previous_states = states.gather(tf.range(step))
    return step + 1, states.write(step, tf.reduce_sum(previous_states, 0) + 1)

_, states = tf.while_loop(should_continue, iteration, [init_step, init_states])
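At each step the loop writes one plus the sum of all previous states; starting from ones, this yields powers of two (hence the gist's name). The same recurrence in plain Python, with a scalar standing in for the (128, 128) tensors:

```python
n_steps = 5
states = [1.0]  # state 0, a scalar stand-in for tf.ones((128, 128))
for step in range(1, n_steps):
    # each new state is the sum of all previous states, plus one
    states.append(sum(states) + 1.0)

print(states)  # [1.0, 2.0, 4.0, 8.0, 16.0]
```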
rizar / new.prof
Created May 6, 2016 15:30
New profile
Function profiling
==================
Message: /u/bahdanau/Dist/fully-neural-lvsr/libs/blocks/blocks/monitoring/evaluators.py:181
Time in 100 calls to Function.__call__: 2.154827e-03s
Time in Function.fn.__call__: 9.248257e-04s (42.919%)
Total compile time: 4.125585e+00s
Number of Apply nodes: 0
Theano Optimizer time: 6.079912e-03s
Theano validate time: 0.000000e+00s
Theano Linker time (includes C, CUDA code generation/compiling): 9.608269e-05s
rizar / old.prof
Created May 6, 2016 15:29
Old profile
Function profiling
==================
Message: /u/bahdanau/Dist/fully-neural-lvsr/libs/blocks/blocks/monitoring/evaluators.py:181
Time in 100 calls to Function.__call__: 1.984119e-03s
Time in Function.fn.__call__: 8.468628e-04s (42.682%)
Total compile time: 5.483155e+00s
Number of Apply nodes: 0
Theano Optimizer time: 1.670289e-02s
Theano validate time: 0.000000e+00s
Theano Linker time (includes C, CUDA code generation/compiling): 2.310276e-04s
rizar / lstm.diff
Created April 1, 2016 20:54
Patch required to make reverse_words work with LSTM
diff --git a/reverse_words/__init__.py b/reverse_words/__init__.py
index b649ab5..ac296e7 100644
--- a/reverse_words/__init__.py
+++ b/reverse_words/__init__.py
@@ -14,7 +14,7 @@ from theano import tensor
from blocks.bricks import Tanh, Initializable
from blocks.bricks.base import application
from blocks.bricks.lookup import LookupTable
-from blocks.bricks.recurrent import SimpleRecurrent, Bidirectional
+from blocks.bricks.recurrent import SimpleRecurrent, LSTM, Bidirectional