Dominique Luna domluna

@domluna
domluna / .zshrc
Created May 6, 2016 02:55
new zsh config with zgen
# User configuration
export PATH="$HOME/.linuxbrew/bin:$PATH"
export MANPATH="$HOME/.linuxbrew/share/man:$MANPATH"
export INFOPATH="$HOME/.linuxbrew/share/info:$INFOPATH"
# add cuda tools to command path
export PATH=/usr/local/cuda/bin:${PATH}
export MANPATH=/usr/local/cuda/man:${MANPATH}
# add cuda libraries to library path
@domluna
domluna / out.jl
Created April 8, 2019 03:04
Formatted cppwrapper.jl using JLFmt
struct CPolygon
    vertexlist::Ptr{Cint}
    numberofvertices::Cint
end
struct CFacet{T}
    polygonlist::Ptr{CPolygon}
    numberofpolygons::Cint
@domluna
domluna / utils.jl
Created October 25, 2018 03:26
Formatted utils.jl
export @esc, isexpr, isline, rmlines, unblock, block, inexpr, namify, isdef,
       longdef, shortdef, @expand, makeif, prettify, splitdef, splitarg

"""
    assoc!(d, k, v)

is the same as `d[k] = v` but returns `d` rather than `v`.
"""
assoc!(d, k, v) = (d[k] = v; d)
@domluna
domluna / attention_transformer.md
Created September 25, 2018 00:57
Notes about attention and transformer

Transformer notes

  • recurrent and convolutional models have trouble learning dependencies over distance (i.e. between far-apart characters/words); the number of ops needed to relate two positions scales as O(n) or O(log n) with distance.

  • the transformer relates any two positions in a constant O(1) number of sequential ops, via self-attention.

  • encoder-decoder with residual conns; the encoder and decoder stacks each repeat their layers N times.

  • We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, **ensures that the predictions for position i can depend only on the known outputs at positions less than i**.

def subsequent_mask(size):
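The preview cuts off at the function definition; a plausible completion in numpy (an assumption — the original gist body is not shown here) builds the mask described in the note above:

```python
import numpy as np

def subsequent_mask(size):
    """Boolean (size, size) mask: entry (i, j) is True iff position i
    is allowed to attend to position j, i.e. j <= i."""
    # Upper-triangular above the diagonal marks the "future" positions.
    future = np.triu(np.ones((size, size), dtype=np.uint8), k=1)
    return future == 0  # True where attention is allowed

print(subsequent_mask(3).astype(int))
# [[1 0 0]
#  [1 1 0]
#  [1 1 1]]
```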
@domluna
domluna / foo.py
Last active April 12, 2018 23:37
Print the size in bytes of a Tensorflow GraphDef
import tensorflow as tf

def main():
    files = ['model-' + str(x) for x in range(20000, 70000, 10000)]
    for f in files:
        with tf.Graph().as_default():
            sess = tf.Session()
            saver = tf.train.import_meta_graph('./test_graph_size/' + f + '.meta')
            saver.restore(sess, './test_graph_size/' + f)
            print(sess.graph_def.ByteSize())
@domluna
domluna / README.md
Last active February 28, 2018 12:08
Vanilla policy gradient, no baseline

Run with defaults

python vpg.py

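As a companion to the README, here is a minimal sketch of a vanilla policy gradient (REINFORCE) estimate with no baseline — a linear softmax policy over numpy arrays, not necessarily what `vpg.py` itself does:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Return-to-go G_t = sum_{k >= t} gamma^(k-t) * r_k for each step t."""
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return np.array(out[::-1])

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def policy_gradient(theta, states, actions, rewards, gamma=0.99):
    """REINFORCE estimate with no baseline:
    grad = sum_t G_t * d/dtheta log pi(a_t | s_t),
    for a linear softmax policy pi(a | s) = softmax(theta^T s)."""
    grad = np.zeros_like(theta)
    for s, a, G in zip(states, actions, discounted_returns(rewards, gamma)):
        probs = softmax(theta.T @ s)
        onehot = np.zeros_like(probs)
        onehot[a] = 1.0
        # Gradient of log-softmax for a linear policy: s outer (onehot - probs).
        grad += G * np.outer(s, onehot - probs)
    return grad
```

Ascending this gradient increases the log-probability of actions in proportion to the return that followed them; without a baseline the estimate is unbiased but high-variance.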
@domluna
domluna / README
Last active July 14, 2017 00:15
Proportional Cross-Entropy Method on CartPole-v0
To replicate, run:
python cem.py --algorithm pcem --outdir CartPole-v0-pcem
The --outdir argument is optional; if omitted, results will be written to /tmp/CartPole-v0-pcem.
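For reference, the plain cross-entropy method can be sketched in a few lines of numpy — a generic version on a toy quadratic objective, not the `pcem` variant or the CartPole setup in `cem.py`:

```python
import numpy as np

def cem(objective, dim, iters=100, pop=64, elite_frac=0.25, noise=0.05, seed=0):
    """Cross-entropy method: sample a Gaussian population, keep the
    highest-scoring elite fraction, refit the Gaussian to the elites, repeat.
    A small `noise` floor on the std guards against premature convergence."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)
    std = 2.0 * np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(pop, dim))
        scores = np.array([objective(s) for s in samples])
        elites = samples[np.argsort(scores)[-n_elite:]]  # highest scores
        mean = elites.mean(axis=0)
        std = elites.std(axis=0) + noise
    return mean

# Maximize -||x - 3||^2; the optimum is x = [3, 3].
best = cem(lambda x: -np.sum((x - 3.0) ** 2), dim=2)
```

Because only the ranking of scores matters when picking elites, CEM works with non-differentiable objectives such as episode returns.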
import tensorflow as tf
import numpy as np

x = tf.constant(np.random.randn(1, 4, 4, 2), dtype=tf.float32)
# Use `tf.layers.conv2d_transpose` to return a tensor
# with the shape (1, 8, 8, 5): 5 output filters, and a
# 2x2 kernel with stride 2 upsamples 4x4 -> 8x8.
conv = tf.layers.conv2d_transpose(x, 5, 2, 2)
with tf.Session() as sess:
def iou(img, y, c):
    # Intersection-over-union of class `c` between prediction `img` and label `y`.
    intersection = 0.
    union = 0.
    img = img.reshape(-1)
    y = y.reshape(-1)
    for i in range(len(img)):
        intersection += img[i] == c and y[i] == c
        union += img[i] == c or y[i] == c
    return intersection / union
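A quick sanity check of `iou` on tiny label maps (the arrays below are made up for illustration; `iou` is restated so the snippet runs on its own):

```python
import numpy as np

def iou(img, y, c):  # same as the gist's iou, restated to run standalone
    intersection, union = 0., 0.
    img, y = img.reshape(-1), y.reshape(-1)
    for i in range(len(img)):
        intersection += img[i] == c and y[i] == c
        union += img[i] == c or y[i] == c
    return intersection / union

pred  = np.array([[1, 1], [0, 2]])
label = np.array([[1, 0], [0, 2]])
print(iou(pred, label, 1))  # 1 pixel in both, 2 pixels in either -> 0.5
```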
"""
Load SavedModel
Output graphdef and checkpoint files
"""
import tensorflow as tf
import argparse
import sys