Dominique Luna domluna

domluna / out.jl
Created Apr 8, 2019
Formatted cppwrapper.jl using JLFmt
struct CPolygon
    vertexlist::Ptr{Cint}
    numberofvertices::Cint
end

struct CFacet{T}
    polygonlist::Ptr{CPolygon}
    numberofpolygons::Cint
domluna / utils.jl
Created Oct 25, 2018
Formatted utils.jl
export @esc, isexpr, isline, rmlines, unblock, block, inexpr, namify, isdef,
    longdef, shortdef, @expand, makeif, prettify, splitdef, splitarg

"""
    assoc!(d, k, v)
is the same as `d[k] = v` but returns `d` rather than `v`.
"""
assoc!(d, k, v) = (d[k] = v; d)
domluna / attention_transformer.md
Created Sep 25, 2018
Notes about attention and transformer

Transformer notes

  • Current models have trouble learning dependencies between distant positions (characters/words); the number of operations needed to relate two positions grows as O(n) or O(log n) with their distance.

  • The Transformer relates any two positions in O(1) operations.

  • Encoder-decoder with residual connections; the encoder and the decoder each stack N identical layers.

  • We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, **ensures that the predictions for position i can depend only on the known outputs at positions less than i**.

def subsequent_mask(size):
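    # Sketch of a likely body (the original gist is truncated here): build a
    # boolean mask that is True where position i may attend to position j,
    # i.e. j <= i. Assumes numpy is imported as np.
    mask = np.triu(np.ones((size, size), dtype=np.uint8), k=1)
    return mask == 0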
domluna / conv_transpose.py
import tensorflow as tf
import numpy as np
x = tf.constant(np.random.randn(1, 4, 4, 2), dtype=tf.float32)
# TODO: Use `tf.layers.conv2d_transpose` to return a tensor
# with the shape (1, 8, 8, 5)
conv = 0
with tf.Session() as sess:
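One way to satisfy the TODO above, sketched under the assumption that the TF 1.x tf.layers API is intended (the 3x3 kernel size is an arbitrary choice; only the filter count and the stride determine the requested output shape):

import tensorflow as tf
import numpy as np

x = tf.constant(np.random.randn(1, 4, 4, 2), dtype=tf.float32)

# 5 filters set the output depth; stride 2 with 'same' padding doubles 4x4 to 8x8
conv = tf.layers.conv2d_transpose(x, filters=5, kernel_size=3, strides=2, padding='same')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(conv).shape)  # (1, 8, 8, 5)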
domluna / iou.py
def iou(img, y, c):
    """Pixel-wise intersection-over-union of img and y for class c."""
    intersection = 0.
    union = 0.
    img = img.reshape(-1)
    y = y.reshape(-1)
    for i in range(len(img)):
        intersection += img[i] == c and y[i] == c
        union += img[i] == c or y[i] == c
    return intersection / union
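A quick check with made-up label masks (hypothetical values, assuming numpy arrays of class labels):

import numpy as np

img = np.array([[1, 2], [2, 0]])  # hypothetical predicted labels
y = np.array([[1, 2], [0, 0]])    # hypothetical ground-truth labels
print(iou(img, y, 2))             # 1 overlapping pixel / 2 in the union = 0.5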
domluna / load.py
"""
Load SavedModel
Output graphdef and checkpoint files
"""
import tensorflow as tf
import argparse
import sys
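The script body is cut off above; here is a sketch of how the described load-and-export step could look, continuing from those imports and assuming the TF 1.x SavedModel loader API plus hypothetical output file names:

parser = argparse.ArgumentParser()
parser.add_argument("export_dir")  # hypothetical: directory containing the SavedModel
args = parser.parse_args()

graph = tf.Graph()
with graph.as_default(), tf.Session(graph=graph) as sess:
    # Load the serving graph and its variables into this session
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], args.export_dir)
    # Write the graph definition out as a text protobuf
    tf.train.write_graph(graph.as_graph_def(), ".", "graph.pbtxt")
    # Save the restored variables as a regular checkpoint
    # (assumes the loader repopulates the global-variables collection)
    tf.train.Saver().save(sess, "./model.ckpt")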
domluna / steps.sh
Created May 27, 2017 (forked from albertstartup/steps.sh)
AWS GPU instance, Ubuntu 16.04, NVIDIA driver 367, CUDA 8
# Required downloads:
# NVIDIA-Linux-x86_64-367.27.run
# cuda_8.0.27_linux.run
# cudnn-8.0-linux-x64-v5.0-ga.tgz
sudo apt-get install build-essential
sudo apt-get install linux-image-extra-`uname -r`
sudo ./NVIDIA-Linux-x86_64-367.27.run
./cuda_8.0.27_linux.run --extract=`pwd`/extracts
sudo ./extracts/cuda-linux64-rel-8.0.27-20733550.run
domluna / foo.py
import tensorflow as tf
import numpy as np

w = np.arange(1, 10, dtype=np.float32).reshape((3, 3, 1, 1))
f = tf.Variable(tf.constant(w))
input = tf.placeholder(tf.float32, (None, 28, 28, 1))
# 3x3 filter, stride 2, 'SAME' padding: a 28x28 input gives a 14x14 output
conv = tf.nn.conv2d(input, f, [1, 2, 2, 1], 'SAME')
s = tf.Session()
x = np.zeros((28, 28), dtype=np.float32)
domluna / foo.py
import tensorflow as tf
import numpy as np

w = np.arange(1, 10, dtype=np.float32).reshape((3, 3, 1, 1))
f = tf.Variable(tf.constant(w))
input = tf.placeholder(tf.float32, (None, 28, 28, 1))
# 3x3 filter, stride 2, 'VALID' padding: a 28x28 input gives a 13x13 output
conv = tf.nn.conv2d(input, f, [1, 2, 2, 1], 'VALID')
s = tf.Session()
x = np.zeros((28, 28), dtype=np.float32)
domluna / env.yml
Last active Nov 10, 2016
CarND Term 1 environment sample
name: CarND-Term1
channels:
- https://conda.anaconda.org/menpo
dependencies:
- python==3.5.2
- numpy
- matplotlib
- tensorflow
- jupyter
- opencv3