
Octavio Arriaga (oarriaga)

oarriaga / raytracing.py
Created December 4, 2020 13:18 — forked from rossant/raytracing.py
Very simple ray tracing engine in (almost) pure Python. Depends on NumPy and Matplotlib. Diffuse and specular lighting, simple shadows, reflections, no refraction. Purely sequential algorithm, slow execution.
"""
MIT License
Copyright (c) 2017 Cyrille Rossant
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
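The heart of such an engine is the ray-sphere intersection test. A minimal standalone sketch of that test (illustrative only, not code taken from raytracing.py; the function name intersect_sphere is mine):

import numpy as np

def intersect_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0,
    # assuming direction is normalized (so the quadratic's leading coefficient is 1).
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b ** 2 - 4.0 * c
    if disc < 0:
        return np.inf                      # the ray misses the sphere
    t_near = (-b - np.sqrt(disc)) / 2.0
    t_far = (-b + np.sqrt(disc)) / 2.0
    if t_far < 0:
        return np.inf                      # the sphere is behind the ray origin
    return t_near if t_near > 0 else t_far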
oarriaga / AttentionWithContext.py
Created August 29, 2017 15:37 — forked from cbaziotis/AttentionWithContext.py
Keras Layer that implements an Attention mechanism, with a context/query vector, for temporal data. Supports Masking. Follows the work of Yang et al. [https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf] "Hierarchical Attention Networks for Document Classification"
from keras import backend as K

def dot_product(x, kernel):
    """
    Wrapper for the dot product operation, to stay compatible with both the
    Theano and TensorFlow backends.
    Args:
        x: input tensor
        kernel: weight vector
    Returns:
        the dot product of x and kernel along the last axis
    """
    if K.backend() == 'tensorflow':
        return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
    return K.dot(x, kernel)
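For context, the attention step the layer implements can be sketched with the same backend calls: project each timestep, score it against a learned context vector u, and average the timesteps with the softmax of those scores. The function name and weight shapes below are illustrative assumptions, not taken from the gist; the sketch reuses the dot_product helper above.

from keras import backend as K

def attention_with_context_sketch(x, W, b, u):
    # x: (batch, timesteps, features); W: (features, features);
    # b: (features,); u: (features,) learned context/query vector.
    uit = K.tanh(K.bias_add(K.dot(x, W), b))           # per-timestep projection
    ait = K.softmax(dot_product(uit, u))               # attention weights over timesteps
    return K.sum(K.expand_dims(ait, -1) * x, axis=1)   # weighted sum, (batch, features)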
oarriaga / lstm_reference.ipynb
Created August 14, 2017 14:16 — forked from Qwlouse/lstm_reference.ipynb
LSTM Reference Implementation in Python
oarriaga / guided_relu.py
Created June 20, 2017 14:46 — forked from falcondai/guided_relu.py
TensorFlow implementation of guided backpropagation through ReLU
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.python.ops import gen_nn_ops

@ops.RegisterGradient("GuidedRelu")
def _GuidedReluGrad(op, grad):
    # Guided backprop: only pass the gradient back where it is positive;
    # gen_nn_ops._relu_grad already zeroes it where the forward activation was not.
    # (tf.select was renamed tf.where in TensorFlow 1.0.)
    return tf.select(0. < grad,
                     gen_nn_ops._relu_grad(grad, op.outputs[0]),
                     tf.zeros(grad.get_shape()))

if __name__ == '__main__':
    with tf.Session() as sess:
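A hedged usage sketch in TensorFlow 1.x graph mode: once the "GuidedRelu" gradient is registered as above, graph.gradient_override_map swaps it in for any ReLU built inside the context, so the input gradient becomes a guided-backprop saliency map. The placeholder shape, conv layer, and score below are illustrative, not part of the gist.

import tensorflow as tf

graph = tf.get_default_graph()
images = tf.placeholder(tf.float32, shape=[None, 224, 224, 3])
with graph.gradient_override_map({'Relu': 'GuidedRelu'}):
    features = tf.nn.relu(tf.layers.conv2d(images, 8, 3))    # this ReLU now uses the guided gradient
score = tf.reduce_max(features)
guided_saliency = tf.gradients(score, images)[0]             # guided backprop w.r.t. the input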
oarriaga / spatial_transformer_network.py
Last active January 5, 2021 06:51
Implementation of Spatial Transformer Networks (https://arxiv.org/abs/1506.02025) in Keras 2.
from keras.layers.core import Layer
import keras.backend as K

if K.backend() == 'tensorflow':
    import tensorflow as tf

def K_arange(start, stop=None, step=1, dtype='int32'):
    # Backend-level arange (wraps tf.range), which the Keras backend API lacks.
    result = tf.range(start, limit=stop, delta=step, name='arange')
    if dtype != 'int32':
        result = K.cast(result, dtype)
    return result
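A small illustrative use, assuming the TensorFlow backend (the grid sizes and variable names are mine, not the gist's): K_arange builds the normalized target coordinates in [-1, 1] that an STN's bilinear sampler interpolates at.

out_height, out_width = 30, 40
x_coords = K_arange(0, out_width, dtype='float32') / (out_width - 1) * 2.0 - 1.0
y_coords = K_arange(0, out_height, dtype='float32') / (out_height - 1) * 2.0 - 1.0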
oarriaga / ProgrammaticNotebook.ipynb
Created May 24, 2017 09:46 — forked from fperez/ProgrammaticNotebook.ipynb
Creating an IPython Notebook programmatically
"""
This is a batched LSTM forward and backward pass
"""
import numpy as np
import code
class LSTM:
@staticmethod
def init(input_size, hidden_size, fancy_forget_bias_init = 3):
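The preview cuts off inside init. A hedged sketch of what such an initializer plausibly returns: one matrix stacking a bias row with the input and recurrent weights for all four gates, plus an optionally positive forget-gate bias. The name init_sketch and the exact layout are assumptions, not taken from the gist.

import numpy as np

def init_sketch(input_size, hidden_size, fancy_forget_bias_init=3):
    # Rows: [bias, input weights, recurrent weights]; columns: the four gates.
    WLSTM = np.random.randn(1 + input_size + hidden_size, 4 * hidden_size)
    WLSTM /= np.sqrt(input_size + hidden_size)         # keep pre-activations bounded
    WLSTM[0, :] = 0                                     # biases start at zero
    if fancy_forget_bias_init != 0:
        # a positive forget-gate bias encourages remembering early in training
        WLSTM[0, hidden_size:2 * hidden_size] = fancy_forget_bias_init
    return WLSTM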
oarriaga / create_prior_box.py
Created January 27, 2017 16:28 — forked from codingPingjun/create_prior_box.py
SSD prior box creation
import pickle
import numpy as np
import pdb

img_width, img_height = 300, 300
box_configs = [
    {'layer_width': 38, 'layer_height': 38, 'num_prior': 3, 'min_size': 30.0,
     'max_size': None, 'aspect_ratios': [1.0, 2.0, 1/2.0]},
    {'layer_width': 19, 'layer_height': 19, 'num_prior': 6, 'min_size': 60.0,
     'max_size': 114.0, 'aspect_ratios': [1.0, 1.0, 2.0, 1/2.0, 3.0, 1/3.0]},
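For reference, a hedged sketch of how one entry of box_configs expands into prior boxes: one center per feature-map cell and one box per aspect ratio, normalized to [0, 1]. The helper name and the simplifications (no extra box for max_size, no clipping) are mine, not the gist's.

import numpy as np

def priors_for_layer(cfg, img_width=300, img_height=300):
    lw, lh = cfg['layer_width'], cfg['layer_height']
    step_x, step_y = img_width / float(lw), img_height / float(lh)
    cx, cy = np.meshgrid((np.arange(lw) + 0.5) * step_x,
                         (np.arange(lh) + 0.5) * step_y)
    cx, cy = cx.ravel(), cy.ravel()
    boxes = []
    for ar in cfg['aspect_ratios']:
        w = cfg['min_size'] * np.sqrt(ar)               # wider boxes for larger ratios
        h = cfg['min_size'] / np.sqrt(ar)
        boxes.append(np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1))
    scale = np.array([img_width, img_height, img_width, img_height], dtype=np.float64)
    return np.concatenate(boxes, axis=0) / scale        # (num_priors, 4) in [0, 1]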
oarriaga / attention_lstm.py
Created January 12, 2017 00:49 — forked from mbollmann/attention_lstm.py
My attempt at creating an LSTM with attention in Keras
class AttentionLSTM(LSTM):
    """LSTM with attention mechanism.

    This is an LSTM incorporating an attention mechanism into its hidden states.
    Currently, the context vector calculated from the attended vector is fed
    into the model's internal states, closely following the model by Xu et al.
    (2016, Sec. 3.1.2), using a soft attention model following
    Bahdanau et al. (2014).

    The layer expects two inputs instead of the usual one:
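A hedged sketch of the per-step soft attention such a layer computes (Bahdanau-style additive scoring); the function name and the weights W_h, W_a, v are hypothetical stand-ins for the layer's internal parameters.

from keras import backend as K

def soft_attention_step(h, attended, W_h, W_a, v):
    # h: (batch, units) current hidden state; attended: (batch, timesteps, features);
    # W_h: (units, att_dim); W_a: (features, att_dim); v: (att_dim, 1).
    energies = K.dot(K.tanh(K.expand_dims(K.dot(h, W_h), 1) + K.dot(attended, W_a)), v)
    weights = K.softmax(K.squeeze(energies, -1))                  # (batch, timesteps)
    context = K.sum(K.expand_dims(weights, -1) * attended, axis=1)
    return context                                                # (batch, features), mixed into the gates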