Octavio Arriaga (oarriaga)

@jin-zhe
jin-zhe / actionrecognitiondatasets.md
Last active March 18, 2024 23:47
An overview of action recognition datasets and their detection classes

Activity Recognition Datasets

An overview of recent action recognition datasets and their detection classes

Concepts & terminology:

  • Action: An atomic, low-level movement such as standing up, sitting down, walking, or talking.
  • Activity/event: A higher-level occurrence than an action, such as dining, playing, or dancing.
  • Trimmed video: A short video clip containing an event/action/activity of interest.
  • Untrimmed video: A video clip of arbitrary length that may contain stretches without any activity of interest.
  • Localization: Locating an instance of an event/action/activity within a video at a spatial or temporal scale.
  • Spatial localization: Locating the region/area of an instance of an action/activity within a video.
@batFINGER
batFINGER / uv_mercator_project.py
Created September 6, 2017 21:57
Mercator UV Projection
bl_info = {
    "name": "Mercator Project",
    "author": "batFINGER",
    "version": (1, 0),
    "blender": (2, 79, 0),
    "location": "View3D > Mesh > UV UnWrap > Mercator Project",
    "description": "UV Mercator Projection",
    "warning": "",
    "wiki_url": "",
    "category": "UV",
}
@codingPingjun
codingPingjun / create_prior_box.py
Created January 27, 2017 03:11
SSD prior box creation
import pickle
import numpy as np
import pdb

img_width, img_height = 300, 300
box_configs = [
    {'layer_width': 38, 'layer_height': 38, 'num_prior': 3, 'min_size': 30.0,
     'max_size': None, 'aspect_ratios': [1.0, 2.0, 1/2.0]},
    {'layer_width': 19, 'layer_height': 19, 'num_prior': 6, 'min_size': 60.0,
     'max_size': 114.0, 'aspect_ratios': [1.0, 1.0, 2.0, 1/2.0, 3.0, 1/3.0]},
    # ... remaining feature-map configurations omitted in the preview
]
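Each entry describes one feature map: its spatial resolution, how many priors sit at every cell, and the box scales and aspect ratios. As a rough sketch of the usual SSD recipe driven by such a config (helper names are assumptions, not the gist's functions):

def prior_sizes(config):
    """Per-aspect-ratio box sizes in pixels for one feature map."""
    sizes = []
    for ar in config['aspect_ratios']:
        if ar == 1.0 and not sizes:
            # first unit aspect ratio: the minimum box
            sizes.append((config['min_size'], config['min_size']))
        elif ar == 1.0:
            # second unit aspect ratio: geometric mean of min and max size
            s = np.sqrt(config['min_size'] * config['max_size'])
            sizes.append((s, s))
        else:
            sizes.append((config['min_size'] * np.sqrt(ar),
                          config['min_size'] / np.sqrt(ar)))
    return sizes

def prior_centers(config):
    """Box centres: one per feature-map cell, in pixel coordinates."""
    step_x = img_width / float(config['layer_width'])
    step_y = img_height / float(config['layer_height'])
    cx = (np.arange(config['layer_width']) + 0.5) * step_x
    cy = (np.arange(config['layer_height']) + 0.5) * step_y
    return np.stack(np.meshgrid(cx, cy), axis=-1).reshape(-1, 2)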
@cbaziotis
cbaziotis / AttentionWithContext.py
Last active April 25, 2022 14:37
Keras Layer that implements an Attention mechanism, with a context/query vector, for temporal data. Supports Masking. Follows the work of Yang et al. [https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf] "Hierarchical Attention Networks for Document Classification"
from keras import backend as K

def dot_product(x, kernel):
    """
    Wrapper for dot product operation, in order to be compatible with both
    Theano and Tensorflow
    Args:
        x (): input
        kernel (): weights
    Returns:
    """
    if K.backend() == 'tensorflow':
        # TensorFlow does not broadcast a 1-D kernel over the batch the way
        # Theano's dot does, so expand the kernel and squeeze the result
        return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
    else:
        return K.dot(x, kernel)
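For reference, the mechanism the layer implements (following Yang et al.) scores each timestep against a learned context/query vector and returns the attention-weighted sum of the hidden states. A minimal NumPy sketch with assumed variable names, not the layer's actual API:

import numpy as np

def attention_with_context(h, W, b, u):
    """h: (timesteps, dim) hidden states; W, b, u: learned parameters."""
    uit = np.tanh(h @ W + b)       # per-timestep hidden representation
    ait = uit @ u                  # similarity of each timestep to the context vector
    a = np.exp(ait - ait.max())
    a /= a.sum()                   # softmax over timesteps -> attention weights
    return (h * a[:, None]).sum(axis=0)   # weighted sum of hidden states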
@mbollmann
mbollmann / attention_lstm.py
Last active June 26, 2023 10:08
My attempt at creating an LSTM with attention in Keras
class AttentionLSTM(LSTM):
    """LSTM with attention mechanism

    This is an LSTM incorporating an attention mechanism into its hidden states.
    Currently, the context vector calculated from the attended vector is fed
    into the model's internal states, closely following the model by Xu et al.
    (2016, Sec. 3.1.2), using a soft attention model following
    Bahdanau et al. (2014).

    The layer expects two inputs instead of the usual one:
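As a rough illustration of the Bahdanau-style soft attention referenced in the docstring (additive scoring of an attended sequence against the previous hidden state; parameter names here are assumptions, not the gist's):

import numpy as np

def soft_attention_context(s_prev, annotations, W_a, U_a, v_a):
    """s_prev: (dim,) previous state; annotations: (timesteps, dim_a) attended vectors."""
    e = np.tanh(s_prev @ W_a + annotations @ U_a) @ v_a   # (timesteps,) attention energies
    a = np.exp(e - e.max())
    a /= a.sum()                                           # attention weights
    return (annotations * a[:, None]).sum(axis=0)          # context vector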
@mbollmann
mbollmann / hidden_state_lstm.py
Created August 17, 2016 10:02
Keras LSTM that inputs/outputs its internal states, e.g. for hidden state transfer
from keras import backend as K
from keras.layers.recurrent import LSTM

class HiddenStateLSTM(LSTM):
    """LSTM with input/output capabilities for its hidden state.

    This layer behaves just like an LSTM, except that it accepts further inputs
    to be used as its initial states, and returns additional outputs,
    representing the layer's final states.
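In more recent Keras versions the same hidden-state transfer can be expressed directly with return_state and initial_state; a minimal sketch under that assumption (layer sizes and tensor names are invented for illustration):

from keras.layers import Input, LSTM
from keras.models import Model

src = Input(shape=(None, 32))     # source sequence
tgt = Input(shape=(None, 32))     # target sequence

# the first LSTM also returns its final hidden and cell states
enc_out, state_h, state_c = LSTM(64, return_state=True)(src)

# the second LSTM starts from those states (hidden-state transfer)
dec_out = LSTM(64)(tgt, initial_state=[state_h, state_c])

model = Model([src, tgt], dec_out)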
@falcondai
falcondai / guided_relu.py
Last active April 1, 2021 09:12
Tensorflow implementation of guided backpropagation through ReLU
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.python.ops import gen_nn_ops
@ops.RegisterGradient("GuidedRelu")
def _GuidedReluGrad(op, grad):
    # ReLU gradient, additionally zeroed where the incoming gradient is negative
    # (guided backpropagation); tf.select was renamed tf.where in TensorFlow 1.0
    return tf.select(0. < grad, gen_nn_ops._relu_grad(grad, op.outputs[0]), tf.zeros(grad.get_shape()))

if __name__ == '__main__':
    with tf.Session() as sess:
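The preview cuts off here; the registered gradient only applies to ReLU ops built under a gradient override. A minimal usage sketch for TF 1.x graph mode (the small model below is an assumption, purely for illustration):

g = tf.get_default_graph()
with g.gradient_override_map({'Relu': 'GuidedRelu'}):
    x = tf.placeholder(tf.float32, [None, 784])
    h = tf.nn.relu(tf.layers.dense(x, 128))   # ReLU built under the override
    score = tf.reduce_max(h, axis=1)

# guided backpropagation: gradients w.r.t. the input now use the GuidedRelu gradient
guided_grads = tf.gradients(score, x)[0]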
@karpathy
karpathy / pg-pong.py
Created May 30, 2016 22:50
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import cPickle as pickle
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
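The heart of the update is weighting each action's log-probability gradient by a discounted return. A sketch of the discounting step consistent with the gamma above (the reset at non-zero rewards assumes, Pong-specifically, that each point ends a rally):

def discount_rewards(r):
    """Each reward plus the gamma-decayed sum of the rewards that follow it."""
    discounted = np.zeros_like(r, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(r))):
        if r[t] != 0:
            running = 0.0   # a point was scored: reset the return at game boundaries
        running = running * gamma + r[t]
        discounted[t] = running
    return discounted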
@karpathy
karpathy / gist:587454dc0146a6ae21fc
Last active March 19, 2024 05:50
An efficient, batched LSTM.
"""
This is a batched LSTM forward and backward pass
"""
import numpy as np
import code
class LSTM:

    @staticmethod
    def init(input_size, hidden_size, fancy_forget_bias_init = 3):
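The preview stops at the weight initialiser. For reference, a single (non-batched) forward step of a standard LSTM cell looks roughly like this; a minimal NumPy sketch with assumed names, not the gist's batched implementation:

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, Wx, Wh, b):
    """One timestep. Wx: (input_size, 4*hidden), Wh: (hidden, 4*hidden), b: (4*hidden,)."""
    gates = x @ Wx + h_prev @ Wh + b               # all four gate pre-activations at once
    i, f, o, g = np.split(gates, 4, axis=-1)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
    g = np.tanh(g)                                 # candidate cell update
    c = f * c_prev + i * g                         # new cell state
    h = o * np.tanh(c)                             # new hidden state
    return h, c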
@Qwlouse
Qwlouse / lstm_reference.ipynb
Last active December 15, 2020 18:54
LSTM Reference Implementation in Python