Vinh Khuc vinhkhuc

@volkancirik
volkancirik / treernn.py
Last active October 9, 2018 13:01
PyTorch TreeRNN
"""
TreeLSTM[1] implementation in Pytorch
Based on dynet benchmarks :
https://github.com/neulab/dynet-benchmark/blob/master/dynet-py/treenn.py
https://github.com/neulab/dynet-benchmark/blob/master/chainer/treenn.py
Other References:
https://github.com/pytorch/examples/tree/master/word_language_model
https://github.com/pfnet/chainer/blob/29c67fe1f2140fa8637201505b4c5e8556fad809/chainer/functions/activation/slstm.py
https://github.com/stanfordnlp/treelstm
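
The excerpt above stops at the module docstring, so here is a minimal hedged sketch of the child-sum TreeLSTM node composition described in Tai et al. (2015) and implemented in the stanfordnlp/treelstm reference; the class and parameter names below are illustrative and are not taken from volkancirik's gist.

import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    """Composes a node's state from its children's (h, c) states, child-sum TreeLSTM style."""
    def __init__(self, in_dim, mem_dim):
        super().__init__()
        self.ioux = nn.Linear(in_dim, 3 * mem_dim)   # input, output, update gates from the node input x
        self.iouh = nn.Linear(mem_dim, 3 * mem_dim)  # ... and from the summed child hidden states
        self.fx = nn.Linear(in_dim, mem_dim)         # forget gate contribution from x
        self.fh = nn.Linear(mem_dim, mem_dim)        # one forget gate per child

    def forward(self, x, child_h, child_c):
        # x: (in_dim,); child_h, child_c: (num_children, mem_dim)
        h_sum = child_h.sum(dim=0)
        i, o, u = torch.chunk(self.ioux(x) + self.iouh(h_sum), 3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.fx(x) + self.fh(child_h))   # (num_children, mem_dim)
        c = i * u + (f * child_c).sum(dim=0)
        h = o * torch.tanh(c)
        return h, c

For a leaf node, child_h and child_c can simply be zero-row tensors of shape (0, mem_dim).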
@karpathy
karpathy / pg-pong.py
Created May 30, 2016 22:50
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import pickle  # the original gist targets Python 2 and uses cPickle; pickle is the Python 3 equivalent
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
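
The gamma hyperparameter above controls the discounted return that policy gradients use as the learning signal. The helper below is an illustrative sketch of that standard computation, not code copied from this excerpt; the name discount_rewards is my own label.

import numpy as np

def discount_rewards(r, gamma=0.99):
    """Compute discounted returns G_t = r_t + gamma * G_{t+1} over a 1-D reward array."""
    discounted = np.zeros_like(r, dtype=np.float64)
    running_add = 0.0
    for t in reversed(range(len(r))):
        running_add = running_add * gamma + r[t]
        discounted[t] = running_add
    return discounted

# example: a single reward of +1 at the end of a 5-step episode
print(discount_rewards(np.array([0.0, 0.0, 0.0, 0.0, 1.0])))
# -> approximately [0.9606 0.9703 0.9801 0.99   1.    ]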
@kingjr
kingjr / hinge_vs_loss.py
Last active August 25, 2020 01:47
Illustrate how SVM and Logistic Regression are very similar, except that SVM relies strictly on a subset of the data (the support vectors).
# Author: Jean-Remi King <jeanremi.king@gmail.com>
"""
Illustrate how the hinge loss and the log loss,
used in SVM and Logistic Regression respectively,
focus on a variable number of samples.
For simplicity, we do not consider
regularization or the penalty (C) factor.
"""
import numpy as np
import matplotlib.animation as animation
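
The gist's animation is built on the comparison sketched below: the hinge loss is exactly zero for samples classified beyond the margin, so only a subset of samples (the support vectors) influences the SVM solution, while the log loss is nonzero everywhere, so every sample contributes to Logistic Regression. This static plot is an illustrative sketch, not the gist's animation code.

import numpy as np
import matplotlib.pyplot as plt

# signed margin z = y * f(x): negative means misclassified, large positive means confidently correct
z = np.linspace(-3, 3, 200)
hinge = np.maximum(0.0, 1.0 - z)       # SVM hinge loss: exactly zero once z >= 1
log_loss = np.log(1.0 + np.exp(-z))    # logistic (log) loss: strictly positive everywhere

plt.plot(z, hinge, label="hinge loss (SVM)")
plt.plot(z, log_loss, label="log loss (Logistic Regression)")
plt.xlabel("margin y * f(x)")
plt.ylabel("loss")
plt.legend()
plt.show()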
@tzutalin
tzutalin / deploy.prototxt
Last active May 11, 2018 10:01
Network In Network
name: "nin_imagenet"
input: "data"
input_shape {
dim: 10
dim: 3
dim: 224
dim: 224
}
layers {
bottom: "data"