Jaemin Cho (j-min)

@nikitakit
nikitakit / tf_beam_decoder.py
Last active Aug 16, 2019
Tensorflow Beam Search
"""
Beam decoder for tensorflow
Sample usage:
```
from tf_beam_decoder import beam_decoder
decoded_sparse, decoded_logprobs = beam_decoder(
    cell=cell,
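The preview cuts off mid-call, but the underlying idea is standard beam search: at each step, expand every kept hypothesis with each candidate token and retain the k best by cumulative log-probability. A minimal framework-free sketch of one such step (the names and the `log_probs_fn` callback are illustrative, not the gist's API):

```python
import heapq

def beam_step(beams, log_probs_fn, k):
    # beams: list of (cumulative_logprob, token_list) hypotheses
    # log_probs_fn(tokens) -> dict mapping each next token to its log-probability
    candidates = []
    for score, tokens in beams:
        for tok, lp in log_probs_fn(tokens).items():
            candidates.append((score + lp, tokens + [tok]))
    # keep only the k highest-scoring expanded hypotheses
    return heapq.nlargest(k, candidates, key=lambda c: c[0])
```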
@monikkinom
monikkinom / rnn-lstm.py
Last active Sep 3, 2019
Tensorflow RNN-LSTM implementation to count the number of set bits in a binary string
# Source code for the blog post at http://monik.in/a-noobs-guide-to-implementing-rnn-lstm-using-tensorflow/
import numpy as np
import random
from random import shuffle
import tensorflow as tf
# Pre-1.0 TensorFlow import paths, kept commented out for reference:
# from tensorflow.models.rnn import rnn_cell
# from tensorflow.models.rnn import rnn
NUM_EXAMPLES = 10000
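For context, the training data for this task is easy to generate: random fixed-length bit strings as input sequences, with the count of ones as the target class. A sketch of one way to build a single example (my code, not the gist's):

```python
import random
import numpy as np

def make_example(length=20):
    bits = [random.randint(0, 1) for _ in range(length)]
    x = np.array(bits, dtype=np.float32).reshape(length, 1)  # one bit per timestep
    y = sum(bits)  # target class: the number of set bits (0..length)
    return x, y
```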
@P7h
P7h / tmux__CentOS__build_from_source.sh
Last active Sep 6, 2019
Installation steps for tmux 2.0 and 2.3 on Ubuntu, or building tmux v2.5 from source on Ubuntu and CentOS.
# Steps to build and install tmux from source.
# Takes < 25 seconds on EC2, even on a low-end instance.
VERSION=2.7
sudo yum -y remove tmux
sudo yum -y install wget tar libevent-devel ncurses-devel
wget https://github.com/tmux/tmux/releases/download/${VERSION}/tmux-${VERSION}.tar.gz
tar xzf tmux-${VERSION}.tar.gz
rm -f tmux-${VERSION}.tar.gz
cd tmux-${VERSION}
# Standard autotools build and install:
./configure
make -j"$(nproc)"
sudo make install
@vadimkantorov
vadimkantorov / compact_bilinear_pooling.py
Last active Sep 16, 2019
Compact Bilinear Pooling in PyTorch using the new FFT support
import torch

class CompactBilinearPooling(torch.nn.Module):
    def __init__(self, input_dim1, input_dim2, output_dim, sum_pool=True):
        super(CompactBilinearPooling, self).__init__()
        self.output_dim = output_dim
        self.sum_pool = sum_pool
        # Build a dense (input_dim, output_dim) count-sketch projection from
        # random hash indices rand_h and random signs rand_s.
        generate_sketch_matrix = lambda rand_h, rand_s, input_dim, output_dim: torch.sparse.FloatTensor(
            torch.stack([torch.arange(input_dim, out=torch.LongTensor()), rand_h.long()]),
            rand_s.float(), [input_dim, output_dim]).to_dense()
        self.sketch1 = torch.nn.Parameter(generate_sketch_matrix(
            torch.randint(output_dim, size=(input_dim1,)), 2 * torch.randint(2, size=(input_dim1,)) - 1,
            input_dim1, output_dim), requires_grad=False)
        self.sketch2 = torch.nn.Parameter(generate_sketch_matrix(
            torch.randint(output_dim, size=(input_dim2,)), 2 * torch.randint(2, size=(input_dim2,)) - 1,
            input_dim2, output_dim), requires_grad=False)
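The preview stops before the forward pass. Compact bilinear pooling then combines the two sketched vectors by circular convolution, which is an elementwise product in the frequency domain. A sketch of that step using the modern torch.fft API (my reconstruction; the gist targets the older torch.rfft interface, and the spatial pooling axis is an assumption):

```python
def forward(self, x1, x2):
    # project each input with its fixed count-sketch matrix: (..., input_dim) -> (..., output_dim)
    fft1 = torch.fft.rfft(x1.matmul(self.sketch1), dim=-1)
    fft2 = torch.fft.rfft(x2.matmul(self.sketch2), dim=-1)
    # elementwise product in the frequency domain == circular convolution of the sketches
    out = torch.fft.irfft(fft1 * fft2, n=self.output_dim, dim=-1)
    return out.sum(dim=1) if self.sum_pool else out  # assumes (batch, locations, dim) input
```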
internals.md

A Tour of PyTorch Internals (Part I)

The fundamental unit in PyTorch is the Tensor. This post will serve as an overview of how we implement Tensors in PyTorch, such that the user can interact with them from the Python shell. In particular, we want to answer four main questions:

  1. How does PyTorch extend the Python interpreter to define a Tensor type that can be manipulated from Python code?
  2. How does PyTorch wrap the C libraries that actually define the Tensor's properties and methods?
  3. How does PyTorch's cwrap tool work to generate code for Tensor methods?
  4. How does PyTorch's build system take all of these components and compile them into a workable application?

Extending the Python Interpreter

PyTorch defines a new package torch. In this post we will consider the ._C module. This module is known as an "extension module" - a Python module written in C. Such modules allow us to define new built-in object types (e.g. the Tensor) and to call C/C++ functions.
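One quick way to see that torch._C is compiled rather than pure Python is to check where the module object comes from (an illustrative snippet, not from the post):

```python
import torch._C

# An extension module's __file__ points at a compiled shared library
# (e.g. _C.cpython-*.so on Linux), not at a .py source file.
print(torch._C.__file__)
```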

@ririw
ririw / RWA.py
Created Apr 16, 2017
Recurrent Weighted Average RNN in pytorch
# An implementation of "Machine Learning on Sequential Data Using a Recurrent Weighted Average" using pytorch
# https://arxiv.org/pdf/1703.01253.pdf
#
#
# This is an RNN (recurrent neural network) type that uses a weighted average of values seen in the
# past, rather than a separate running state.
#
# Check the test code at the bottom for a usage example, where you can compare its performance
# against LSTM and GRU on a classification task from the paper. It handily beats both :)
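Concretely, the recurrence keeps running numerator and denominator accumulators instead of an ordinary hidden state. A minimal single-step sketch of the idea (parameter names are mine; the paper and the gist also use a numerical-stability trick for the running exponentials, omitted here):

```python
import torch

def rwa_step(x, h, n, d, W_u, W_g, W_a):
    xh = torch.cat([x, h], dim=-1)
    u = x @ W_u                # "value" computed from the input alone
    g = torch.tanh(xh @ W_g)   # gate on the value
    a = torch.exp(xh @ W_a)    # attention weight for this timestep
    n = n + u * g * a          # running weighted sum of gated values
    d = d + a                  # running sum of weights
    h = torch.tanh(n / d)      # hidden state = squashed weighted average
    return h, n, d
```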
gumbel-softmax.py
import tensorflow as tf

def sample_gumbel(shape, eps=1e-20):
    """Sample from Gumbel(0, 1)."""
    U = tf.random_uniform(shape, minval=0, maxval=1)
    return -tf.log(-tf.log(U + eps) + eps)

def gumbel_softmax_sample(logits, temperature):
    """Draw a sample from the Gumbel-Softmax distribution."""
    y = logits + sample_gumbel(tf.shape(logits))
    return tf.nn.softmax(y / temperature)
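These two helpers are usually wrapped in a sampler that can optionally discretize on the forward pass while keeping soft gradients (the straight-through trick). A sketch of such a wrapper in the same TF1 style (mine; the rest of the gist may differ):

```python
def gumbel_softmax(logits, temperature, hard=False):
    y = gumbel_softmax_sample(logits, temperature)
    if hard:
        # forward pass: one-hot argmax; backward pass: gradients of the soft sample
        y_hard = tf.cast(tf.equal(y, tf.reduce_max(y, 1, keep_dims=True)), y.dtype)
        y = tf.stop_gradient(y_hard - y) + y
    return y
```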
NLTK_Stanford_2015-12-09.md

NLTK API to Stanford NLP Tools compiled on 2015-12-09

Stanford NER

With NLTK version 3.1 and the Stanford NER tool (2015-12-09 release), it is possible to hack StanfordNERTagger._stanford_jar to include the other .jar files that the new tagger needs.

First, set up the environment variables as instructed at https://github.com/nltk/nltk/wiki/Installing-Third-Party-Software
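With the environment variables in place, the hack amounts to appending the extra jars to the classpath string the tagger passes to Java. A sketch assuming the 2015-12-09 release is unpacked under /usr/local/stanford-ner (all paths and the extra jar are examples):

```python
from nltk.tag import StanfordNERTagger

st = StanfordNERTagger(
    '/usr/local/stanford-ner/classifiers/english.all.3class.distsim.crf.ser.gz',
    '/usr/local/stanford-ner/stanford-ner.jar',
)
# The hack: extend the jar path with extra dependencies (':' separator on Linux).
st._stanford_jar += ':/usr/local/stanford-ner/lib/slf4j-api.jar'
print(st.tag('Rami Eid is studying at Stony Brook University in NY'.split()))
```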

@erikbern
erikbern / install-tensorflow.sh
Last active Oct 28, 2019
Installing TensorFlow on EC2
# Note: this is not really a bash script (some of the steps require a reboot).
# I named it .sh just so GitHub applies the correct syntax highlighting.
#
# This is also available as an AMI in us-east-1 (virginia): ami-cf5028a5
#
# The CUDA part is mostly based on this excellent blog post:
# http://tleyden.github.io/blog/2014/10/25/cuda-6-dot-5-on-aws-gpu-instance-running-ubuntu-14-dot-04/
# Install various packages
sudo apt-get update
@VikingPenguinYT
VikingPenguinYT / dropout_bayesian_approximation_tensorflow.py
Last active Oct 29, 2019
Implementing Dropout as a Bayesian Approximation in TensorFlow
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.contrib.distributions import Bernoulli

class VariationalDense:
    """Variational Dense Layer Class"""
    def __init__(self, n_in, n_out, model_prob, model_lam):
        self.model_prob = model_prob
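The preview stops early, but the payoff of the technique is worth stating: dropout kept active at test time makes the network an approximate Bayesian model, so repeated stochastic forward passes yield a predictive mean and an uncertainty estimate. A framework-agnostic sketch of that inference loop (mine, not the gist's):

```python
import numpy as np

def mc_dropout_predict(stochastic_predict_fn, x, n_samples=100):
    # stochastic_predict_fn is assumed to run the model WITH dropout still enabled
    preds = np.stack([stochastic_predict_fn(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)  # predictive mean and spread
```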