@karpathy
karpathy / min-char-rnn.py
Last active Oct 14, 2019
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
View min-char-rnn.py
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
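
The preview cuts off here; the gist continues by building the character/index lookup tables used for encoding and sampling. A sketch of that next step (the gist itself is written for Python 2, but the comprehensions below are equivalent):

char_to_ix = { ch: i for i, ch in enumerate(chars) }  # character -> integer index
ix_to_char = { i: ch for i, ch in enumerate(chars) }  # integer index -> character
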
View NLTK_Stanford_2015-12-09.md

NLTK API to Stanford NLP Tools compiled on 2015-12-09

Stanford NER

With NLTK version 3.1 and the Stanford NER tool 2015-12-09, it is possible to hack StanfordNERTagger._stanford_jar so that it includes the other .jar files that the new tagger requires.

First, set up the environment variables as instructed at https://github.com/nltk/nltk/wiki/Installing-Third-Party-Software
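
A minimal sketch of the hack (the install path and extra jar names below are illustrative, and it assumes CLASSPATH and STANFORD_MODELS are already exported per the wiki):

from nltk.tag import StanfordNERTagger

tagger = StanfordNERTagger('english.all.3class.distsim.crf.ser.gz')
# Extend the private jar path so the java call also sees the tagger's dependencies:
tagger._stanford_jar = ':'.join([
    tagger._stanford_jar,
    '/opt/stanford-ner-2015-12-09/lib/joda-time.jar',  # hypothetical paths
    '/opt/stanford-ner-2015-12-09/lib/jollyday.jar',
])
print(tagger.tag('Rami Eid is studying at Stony Brook University in NY'.split()))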

@wangruohui
wangruohui / Install NVIDIA Driver and CUDA.md
Last active Oct 14, 2019
Install NVIDIA Driver and CUDA on Ubuntu / CentOS / Fedora Linux OS
View Install NVIDIA Driver and CUDA.md
@application2000
application2000 / how-to-install-latest-gcc-on-ubuntu-lts.txt
Last active Oct 13, 2019
How to install latest gcc on Ubuntu LTS (12.04, 14.04, 16.04)
View how-to-install-latest-gcc-on-ubuntu-lts.txt
These commands are based on an askubuntu answer: http://askubuntu.com/a/581497
To install gcc-6 (gcc-6.1.1), I had to take the additional steps shown below.
USE THOSE COMMANDS AT YOUR OWN RISK. I SHALL NOT BE RESPONSIBLE FOR ANYTHING.
ABSOLUTELY NO WARRANTY.
If you are still reading, let's carry on with the code.
sudo apt-get update && \
sudo apt-get install build-essential software-properties-common -y && \
sudo add-apt-repository ppa:ubuntu-toolchain-r/test -y && \
sudo apt-get update && \
sudo apt-get install gcc-6 g++-6 -y
# assumed final steps (second update plus the gcc-6/g++-6 install); the preview cut the chain off
View gumbel-softmax.py
import tensorflow as tf  # TF 1.x API, matching the original gist

def sample_gumbel(shape, eps=1e-20):
    """Sample from Gumbel(0, 1) by inverse transform: -log(-log(U))."""
    U = tf.random_uniform(shape, minval=0, maxval=1)
    return -tf.log(-tf.log(U + eps) + eps)

def gumbel_softmax_sample(logits, temperature):
    """Draw a sample from the Gumbel-Softmax distribution."""
    y = logits + sample_gumbel(tf.shape(logits))
    return tf.nn.softmax(y / temperature)
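
The preview omits the gist's straight-through variant; a sketch consistent with the two helpers above (TF 1.x, where hard=True returns a one-hot sample in the forward pass while gradients flow through the soft sample):

def gumbel_softmax(logits, temperature, hard=False):
    y = gumbel_softmax_sample(logits, temperature)
    if hard:
        # Straight-through estimator: one-hot forward, soft-sample backward.
        y_hard = tf.cast(tf.equal(y, tf.reduce_max(y, 1, keep_dims=True)), y.dtype)
        y = tf.stop_gradient(y_hard - y) + y
    return y
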
@ririw
ririw / RWA.py
Created Apr 16, 2017
Recurrent Weighted Average RNN in pytorch
View RWA.py
# An implementation of "Machine Learning on Sequential Data Using a Recurrent Weighted Average" using pytorch
# https://arxiv.org/pdf/1703.01253.pdf
#
#
# This is an RNN (recurrent neural network) type that uses a weighted average of values seen in the past,
# rather than a separate running state.
#
# Check the test code at the bottom for an example of usage, where you can compare its performance
# against LSTM and GRU, at a classification task from the paper. It handily beats both the LSTM and
# GRU :)
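
The class itself is not shown in this preview; the following is my own minimal sketch of an RWA cell following the paper's equations (the names are mine, and the running-max trick the paper uses to keep exp from overflowing is omitted for brevity):

import torch

class RWACell(torch.nn.Module):
    """Sketch: the hidden state is a weighted average of candidates z_t with
    unnormalized weights exp(a_t), per Ostmeyer & Cowell (2017)."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.u = torch.nn.Linear(input_size, hidden_size)                 # candidate features
        self.g = torch.nn.Linear(input_size + hidden_size, hidden_size)   # gate
        self.a = torch.nn.Linear(input_size + hidden_size, hidden_size, bias=False)  # attention logits
        self.s0 = torch.nn.Parameter(torch.randn(hidden_size))            # learned initial state

    def forward(self, x):  # x: (seq_len, batch, input_size)
        batch = x.size(1)
        h = torch.tanh(self.s0).expand(batch, self.hidden_size)
        n = x.new_zeros(batch, self.hidden_size)   # running numerator
        d = x.new_zeros(batch, self.hidden_size)   # running denominator
        for x_t in x:
            xh = torch.cat([x_t, h], dim=1)
            z = self.u(x_t) * torch.tanh(self.g(xh))   # candidate value
            w = torch.exp(self.a(xh))                  # its unnormalized weight
            n = n + z * w
            d = d + w
            h = torch.tanh(n / d)                      # weighted average so far
        return h

For example, RWACell(8, 32)(torch.randn(20, 4, 8)) returns the (4, 32) final state after 20 steps.
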
@arundasan91
arundasan91 / CaffeInstallation.md
Created Apr 2, 2016
Caffe Installation Tutorial for beginners
View CaffeInstallation.md

Caffe

Freshly brewed!

With the availability of huge amounts of data for research and powerful machines to run your code on, Machine Learning and Neural Networks are gaining a foothold again and impacting us more than ever in our everyday lives. With huge players like Google open-sourcing parts of their Machine Learning systems, like the TensorFlow software library for numerical computation, there are many options for someone interested in starting off with Machine Learning/Neural Nets to choose from. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center (BVLC) and its contributors, comes into play with a fresh cup of coffee.

Installation Instructions (Ubuntu 14.04 Trusty)

The following section is divided into two parts. Caffe's documentation suggests …

View internals.md

A Tour of PyTorch Internals (Part I)

The fundamental unit in PyTorch is the Tensor. This post will serve as an overview of how we implement Tensors in PyTorch, such that the user can interact with them from the Python shell. In particular, we want to answer four main questions:

  1. How does PyTorch extend the Python interpreter to define a Tensor type that can be manipulated from Python code?
  2. How does PyTorch wrap the C libraries that actually define the Tensor's properties and methods?
  3. How does PyTorch cwrap work to generate code for Tensor methods?
  4. How does PyTorch's build system take all of these components to compile and generate a workable application?

Extending the Python Interpreter

PyTorch defines a new package torch. In this post we will consider the ._C module. This module is known as an "extension module" - a Python module written in C. Such modules allow us to define new built-in object types (e.g. the Tensor) and to call C/C++ functions.
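
As a quick illustration (mine, not the post's), you can check from the Python shell that torch._C is compiled code rather than Python source:

import torch

# An extension module's __file__ points at a shared library (.so / .pyd), not a .py file:
print(torch._C.__file__)
# And the Tensor type defined through it is a regular Python type object:
print(type(torch.Tensor))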

@VikingPenguinYT
VikingPenguinYT / dropout_bayesian_approximation_tensorflow.py
Last active Sep 19, 2019
Implementing Dropout as a Bayesian Approximation in TensorFlow
View dropout_bayesian_approximation_tensorflow.py
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.contrib.distributions import Bernoulli  # TF 1.x API

class VariationalDense:
    """Variational Dense Layer Class"""
    def __init__(self, n_in, n_out, model_prob, model_lam):
        # model_prob: keep-probability of the dropout Bernoulli mask;
        # model_lam: regularization strength for the layer's loss term
        self.model_prob = model_prob
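
For context, the Gal and Ghahramani result this gist implements says that keeping dropout active at test time yields approximate posterior samples, so prediction means averaging many stochastic forward passes. A sketch (the names sess, y_out, and feed are hypothetical, in the gist's TF 1.x session style):

import numpy as np

def mc_predict(sess, y_out, feed, n_samples=100):
    # Repeated stochastic passes through the dropout network:
    samples = np.stack([sess.run(y_out, feed_dict=feed) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)  # predictive mean and uncertainty
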
@vadimkantorov
vadimkantorov / compact_bilinear_pooling.py
Last active Sep 16, 2019
Compact Bilinear Pooling in PyTorch using the new FFT support
View compact_bilinear_pooling.py
import torch

class CompactBilinearPooling(torch.nn.Module):
    def __init__(self, input_dim1, input_dim2, output_dim, sum_pool=True):
        super(CompactBilinearPooling, self).__init__()
        self.output_dim = output_dim
        self.sum_pool = sum_pool
        # Dense Count Sketch projection from random hash indices rand_h and random +/-1 signs rand_s:
        generate_sketch_matrix = lambda rand_h, rand_s, input_dim, output_dim: torch.sparse.FloatTensor(
            torch.stack([torch.arange(input_dim, out=torch.LongTensor()), rand_h.long()]),
            rand_s.float(), [input_dim, output_dim]).to_dense()
        self.sketch1 = torch.nn.Parameter(generate_sketch_matrix(
            torch.randint(output_dim, size=(input_dim1,)),
            2 * torch.randint(2, size=(input_dim1,)) - 1, input_dim1, output_dim), requires_grad=False)
        self.sketch2 = torch.nn.Parameter(generate_sketch_matrix(
            torch.randint(output_dim, size=(input_dim2,)),
            2 * torch.randint(2, size=(input_dim2,)) - 1, input_dim2, output_dim), requires_grad=False)
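
The preview ends before the forward pass. A sketch of the missing step, assuming inputs laid out as (batch, H, W, channels) and written against the modern torch.fft API (the 2019 gist would have used the older torch.rfft):

    def forward(self, x1, x2):
        # Pointwise products of rFFTs implement circular convolution of the two
        # count sketches, which is the compact bilinear approximation:
        fft1 = torch.fft.rfft(x1 @ self.sketch1, dim=-1)
        fft2 = torch.fft.rfft(x2 @ self.sketch2, dim=-1)
        cbp = torch.fft.irfft(fft1 * fft2, n=self.output_dim, dim=-1) * self.output_dim
        return cbp.sum(dim=[1, 2]) if self.sum_pool else cbp  # optional sum over H, W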