Vikas Raunak (vyraun)

@vyraun
vyraun / ec2_gpu_theano.md
Created August 28, 2016 05:49 — forked from stevetjoa/ec2_gpu_theano.md
Setup: Amazon AWS EC2 with NVIDIA CUDA GPU and Theano

2015 Sep 22: nvidia-352 seems to have disappeared from the repos.

  1. Spot request Ubuntu Server 14.04; add storage; login with ssh
  2. sudo apt-get update
  3. sudo apt-get upgrade
  4. sudo apt-get dist-upgrade
  5. sudo apt-get install git gcc g++ gfortran build-essential python-dev python-pip python-matplotlib python-scipy libhdf5-dev linux-image-extra-virtual
  6. sudo pip install --upgrade pip
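
The preview cuts off after step 6; presumably the remaining steps install the NVIDIA driver, the CUDA toolkit, and Theano itself. Once that is done, a quick way to confirm that Theano actually sees the GPU is a small script along these lines (a sketch, not part of the original gist; the check_gpu.py filename is only illustrative):

# run as: THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python check_gpu.py
import numpy
import theano
import theano.tensor as T

x = theano.shared(numpy.random.rand(1000000).astype(theano.config.floatX))
f = theano.function([], T.exp(x))
f()
print(theano.config.device)  # should report 'gpu' once CUDA and the driver are set up
# True if any op in the compiled graph was placed on the GPU
print(any('Gpu' in type(node.op).__name__ for node in f.maker.fgraph.toposort()))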
@vyraun
vyraun / QRN.md
Created September 24, 2016 19:01 — forked from shagunsodhani/QRN.md
Notes for "Query Regression Networks for Machine Comprehension" Paper

Query Regression Networks for Machine Comprehension

Introduction

  • Machine Comprehension (MC) - given a natural language story (a sequence of sentences), answer a natural language question about it.
  • End-To-End MC - cannot use external language resources like dependency parsers; the only supervision during training is the correct answer.
  • Query Regression Network (QRN) - a variant of the Recurrent Neural Network (RNN).
  • Link to the paper

Related Work

@vyraun
vyraun / ec2_caffe
Created October 1, 2016 10:56 — forked from baraldilorenzo/ec2_caffe
Install Caffe on Amazon EC2 g2.2xlarge instance
#! /bin/bash
# Upgrade
sudo aptitude update
sudo aptitude full-upgrade -y
# Install CUDA
wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_6.5-14_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1404_6.5-14_amd64.deb
sudo aptitude update
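
The preview stops right after adding NVIDIA's CUDA repository; the rest of the script presumably installs CUDA, Caffe's dependencies, and Caffe itself. If that build includes the Python bindings, a quick sanity check that the GPU path works is the following (pycaffe calls only; not part of the original script):

import caffe              # requires the pycaffe bindings built by the full script
caffe.set_device(0)       # select the first GPU on the g2.2xlarge instance
caffe.set_mode_gpu()      # fails loudly here if CUDA or the driver are not set up
print('Caffe GPU mode OK')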
@vyraun
vyraun / tf_lstm.py
Created October 4, 2016 15:04 — forked from siemanko/tf_lstm.py
Simple implementation of LSTM in Tensorflow in 50 lines (+ 130 lines of data generation and comments)
"""Short and sweet LSTM implementation in Tensorflow.
Motivation:
When TensorFlow was released, adding RNNs was a bit of a hack - it required
building separate graphs for every number of timesteps and was a bit obscure
to use. Since then, the TF devs have added things like `dynamic_rnn`, `scan` and `map_fn`.
Currently the APIs are decent, but none of the tutorials I am aware of make
the best use of the new APIs.
Advantages of this implementation:
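
The rest of the gist is not shown in this preview. As a rough idea of the approach it advocates, here is a minimal sketch (not the gist's code) of letting `dynamic_rnn` unroll an LSTM at run time, assuming a TensorFlow 1.x-style graph API; INPUT_SIZE and HIDDEN_SIZE are illustrative placeholders:

import numpy as np
import tensorflow as tf

INPUT_SIZE, HIDDEN_SIZE = 2, 64
inputs = tf.placeholder(tf.float32, (None, None, INPUT_SIZE))   # (batch, time, features)
cell = tf.nn.rnn_cell.BasicLSTMCell(HIDDEN_SIZE)
# dynamic_rnn unrolls over the time axis at run time, so a single graph
# handles sequences of any length - no per-length graph building needed
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(outputs, {inputs: np.random.rand(4, 10, INPUT_SIZE)})
    print(out.shape)   # (4, 10, 64): one hidden vector per timestep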
@vyraun
vyraun / dynamic_tsp.py
Created October 12, 2016 16:12 — forked from mlalevic/dynamic_tsp.py
Simple Python implementation of a dynamic programming algorithm for the Traveling Salesman Problem
import itertools, math

def length(x, y):
    # assumed helper (not shown in the preview): Euclidean distance between two points
    return math.hypot(x[0] - y[0], x[1] - y[1])

def solve_tsp_dynamic(points):
    # calc all pairwise lengths
    all_distances = [[length(x, y) for y in points] for x in points]
    # initial value - just the distance from 0 to every other point, keeping track of the edges used
    A = {(frozenset([0, idx + 1]), idx + 1): (dist, [0, idx + 1]) for idx, dist in enumerate(all_distances[0][1:])}
    cnt = len(points)
    for m in range(2, cnt):
        B = {}
        for S in [frozenset(C) | {0} for C in itertools.combinations(range(1, cnt), m)]:
            for j in S - {0}:
                # cheapest path ending at j that visits exactly the cities in S (assumed completion of the truncated preview)
                B[(S, j)] = min((A[(S - {j}, k)][0] + all_distances[k][j], A[(S - {j}, k)][1] + [j]) for k in S if k != 0 and k != j)
        A = B
    res = min((A[d][0] + all_distances[0][d[1]], A[d][1]) for d in A)
    return res[1]
@vyraun
vyraun / min-char-rnn.py
Created October 24, 2016 09:27 — forked from karpathy/min-char-rnn.py
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
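
The preview ends at the data loading; the heart of the gist is the vanilla RNN recurrence. A paraphrased sketch of just that forward step follows (parameter names mirror the original gist; the full code also computes the cross-entropy loss, backpropagation, and sampling):

import numpy as np

hidden_size, vocab_size = 100, 65                        # illustrative sizes
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01    # input -> hidden
Whh = np.random.randn(hidden_size, hidden_size) * 0.01   # hidden -> hidden
Why = np.random.randn(vocab_size, hidden_size) * 0.01    # hidden -> output
bh, by = np.zeros((hidden_size, 1)), np.zeros((vocab_size, 1))

def step(x, h_prev):
    """One RNN step: x is a one-hot column vector for the current character."""
    h = np.tanh(Wxh @ x + Whh @ h_prev + bh)   # new hidden state
    y = Why @ h + by                           # unnormalized scores for the next character
    p = np.exp(y) / np.sum(np.exp(y))          # softmax over the vocabulary
    return h, p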
@vyraun
vyraun / notebook-xkcd-style-plot
Created November 25, 2016 07:15 — forked from juhasch/notebook-xkcd-style-plot
IPython notebook containing an xkcd-style plot
{
"metadata": {
"name": "xkcd-style-plot"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
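
The JSON above is just the nbformat-3 header of the notebook; the plotting code itself is not shown in this preview. Recent matplotlib versions ship a built-in xkcd mode, so a minimal stand-alone sketch that produces the same kind of plot (the notebook may instead use its own styling code) is:

import numpy as np
import matplotlib.pyplot as plt

with plt.xkcd():                       # hand-drawn, xkcd-style rendering
    x = np.linspace(0, 10, 100)
    plt.plot(x, np.sin(x))
    plt.title('an xkcd-style plot')
    plt.savefig('xkcd.png')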
@vyraun
vyraun / _Instructions.md
Created December 1, 2016 13:51 — forked from genekogan/_Instructions.md
instructions for generating a style transfer animation from a video

Instructions for making a Neural-Style movie

The following instructions are for creating your own animations using the style transfer technique described by Gatys, Ecker, and Bethge, and implemented by Justin Johnson. To see an example of such an animation, see this video of Alice in Wonderland re-styled by 17 paintings.

Setting up the environment

The easiest way to set up the environment is to simply load Samim's pre-built Terminal.com snap or use another cloud service like Amazon EC2. Unfortunately, the g2.2xlarge GPU instances cost $0.99 per hour, and depending on the parameters selected, it may take 10-15 minutes to produce a 512px-wide image, so it can cost $2-3 to generate 1 second of video at 12 fps.
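
As a back-of-the-envelope check on those numbers: one second of video at 12 fps is 12 frames, and at roughly 12 minutes per 512px frame that is about 2.4 GPU-hours, which at $0.99 per hour comes to roughly $2.40 - consistent with the $2-3 estimate above.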

If you do load the

@vyraun
vyraun / process_word2vec.lua
Created December 5, 2016 08:52 — forked from ili3p/process_word2vec.lua
Reading a 5.3GB text file with LuaJIT
local words = torch.load(opt.words) -- it's a tds.Hash
local word2vec = torch.FloatTensor(opt.vocabsz, opt.dim)
local buffsz = 2^13 -- == 8k
local f = io.input(opt.input)
local done = 0
local unk
-- read huge word2vec file with 2,196,017 lines
while true do
local lines, leftover = f:read(buffsz, '*line')
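
The Lua snippet is cut off mid-loop in this preview. For readers who want the same idea in Python, here is a rough NumPy analogue (not a translation of the Lua code): stream the large word2vec text file once and fill a preallocated float32 matrix. The words, vocab_size and dim arguments mirror the opt.* fields above and are assumptions here:

import numpy as np

def load_word2vec_text(path, words, vocab_size, dim):
    """words: dict mapping token -> row index; returns a (vocab_size, dim) float32 matrix."""
    vecs = np.zeros((vocab_size, dim), dtype=np.float32)
    with open(path, 'r', encoding='utf-8', errors='ignore') as f:
        for line in f:                              # buffered, line-by-line streaming read
            parts = line.rstrip().split(' ')
            word, values = parts[0], parts[1:]
            idx = words.get(word)
            if idx is not None and len(values) == dim:
                vecs[idx] = np.asarray(values, dtype=np.float32)
    return vecs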
@vyraun
vyraun / readme.md
Created December 5, 2016 20:04 — forked from baraldilorenzo/readme.md
VGG-16 pre-trained model for Keras

VGG16 model for Keras

This is the Keras model of the 16-layer network used by the VGG team in the ILSVRC-2014 competition.

It has been obtained by directly converting the Caffe model provided by the authors.

Details about the network architecture can be found in the following arXiv paper:

Very Deep Convolutional Networks for Large-Scale Image Recognition

K. Simonyan, A. Zisserman
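
The preview stops at the paper reference. For a quick way to try VGG-16 without the hand-converted weights file, recent Keras versions ship a pretrained VGG16 under keras.applications; the following self-contained sketch is an alternative to the gist's own loading code, not part of it (the cat.jpg filename is only an example):

import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from keras.preprocessing import image

model = VGG16(weights='imagenet')                        # downloads the ImageNet weights
img = image.load_img('cat.jpg', target_size=(224, 224))  # VGG-16 expects 224x224 inputs
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])               # top-3 ImageNet classes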