Soroush Mehri (soroushmehr)

  • Microsoft Research (prev. Maluuba and MILA-UdeM)
  • Vancouver, BC
@soroushmehr
soroushmehr / jupyter_gym_render.py
Created July 25, 2019 17:26 — forked from thomelane/jupyter_gym_render.py
OpenAI Gym render in Jupyter
import gym
from IPython import display
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline

env = gym.make('Breakout-v0')
env.reset()
img = plt.imshow(env.render(mode='rgb_array'))  # only call this once
for _ in range(100):
    # loop body completed (the preview was truncated): update the same figure
    # in place instead of opening a new one, then take a random step
    img.set_data(env.render(mode='rgb_array'))
    display.display(plt.gcf())
    display.clear_output(wait=True)
    action = env.action_space.sample()
    env.step(action)
@soroushmehr
soroushmehr / pytorch-simple-rnn.py
Created August 10, 2018 21:05 — forked from spro/pytorch-simple-rnn.py
PyTorch RNN training example
import torch
import torch.nn as nn
from torch.nn import functional as F
from torch.autograd import Variable
from torch import optim
import numpy as np
import math, random
# Generating a noisy multi-sin wave
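The preview stops at the data-generation comment. A minimal sketch of what such a noisy multi-sin generator could look like, using the numpy and random imports above; the function names (multi_sin, noisy, sample_sequence) are made up here and are not the forked gist's actual code:

def multi_sin(x, freq=60.0):
    # two sine harmonics, averaged so the clean signal stays in [-1, 1]
    return (np.sin(2 * np.pi * x / freq) + np.sin(4 * np.pi * x / freq)) / 2.0

def noisy(y, noise_range=(-0.05, 0.05)):
    # add uniform noise on top of the clean signal
    return y + np.random.uniform(noise_range[0], noise_range[1], size=y.shape)

def sample_sequence(length=100):
    # one training sequence with a random phase offset
    offset = random.randint(0, length)
    x = np.arange(length)
    return noisy(multi_sin(x + offset)).astype(np.float32)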
import types
import tensorflow as tf
import numpy as np
# Expressions are represented as lists of lists,
# in lisp style -- the symbol name is the head (first element)
# of the list, and the arguments follow.
# add an expression to an expression list, recursively if necessary.
def add_expr_to_list(exprlist, expr):
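The preview ends at the signature. Based only on the comment above, a hypothetical sketch of such a helper (the behavior is assumed, not taken from the gist): recurse into nested sub-expressions first, then append the expression itself.

def add_expr_to_list_sketch(exprlist, expr):
    # hypothetical behaviour inferred from the comment above:
    # recurse into list-style sub-expressions (the arguments), then
    # append the expression itself if it is not already in the list
    if isinstance(expr, list):
        for arg in expr[1:]:  # expr[0] is the symbol name (head)
            add_expr_to_list_sketch(exprlist, arg)
    if expr not in exprlist:
        exprlist.append(expr)
    return exprlist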
@soroushmehr
soroushmehr / TestDense&HW.ipynb
Created June 1, 2017 20:10 — forked from mickypaganini/TestDense&HW.ipynb
Test Dense and Highway layers in Keras using the same inputs as in our lwtnn-test-highway.cxx
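The notebook itself is not rendered in this listing. A minimal sketch of the idea, assuming tf.keras and a hand-rolled Highway layer (y = t*h + (1-t)*x, Srivastava et al. 2015); the fixed random input below is a placeholder, not the lwtnn test vectors:

import numpy as np
import tensorflow as tf
from tensorflow import keras

class Highway(keras.layers.Layer):
    # Highway layer: y = t * h + (1 - t) * x, with t a sigmoid "transform" gate
    def __init__(self, activation="relu", transform_bias=-2.0, **kwargs):
        super().__init__(**kwargs)
        self.activation = keras.activations.get(activation)
        self.transform_bias = transform_bias

    def build(self, input_shape):
        dim = int(input_shape[-1])
        self.h = keras.layers.Dense(dim)
        self.t = keras.layers.Dense(
            dim, activation="sigmoid",
            bias_initializer=keras.initializers.Constant(self.transform_bias))

    def call(self, x):
        h = self.activation(self.h(x))
        t = self.t(x)
        return t * h + (1.0 - t) * x

x = np.random.rand(3, 10).astype("float32")   # placeholder inputs
print(keras.layers.Dense(10, activation="relu")(x).shape)
print(Highway()(x).shape)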

tmux cheatsheet

As configured in my dotfiles.

start new:

tmux

start new with session name:

tmux new -s myname

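The preview is cut off here; a hedged continuation with the usual next entries of a tmux cheatsheet (standard tmux commands, not necessarily the forked file's wording):

attach to a named session:

tmux attach -t myname

list sessions:

tmux ls

kill a session:

tmux kill-session -t myname
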
@soroushmehr
soroushmehr / latency.txt
Created January 31, 2017 16:21 — forked from jboner/latency.txt
Latency Numbers Every Programmer Should Know
Latency Comparison Numbers
--------------------------
L1 cache reference                            0.5 ns
Branch mispredict                               5 ns
L2 cache reference                              7 ns                  14x L1 cache
Mutex lock/unlock                              25 ns
Main memory reference                         100 ns                  20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy                3,000 ns        3 us
Send 1K bytes over 1 Gbps network          10,000 ns       10 us
Read 4K randomly from SSD*                150,000 ns      150 us      ~1GB/sec SSD
@soroushmehr
soroushmehr / mnist_pytorch.py
Created January 23, 2017 17:57 — forked from kastnerkyle/mnist_pytorch.py
MNIST test in PyTorch, performance still TBD
# Author: Kyle Kastner
# License: BSD 3-Clause
import torch as th
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader
import time
import numpy as np
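The preview shows only the imports. A hedged sketch of the kind of training loop these imports suggest, with random placeholder data instead of real MNIST and modern PyTorch style (no deprecated Variable wrapper); this is not kastnerkyle's actual code:

# fake "MNIST-shaped" data so the sketch is self-contained
X = th.randn(1024, 784)
y = th.randint(0, 10, (1024,))
loader = DataLoader(TensorDataset(X, y), batch_size=128, shuffle=True)

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = optim.Adam(model.parameters(), lr=1e-3)

start = time.time()
for epoch in range(3):
    for xb, yb in loader:
        opt.zero_grad()
        loss = F.cross_entropy(model(xb), yb)
        loss.backward()
        opt.step()
    print(epoch, loss.item())
print("took %.1fs" % (time.time() - start))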
@soroushmehr
soroushmehr / min-char-rnn.py
Created September 2, 2016 19:36 — forked from karpathy/min-char-rnn.py
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
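The preview cuts off right after loading the data; the immediate next step in any character-level model is the char-to-index mapping, e.g. (a sketch consistent with the lines above):

print('data has %d characters, %d unique.' % (data_size, vocab_size))
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}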
"""
This is a batched LSTM forward and backward pass
"""
import numpy as np
import code
class LSTM:
@staticmethod
def init(input_size, hidden_size, fancy_forget_bias_init = 3):
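The preview stops at the init signature. A hedged sketch of what such an initializer typically does for a batched LSTM that keeps all gate weights plus a bias row in a single matrix; the exact layout here is assumed, not copied from the gist:

def lstm_init_sketch(input_size, hidden_size, fancy_forget_bias_init=3):
    # single parameter matrix: first row holds the biases, the remaining rows the
    # input and hidden weights; columns cover the four gates (layout assumed)
    WLSTM = np.random.randn(input_size + hidden_size + 1, 4 * hidden_size) / np.sqrt(input_size + hidden_size)
    WLSTM[0, :] = 0  # biases start at zero
    if fancy_forget_bias_init > 0:
        # positive forget-gate bias encourages remembering early in training
        WLSTM[0, hidden_size:2 * hidden_size] = fancy_forget_bias_init
    return WLSTM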
import math
import random

def get_random_neighbour(state):
    neighbour = [house[:] for house in state]  # deep copy of the five houses
    i = random.randint(0, 4)
    j = random.choice(list(range(0, i)) + list(range(i + 1, 4)))  # another house, != i
    attr_idx = random.randint(0, 4)
    # the preview is truncated here; a typical ending swaps the chosen attribute
    # between the two houses and returns the modified copy (assumed, not the gist's code)
    neighbour[i][attr_idx], neighbour[j][attr_idx] = neighbour[j][attr_idx], neighbour[i][attr_idx]
    return neighbour