Quan (Andy) Gan BarclayII

  • AWS Shanghai
  • Shanghai
@BarclayII
BarclayII / run_dagger.py
Last active May 25, 2017 05:48
Berkeley CS 294 Homework 1
#!/usr/bin/env python
"""
Code to load an expert policy and generate roll-out data for behavioral cloning.
Example usage:
python run_expert.py experts/Humanoid-v1.pkl Humanoid-v1 --render \
--num_rollouts 20
Author of this script and included expert policies: Jonathan Ho (hoj@openai.com)
"""
@BarclayII
BarclayII / bug.py
Created June 3, 2017 22:08
MXNet bug?
import argparse
import mxnet as mx
from mxnet import nn
from mxnet.contrib import autograd
import numpy as np
import time
import os
ngpu = 1   # number of GPUs
nz = 100   # size of the latent z vector (DCGAN convention)
@BarclayII
BarclayII / maintf.py
Last active July 22, 2017 22:57
A GAN that generates audio. Probably does not work
import tensorflow as TF
import modeltf as model
import numpy as NP
import numpy.random as RNG
import h5py
import argparse
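The local modeltf module is not shown here. As a rough sketch of the objective such a script trains (TensorFlow 1.x API; `d_real` and `d_fake` are hypothetical discriminator logits on real and generated audio, not names from the gist):
# Non-saturating GAN losses over hypothetical discriminator logits.
d_loss = TF.reduce_mean(
    TF.nn.sigmoid_cross_entropy_with_logits(logits=d_real, labels=TF.ones_like(d_real)) +
    TF.nn.sigmoid_cross_entropy_with_logits(logits=d_fake, labels=TF.zeros_like(d_fake)))
g_loss = TF.reduce_mean(
    TF.nn.sigmoid_cross_entropy_with_logits(logits=d_fake, labels=TF.ones_like(d_fake)))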
@BarclayII
BarclayII / mmd.py
Last active October 3, 2017 02:39
MMD (maximum mean discrepancy)
import torch as T
import numpy as NP
### Norm of vector difference
# Checker
def normdiff_assert(X, Y, normdiff):
norm = normdiff(X, Y)
for i in range(X.size()[2]):
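The checker is cut off by the preview. For reference, a minimal sketch of the squared-MMD statistic itself with an RBF kernel (my helper, not part of the gist):
def rbf_mmd2(X, Y, sigma=1.0):
    # Biased estimate of MMD^2(X, Y) = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    # for samples X (n, d) and Y (m, d) under a Gaussian RBF kernel.
    def k(A, B):
        d2 = ((A.unsqueeze(1) - B.unsqueeze(0)) ** 2).sum(2)
        return T.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()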
@BarclayII
BarclayII / reinforce.py
Created November 1, 2017 03:59
PyTorch's `reinforce()` function sucks, so I keep the alternative solution here
import torch as T
import numpy as np
# Deprecated stochastic-Variable API: reinforce() stores a reward that
# backward() then turns into the REINFORCE gradient estimate.
x = T.autograd.Variable(T.randn(5, 8), requires_grad=True)
p = T.nn.functional.softmax(x)
y = p.multinomial()              # sample one action per row
y.reinforce(T.ones(y.size()))    # unit reward for every sample
y.backward()
d = x.grad.data.clone().numpy()  # keep this gradient for comparison
x.grad.data.zero_()
# The alternative: differentiate the log-probability of the samples directly.
logp = T.nn.functional.log_softmax(x)
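With unit rewards the two approaches should agree: backpropagating the negated log-probabilities of the sampled actions reproduces the reinforce() gradient. A hedged completion of the comparison (these lines are my guess at how the gist finishes, not its code):
loss = -logp.gather(1, y.detach()).sum()    # log-probs of the sampled actions
loss.backward()
assert np.allclose(x.grad.data.numpy(), d)  # matches the reinforce() gradient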
@BarclayII
BarclayII / wgan.py
Last active May 17, 2018 21:42
WGAN in MXNet
# Modified from example/autograd/{data,dcgan}.py to make it standalone.
import argparse
import mxnet as mx
from mxnet import nn
from mxnet.contrib import autograd
import numpy as np
import matplotlib.pyplot as PL
import time
import os
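The mxnet.contrib.autograd API this gist targets has since been removed. For reference, a minimal sketch of the WGAN critic update such a script implements, written with PyTorch instead (`critic`, `opt`, `real`, `fake` are hypothetical):
import torch

def critic_step(critic, opt, real, fake, clip=0.01):
    # The critic maximizes E[f(real)] - E[f(fake)], i.e. minimizes the
    # negation; clipping afterwards enforces the Lipschitz constraint
    # from the WGAN paper.
    opt.zero_grad()
    loss = critic(fake.detach()).mean() - critic(real).mean()
    loss.backward()
    opt.step()
    for p in critic.parameters():
        p.data.clamp_(-clip, clip)
    return loss.item()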
@BarclayII
BarclayII / mips.py
Created June 21, 2018 08:44
Maximum Inner Product Search with Asymmetric LSH + Random Projection
# References:
# https://arxiv.org/pdf/1405.5869.pdf
# https://arxiv.org/abs/1507.05910
import numpy as np
from scipy.spatial.distance import cosine
# Rescaling and appending new components to "normalize" data vectors
X = np.random.randn(10000, 100)  # synthetic data vectors (n, d)
Xn = np.sqrt((X ** 2).sum(1))    # Euclidean norm of each vector
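The transform continues roughly as follows (a sketch of the Sign-ALSH-style preprocessing; m, U, and the 64-bit code length are tuning knobs I picked, so check the referenced papers for the recommended constants):
m, U = 3, 0.75
scale = U / Xn.max()                      # shrink data so every norm <= U < 1
Xs = X * scale
norms2 = (Xs ** 2).sum(1, keepdims=True)  # squared norms ||x||^2
aug = np.concatenate([0.5 - norms2 ** (2 ** i) for i in range(m)], axis=1)
P = np.concatenate([Xs, aug], axis=1)     # asymmetric data transform P(x)

def transform_query(q):
    # Queries are only normalized; their appended slots are zeros, so the
    # inner product <Q(q), P(x)> equals <q, x> up to the global scaling.
    q = q / np.linalg.norm(q)
    return np.concatenate([q, np.zeros(m)])

# Sign random projections: Hamming-nearest codes now approximate the
# maximum inner product over the original vectors.
R = np.random.randn(P.shape[1], 64)
data_codes = np.sign(P @ R)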
@BarclayII
BarclayII / spmvtest.py
Last active November 5, 2018 21:36
PyTorch gather-scatter/SPMV benchmarks
import torch
import time
N = 10000      # number of nodes
D = 50         # feature dimension
E = 500000     # number of edges
T = 10         # timing iterations
t_gather = 0   # accumulated gather time (s)
t_scatter = 0  # accumulated scatter time (s)
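The timing loops themselves are cut off; a sketch of what a gather/scatter benchmark over E random edges between N nodes presumably looks like (my reconstruction, using index_select and index_add_):
h = torch.randn(N, D)            # node features
src = torch.randint(0, N, (E,))  # edge endpoints
dst = torch.randint(0, N, (E,))

for _ in range(T):
    t0 = time.time()
    msg = h.index_select(0, src)  # gather: pull source features onto edges
    t_gather += time.time() - t0
    t0 = time.time()
    out = torch.zeros(N, D).index_add_(0, dst, msg)  # scatter: sum into nodes
    t_scatter += time.time() - t0

print('gather: %.4fs  scatter: %.4fs' % (t_gather / T, t_scatter / T))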
@BarclayII
BarclayII / parser.py
Last active January 16, 2019 15:59
PyTorch neural parser based on DyNet implementation
'''
Original implementation
https://github.com/clab/dynet_tutorial_examples/blob/master/tutorial_parser.ipynb
The code structure and variable names are kept similar for easier cross-reference.
Not for serious business, just for some comparison between PyTorch and DyNet
(and I still prefer PyTorch)
'''
import torch as T
@BarclayII
BarclayII / prof.py
Last active March 8, 2019 03:10
Batched GEMM profiling for transformers in PyTorch
# coding: utf-8
import torch
import time
import pandas as pd
import tqdm
B, L, N, H, W = 64, 50, 10, 256, 3
print('warming up')
for _ in tqdm.trange(10):
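    # (The loop body is truncated by the gist preview; a plausible warm-up is
    # a throwaway batched matmul -- my assumption, not the gist's code.)
    torch.bmm(torch.randn(B, L, H, device='cuda'),
              torch.randn(B, H, L, device='cuda'))

# A sketch of the measurement itself: synchronize around the timed region so
# the asynchronous CUDA kernels are actually counted. The helper name and the
# operand shapes below are my assumptions.
def time_bmm(a, b, iters=100):
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(iters):
        torch.bmm(a, b)
    torch.cuda.synchronize()
    return (time.time() - t0) / iters

a = torch.randn(B, L, H, device='cuda')
b = torch.randn(B, H, L, device='cuda')
print('bmm %dx(%d,%d)@(%d,%d): %.6fs' % (B, L, H, H, L, time_bmm(a, b)))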