Jerry Zhi-Yang He (hzyjerry)

@unixpickle
unixpickle / maml.py
Created October 12, 2019 19:08
MAML in PyTorch
import torch
import torch.nn.functional as F

def maml_grad(model, inputs, outputs, lr, batch=1):
    """
    Update a model's gradient using MAML.
    The gradient will point in the direction that
    improves the total loss across all inner-loop
    mini-batches.
    """
    ...
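The preview cuts off before the function body. As a rough illustration only (not unixpickle's implementation), a first-order MAML-style gradient accumulation along the lines of the docstring could look like the sketch below; the task format, the loss (F.mse_loss), and the helper name first_order_maml_grad are assumptions.

# Rough sketch of the first-order MAML idea described in the docstring above;
# not the gist's actual implementation. Each task supplies a support batch for
# the inner-loop step and a query batch for the outer loss (an assumption).
import copy

import torch
import torch.nn.functional as F

def first_order_maml_grad(model, tasks, inner_lr):
    """Accumulate first-order MAML gradients into model.parameters().grad."""
    for support_x, support_y, query_x, query_y in tasks:
        adapted = copy.deepcopy(model)                      # inner-loop copy
        inner_loss = F.mse_loss(adapted(support_x), support_y)
        grads = torch.autograd.grad(inner_loss, adapted.parameters())
        with torch.no_grad():                               # one SGD step on the copy
            for p, g in zip(adapted.parameters(), grads):
                p -= inner_lr * g
        outer_loss = F.mse_loss(adapted(query_x), query_y)  # post-adaptation loss
        outer_grads = torch.autograd.grad(outer_loss, adapted.parameters())
        for p, g in zip(model.parameters(), outer_grads):   # first-order approximation
            p.grad = g.clone() if p.grad is None else p.grad + g

After the call, an outer-loop optimizer.step() on the original model applies the meta-update.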
@phillipi
phillipi / biggan_slerp
Last active October 8, 2023 01:25
Slerp through the BigGAN latent space
# to be used in conjunction with the functions defined here:
# https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/biggan_generation_with_tf_hub.ipynb
# party parrot transformation
noise_seed_A = 3 # right facing
noise_seed_B = 31 # left facing
num_interps = 14
truncation = 0.2
category = 14
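The preview stops at the interpolation settings. A generic spherical interpolation (slerp) between two latent vectors, as typically paired with settings like these, is sketched below; the helper name slerp and the commented usage of truncated_z_sample from the linked Colab are assumptions, not necessarily the gist's exact code.

# Generic slerp between two latent vectors; not necessarily the gist's exact
# helper. The commented usage assumes the truncated_z_sample helper defined in
# the linked Colab notebook.
import numpy as np

def slerp(a, b, t):
    """Spherically interpolate between vectors a and b at fraction t in [0, 1]."""
    a_unit = a / np.linalg.norm(a)
    b_unit = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * a + t * b  # nearly parallel: fall back to lerp
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# e.g. noise_A = truncated_z_sample(1, truncation, noise_seed_A)[0]
#      noise_B = truncated_z_sample(1, truncation, noise_seed_B)[0]
#      frames = [slerp(noise_A, noise_B, t) for t in np.linspace(0, 1, num_interps)]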
@HarshTrivedi
HarshTrivedi / pad_packed_demo.py
Last active March 2, 2024 16:49 — forked from Tushar-N/pad_packed_demo.py
Minimal tutorial on packing (pack_padded_sequence) and unpacking (pad_packed_sequence) sequences in pytorch.
import torch
from torch import LongTensor
from torch.nn import Embedding, LSTM
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
## We want to run LSTM on a batch of 3 character sequences ['long_str', 'tiny', 'medium']
#
# Step 1: Construct Vocabulary
# Step 2: Load indexed data (list of instances, where each instance is list of character indices)
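Only Steps 1 and 2 of the walkthrough appear in this preview. A condensed version of the full pack/unpack flow on the three example sequences, rewritten here with current PyTorch APIs (no Variable wrapper) rather than copied from the gist, looks roughly like this:

# Condensed sketch of the pack/unpack flow on the three sequences above,
# using current PyTorch APIs; a paraphrase, not the gist's exact code.
import torch
from torch.nn import Embedding, LSTM
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

seqs = ['long_str', 'tiny', 'medium']

# Step 1: construct the vocabulary (index 0 reserved for padding)
vocab = ['<pad>'] + sorted({ch for s in seqs for ch in s})

# Step 2: load indexed data (list of character-index lists)
indexed = [[vocab.index(ch) for ch in s] for s in seqs]

# Step 3: pad to the longest sequence and record the true lengths
lengths = torch.tensor([len(s) for s in indexed])
padded = torch.zeros(len(seqs), int(lengths.max()), dtype=torch.long)
for i, s in enumerate(indexed):
    padded[i, :len(s)] = torch.tensor(s)

# Step 4: embed, pack, run the LSTM, then unpack
embed = Embedding(num_embeddings=len(vocab), embedding_dim=4, padding_idx=0)
lstm = LSTM(input_size=4, hidden_size=5, batch_first=True)
packed = pack_padded_sequence(embed(padded), lengths, batch_first=True,
                              enforce_sorted=False)
packed_out, (h, c) = lstm(packed)
output, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
print(output.shape)  # torch.Size([3, 8, 5]): batch x max_len x hidden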
@8enmann
8enmann / reinstall.sh
Last active October 12, 2021 06:07
Reinstall NVIDIA drivers without opengl Ubuntu 16.04 GTX 1080ti
# Download installers
mkdir ~/Downloads/nvidia
cd ~/Downloads/nvidia
wget https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda_8.0.61_375.26_linux-run
wget http://us.download.nvidia.com/XFree86/Linux-x86_64/384.59/NVIDIA-Linux-x86_64-384.59.run
sudo chmod +x NVIDIA-Linux-x86_64-384.59.run
sudo chmod +x cuda_8.0.61_375.26_linux-run
./cuda_8.0.61_375.26_linux-run -extract=~/Downloads/nvidia/
# Uninstall old stuff
sudo apt-get --purge remove nvidia-*

NIPS Notes

Tutorials

Variational Inference

  • Simple intro by Blei, mostly going over the review paper by Jordan.
  • Later introduces SVI (Stochastic VI) as a remedy that makes VI tractable on large datasets.
  • Reviews black-box inference (assumption-free VI): http://www.jmlr.org/proceedings/papers/v33/ranganath14.pdf
  • The key idea is exchanging the gradient and the expectation in the VI objective. Evaluating the expectation in closed form requires an exponential-family assumption, so pushing the gradient inside the expectation removes that requirement: the method becomes stochastic, and the samples give unbiased gradient estimates that satisfy the Robbins-Monro conditions. However, the variance is very large, so further tricks are needed (a toy estimator is sketched below).
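To make the exchanged-gradient idea concrete, here is a toy score-function (REINFORCE-style) estimate of the ELBO gradient for a diagonal-Gaussian variational family; the target log_joint is a placeholder, not a model from the tutorial.

# Toy score-function (black-box VI) estimate of the ELBO gradient for a
# diagonal-Gaussian q(z | mu, log_sigma). The target log_joint is a
# placeholder standard normal, not the tutorial's model.
import torch

def log_joint(z):
    return -0.5 * (z ** 2).sum(dim=-1)  # stand-in for unnormalized log p(x, z)

def elbo_score_function_grad(mu, log_sigma, num_samples=1000):
    """Monte Carlo estimate of grad_{mu, log_sigma} ELBO via the score function."""
    sigma = log_sigma.exp()
    with torch.no_grad():
        z = mu + sigma * torch.randn(num_samples, mu.shape[-1])  # samples from q
    q = torch.distributions.Normal(mu, sigma)
    log_q = q.log_prob(z).sum(dim=-1)
    weight = (log_joint(z) - log_q).detach()      # f(z) = log p(x,z) - log q(z)
    surrogate = (weight * log_q).mean()           # grad = E_q[f(z) * grad log q(z)]
    return torch.autograd.grad(surrogate, [mu, log_sigma])  # unbiased, high variance

mu = torch.zeros(2, requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)
g_mu, g_log_sigma = elbo_score_function_grad(mu, log_sigma)

The "further tricks" mentioned above (Rao-Blackwellization, control variates) exist precisely to reduce the variance of this kind of estimator.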
@awesomebytes
awesomebytes / alienware15r3_install_ubuntu14.04.05.md
Created December 8, 2016 09:37
Install Ubuntu 14.04 on Alienware 15 R3 instructions

How to install Ubuntu 14.04.05 on Alienware 15 R3

Tiny guide to install Ubuntu 14.04.05 on a brand new Alienware 15 R3.

Let Windows 10 install

Just click next, next, next, filling in your data.

You should get a BIOS update alert from the Alienware Update widget. If not, click the ^ arrow for hidden icons at the bottom right of the taskbar, right-click the Alienware Update icon, and click Check for Updates.

@awjuliani
awjuliani / ContextualPolicy.ipynb
Last active October 11, 2022 21:27
A Policy-Gradient algorithm that solves Contextual Bandit problems.
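Since the notebook itself does not render in this export, here is a generic stand-in: a minimal policy-gradient agent for a toy contextual bandit written in PyTorch, not the notebook's own code; the environment and hyperparameters are assumptions.

# Minimal policy-gradient agent for a toy contextual bandit; a generic
# stand-in for the unrenderable notebook above, not its actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_contexts, n_actions = 3, 4
reward_probs = torch.rand(n_contexts, n_actions)   # unknown to the agent

policy = nn.Linear(n_contexts, n_actions)          # one logit per action, per context
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(2000):
    context = torch.randint(n_contexts, (1,))
    one_hot = F.one_hot(context, n_contexts).float()
    dist = torch.distributions.Categorical(logits=policy(one_hot))
    action = dist.sample()
    reward = torch.bernoulli(reward_probs[context, action])   # 0/1 payoff
    loss = (-dist.log_prob(action) * reward).mean()           # REINFORCE update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

A reward baseline would reduce variance, but this bare update is typically enough to learn the best action for each context in a toy problem like this.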
@noelboss
noelboss / git-deployment.md
Last active March 7, 2024 02:21
Simple automated GIT Deployment using Hooks

Simple automated GIT Deployment using GIT Hooks

Here are the simple steps needed to create a deployment from your local GIT repository to a server based on this in-depth tutorial.

How it works

You are developing in a working copy on your local machine, let's say on the master branch. Most of the time, people push code to a remote server like github.com or gitlab.com and then pull or export it to a production server. Or you use a service like deepl.io that acts upon a web hook to run the deployment for you.
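Deployment of this kind is typically driven by a post-receive hook in a bare repository on the server. The gist presumably uses a shell script; to stay in Python here, an equivalent hook could look like the sketch below, where the repository path, work tree, and branch name are assumptions.

#!/usr/bin/env python3
# Sketch of a post-receive deployment hook written in Python (hooks are more
# commonly shell scripts). The repo path, work tree, and branch are assumptions.
import subprocess
import sys

GIT_DIR = "/home/deploy/site.git"    # bare repository that receives pushes
WORK_TREE = "/var/www/site"          # directory the site is served from
BRANCH = "master"

for line in sys.stdin:               # git feeds "<old-sha> <new-sha> <ref>" lines
    old_sha, new_sha, ref = line.split()
    if ref == f"refs/heads/{BRANCH}":
        subprocess.run(
            ["git", f"--git-dir={GIT_DIR}", f"--work-tree={WORK_TREE}",
             "checkout", "-f", BRANCH],
            check=True,
        )
        print(f"Deployed {BRANCH} ({new_sha[:7]}) to {WORK_TREE}")

Save it as hooks/post-receive inside the bare repository and make it executable; pushing to the branch then checks the new revision out into the work tree.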

@wangruohui
wangruohui / Install NVIDIA Driver and CUDA.md
Last active March 21, 2024 16:55
Install NVIDIA Driver and CUDA on Ubuntu / CentOS / Fedora Linux OS
@protrolium
protrolium / terminal-gif.md
Last active February 15, 2024 09:09
convert images to GIF in Terminal

Install ImageMagick

brew install ImageMagick

Pull specific region of frames from video file w/ ffmpeg

ffmpeg -ss 14:55 -i video.mkv -t 5 -s 480x270 -f image2 %04d.png

  • -ss 14:55 gives the timestamp where I want FFmpeg to start, as a duration string.
  • -t 5 says how much I want FFmpeg to decode, using the same duration syntax as for -ss.
  • -s 480x270 tells FFmpeg to resize the video output to 480 by 270 pixels.
  • -f image2 selects the output format, a series of still images — make sure there are leading zeros in filename.
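The preview ends before the frames are assembled into a GIF. One way to finish the pipeline is to drive ImageMagick programmatically; since the gist's own assembly command is not shown here, the frame delay and output name below are assumptions.

# Assemble the extracted frames into a GIF by calling ImageMagick's convert;
# the gist's own assembly command is not shown in this preview, so the delay
# and output filename are assumptions.
import glob
import subprocess

frames = sorted(glob.glob("*.png"))   # leading zeros keep the frame order correct
subprocess.run(["convert", "-delay", "5", "-loop", "0", *frames, "output.gif"],
               check=True)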