stephenhardy / git-clearHistory
Created April 26, 2013 22:14
Steps to clear out the history of a git/github repository
# Remove the existing history
rm -rf .git

# Re-create the repository from the current content only
git init
git add .
git commit -m "Initial commit"

# Push to the GitHub remote, overwriting its history
git remote add origin git@github.com:<YOUR ACCOUNT>/<YOUR REPOS>.git
git push -u --force origin master
craffel / draw_neural_net.py
Created January 10, 2015 04:59
Draw a neural network diagram with matplotlib!
import matplotlib.pyplot as plt

def draw_neural_net(ax, left, right, bottom, top, layer_sizes):
    '''
    Draw a neural network cartoon using matplotlib.

    :usage:
        >>> fig = plt.figure(figsize=(12, 12))
        >>> draw_neural_net(fig.gca(), .1, .9, .1, .9, [4, 7, 2])
    '''
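The function body is cut off above; a minimal sketch of the drawing loop under the same coordinate convention (one circle per unit, full bipartite connections between adjacent layers; an illustration, not the verbatim gist):

import matplotlib.pyplot as plt

def draw_neural_net_sketch(ax, left, right, bottom, top, layer_sizes):
    v_spacing = (top - bottom) / float(max(layer_sizes))
    h_spacing = (right - left) / float(len(layer_sizes) - 1)
    # Nodes: one circle per unit, each layer centered vertically.
    for n, layer_size in enumerate(layer_sizes):
        layer_top = v_spacing * (layer_size - 1) / 2. + (top + bottom) / 2.
        for m in range(layer_size):
            ax.add_artist(plt.Circle((n * h_spacing + left, layer_top - m * v_spacing),
                                     v_spacing / 4., color='w', ec='k', zorder=4))
    # Edges: connect every unit to every unit in the next layer.
    for n, (size_a, size_b) in enumerate(zip(layer_sizes[:-1], layer_sizes[1:])):
        layer_top_a = v_spacing * (size_a - 1) / 2. + (top + bottom) / 2.
        layer_top_b = v_spacing * (size_b - 1) / 2. + (top + bottom) / 2.
        for m in range(size_a):
            for o in range(size_b):
                ax.add_line(plt.Line2D([n * h_spacing + left, (n + 1) * h_spacing + left],
                                       [layer_top_a - m * v_spacing, layer_top_b - o * v_spacing],
                                       c='k'))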
import tensorflow as tf
from tensorflow.python.framework import ops
import numpy as np

# Define custom py_func which takes also a grad op as argument
# (TensorFlow 1.x API):
def py_func(func, inp, Tout, stateful=True, name=None, grad=None):
    # Need to generate a unique name to avoid duplicates:
    rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1E+8))
    tf.RegisterGradient(rnd_name)(grad)
    g = tf.get_default_graph()
    with g.gradient_override_map({"PyFunc": rnd_name}):
        return tf.py_func(func, inp, Tout, stateful=stateful, name=name)
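Continuing the snippet above, a hypothetical usage example: wrapping a numpy function together with a hand-written gradient (the _square names are illustrative, not from the gist excerpt):

def mysquare(x, name=None):
    def _square(a):
        return np.square(a)

    def _square_grad(op, grad):
        # d(x^2)/dx = 2x; op.inputs[0] is the original input tensor.
        return grad * 2.0 * op.inputs[0]

    with ops.name_scope(name, "MySquare", [x]) as scope:
        # tf.py_func returns a list of output tensors; take the first.
        return py_func(_square, [x], [tf.float32], name=scope, grad=_square_grad)[0]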
# author: Aaditya Prakash
# nvidia-smi does not show the full command, when it was launched, or its RAM usage.
# ps does, but you need PIDs for that.
# lsof /dev/nvidia* gives PIDs, but only for the user invoking it.
# usage:
#   python programs_on_gpu.py
# Sample Output
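A minimal sketch of the join those comments describe, assuming nvidia-smi's compute-apps query and ps are available on PATH (column handling is illustrative):

import subprocess

def gpu_processes():
    # Ask nvidia-smi for the PIDs currently using the GPU and their memory.
    out = subprocess.check_output(
        ['nvidia-smi', '--query-compute-apps=pid,used_memory',
         '--format=csv,noheader']).decode().strip()
    for line in out.splitlines():
        pid, used_mem = [s.strip() for s in line.split(',')]
        # ps can then recover the full command line for each PID.
        cmd = subprocess.check_output(['ps', '-p', pid, '-o', 'args=']).decode().strip()
        print(pid, used_mem, cmd)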
rtqichen / pytorch_weight_norm.py
Last active May 11, 2023 06:58
Pytorch weight normalization - works for all nn.Module (probably)
## Weight norm is now added to pytorch as a pre-hook, so use that instead :)
import torch
import torch.nn as nn
from torch.nn import Parameter
from functools import wraps

class WeightNorm(nn.Module):
    append_g = '_g'
    append_v = '_v'
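As the note at the top says, recent PyTorch ships this as a built-in utility; a minimal example of that API:

import torch.nn as nn
from torch.nn.utils import weight_norm

layer = weight_norm(nn.Linear(20, 40), name='weight')
# The layer now carries weight_g (magnitude) and weight_v (direction);
# the effective weight is recomputed from them before each forward pass.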
MInner / gpu_profile.py
Created September 12, 2017 16:11
A script to generate per-line GPU memory usage trace. For more meaningful results set `CUDA_LAUNCH_BLOCKING=1`.
import datetime
import linecache
import os
import py3nvml
import torch
print_tensor_sizes = True
last_tensor_sizes = set()
gpu_profile_fn = f'{datetime.datetime.now():%d-%b-%y-%H:%M:%S}-gpu_mem_prof.txt'
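For a self-contained picture of how a per-line trace hooks in: a sketch using sys.settrace with torch.cuda.memory_allocated() in place of the gist's NVML queries (a simplification, not the gist's exact accounting), reusing gpu_profile_fn from above:

import linecache
import sys
import torch

def gpu_profile(frame, event, arg):
    # Log allocated GPU memory before each traced line executes.
    if event == 'line' and torch.cuda.is_available():
        mem_mb = torch.cuda.memory_allocated() / 1024 ** 2
        src = linecache.getline(frame.f_code.co_filename, frame.f_lineno).rstrip()
        with open(gpu_profile_fn, 'a') as f:
            f.write(f'{mem_mb:8.1f} MiB {frame.f_code.co_filename}:{frame.f_lineno} {src}\n')
    return gpu_profile  # keep tracing nested frames

sys.settrace(gpu_profile)  # line events fire for frames entered after this call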
peteflorence / pytorch_bilinear_interpolation.md
Last active June 30, 2024 01:26
Bilinear interpolation in PyTorch, and benchmarking vs. numpy

Here's a simple implementation of bilinear interpolation on tensors using PyTorch.

I wrote this up since I ended up learning a lot about the options for interpolation in both the numpy and PyTorch ecosystems. Beyond interpolation, it's also a nice case study in how PyTorch can magically run very numpy-like code on the GPU (and, by the way, do autodiff for you too).

For interpolation in PyTorch, this open issue calls for more interpolation features. There is now an nn.functional.grid_sample() feature, but at first it didn't look like what I needed (we'll come back to it later).

In particular I wanted to take an image, W x H x C, and sample it many times at different random locations. Note also that this is different from upsampling, which exhaustively samples and doesn't give us the flexibility to sample at arbitrary locations.
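Here's roughly what that sampling looks like as plain tensor arithmetic; a minimal sketch assuming an (H, W, C) float image and query points inside its bounds (names are illustrative):

import torch

def bilinear_sample(im, x, y):
    # im: (H, W, C) float tensor; x, y: (N,) query coordinates in pixel space.
    x0 = torch.floor(x).long().clamp(0, im.shape[1] - 2)
    y0 = torch.floor(y).long().clamp(0, im.shape[0] - 2)
    x1, y1 = x0 + 1, y0 + 1

    # Four corner values, each (N, C).
    Ia, Ib = im[y0, x0], im[y1, x0]
    Ic, Id = im[y0, x1], im[y1, x1]

    # Bilinear weights from the fractional offsets.
    wa = ((x1.float() - x) * (y1.float() - y)).unsqueeze(1)
    wb = ((x1.float() - x) * (y - y0.float())).unsqueeze(1)
    wc = ((x - x0.float()) * (y1.float() - y)).unsqueeze(1)
    wd = ((x - x0.float()) * (y - y0.float())).unsqueeze(1)
    return wa * Ia + wb * Ib + wc * Ic + wd * Id

Because this is all indexing and arithmetic on tensors, moving it to the GPU is just a matter of putting im, x, and y on a CUDA device.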

import sys
from collections import OrderedDict

PY2 = sys.version_info[0] == 2
_internal_attrs = {'_backend', '_parameters', '_buffers', '_backward_hooks', '_forward_hooks', '_forward_pre_hooks', '_modules'}

class Scope(object):
    def __init__(self):
        self._modules = OrderedDict()
Miladiouss / Select_CIFAR10_Classes.py
Last active October 27, 2023 03:41
Create PyTorch datasets and dataset loaders for a subset of CIFAR10 classes.
import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import Dataset, DataLoader
import numpy as np
# Transformations
RC = transforms.RandomCrop(32, padding=4)
RHF = transforms.RandomHorizontalFlip()
RVF = transforms.RandomVerticalFlip()
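The excerpt stops at the transforms; a hedged sketch of the class-selection step the title describes, using torchvision's .targets attribute and a Subset (the helper below is an assumption, not the gist's code):

import numpy as np
from torch.utils.data import DataLoader, Subset
from torchvision.datasets import CIFAR10
import torchvision.transforms as transforms

def cifar10_classes(classes, train=True, root='./data'):
    # Hypothetical helper: keep only the samples whose label is in `classes`.
    ds = CIFAR10(root=root, train=train, download=True,
                 transform=transforms.ToTensor())
    idx = np.where(np.isin(np.array(ds.targets), classes))[0]
    return Subset(ds, idx.tolist())

train_loader = DataLoader(cifar10_classes([0, 1]), batch_size=64, shuffle=True)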
sajadn / data_dependent_weight_norm_init.py
Last active April 29, 2020 18:38
Data dependent initialization of weight norm layers (convolutional)
import torch
import torch.nn as nn
from torch import _weight_norm
from torch.nn.utils import weight_norm

def data_dependent_init(model, data_batch):
    def init_hook_(module, input, output):
        std, mean = torch.std_mean(output, dim=[0, 2, 3])
        g = getattr(module, 'weight_g')
        g.data = g.data / std.reshape((len(std), 1, 1, 1))
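The excerpt ends inside the hook; a hedged sketch of a driver that registers it on every weight-normed conv, fires it with one batch, and detaches it (the registration code is not shown in the gist, so this is an assumption):

import torch
import torch.nn as nn

def run_init(model, data_batch, hook):
    handles = [m.register_forward_hook(hook)
               for m in model.modules()
               if isinstance(m, nn.Conv2d) and hasattr(m, 'weight_g')]
    with torch.no_grad():
        model(data_batch)  # hooks fire here, rescaling each layer's weight_g
    for h in handles:
        h.remove()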