
Arun Mallya arunmallya

@arunmallya
arunmallya / parallel.py
Created Oct 22, 2018 — forked from thomwolf/parallel.py
Data Parallelism in PyTorch for modules and losses
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
## Created by: Hang Zhang, Rutgers University, Email: zhang.hang@rutgers.edu
## Modified by Thomas Wolf, HuggingFace Inc., Email: thomas@huggingface.co
## Copyright (c) 2017-2018
##
## This source code is licensed under the MIT-style license found in the
## LICENSE file in the root directory of this source tree
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"""Encoding Data Parallel"""
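The gist above parallelizes both the module and the loss computation. As a framework-free illustration (not the gist's API; all names here are made up), the pattern nn.DataParallel follows is scatter the batch, apply replicas, gather the outputs:

```python
# A minimal, framework-free sketch of the scatter/apply/gather pattern that
# nn.DataParallel (and this gist's criterion-parallel variant) follows.
# Names are illustrative only; plain lists stand in for tensors.

def scatter(batch, num_devices):
    """Split a batch (a list of samples) into one chunk per device."""
    chunk = (len(batch) + num_devices - 1) // num_devices
    return [batch[i:i + chunk] for i in range(0, len(batch), chunk)]

def parallel_apply(fn, chunks):
    """Run fn on each chunk; on real hardware each runs on its own GPU."""
    return [fn(c) for c in chunks]

def gather(outputs):
    """Concatenate per-device outputs back into one batch."""
    return [y for out in outputs for y in out]

# Example: a "model" that squares its inputs, run over 2 fake devices.
batch = [1, 2, 3, 4, 5]
chunks = scatter(batch, num_devices=2)
outputs = parallel_apply(lambda xs: [x * x for x in xs], chunks)
result = gather(outputs)
print(result)  # [1, 4, 9, 16, 25]
```

The gist's twist is that the criterion is also scattered, so each replica computes its own loss on its own device before the losses are gathered.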
@arunmallya
arunmallya / test-Copy1.ipynb
Created Aug 17, 2018
Normal operation on small tensor
@arunmallya
arunmallya / test.ipynb
Created Aug 17, 2018
Inplace normal_( ) bug
@arunmallya
arunmallya / modconv.py
Created Feb 20, 2018
Convolution with masking support.
import torch.nn as nn
from torch.nn.modules.utils import _pair

class ElementWiseConv2d(nn.Module):
    """Modified conv. Do we need a mask for biases too?"""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True,
                 mask_init='1s', mask_scale=1e-2,
                 threshold_fn='binarizer', threshold=None):
        super(ElementWiseConv2d, self).__init__()
        kernel_size = _pair(kernel_size)
        stride = _pair(stride)
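The snippet is cut off, but the core idea of an element-wise masked convolution is simple: multiply the weight by a mask before convolving. A hedged NumPy-only sketch (single channel, "valid" correlation; the function name and shapes are assumptions, not the gist's code):

```python
# Sketch of the core idea behind ElementWiseConv2d: apply an element-wise
# mask to the kernel weights before convolving. Plain NumPy, single channel.
import numpy as np

def masked_conv2d(x, weight, mask):
    """2-D valid cross-correlation with an element-wise weight mask."""
    w = weight * mask                      # zero out masked-off weights
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
weight = np.ones((3, 3))
mask = np.zeros((3, 3)); mask[1, 1] = 1.0   # keep only the centre tap
y = masked_conv2d(x, weight, mask)
# With only the centre weight active, the output is the 2x2 interior of x.
```

In the real module the mask is a learned parameter pushed toward {0, 1} by a thresholding function such as the Binarizer below.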
@arunmallya
arunmallya / binarizer.py
Created Feb 20, 2018
Autograd snippet for Binarizer
import torch

DEFAULT_THRESHOLD = 5e-3

class Binarizer(torch.autograd.Function):
    """Binarizes a real-valued tensor to {0, 1}."""

    def __init__(self, threshold=DEFAULT_THRESHOLD):
        super(Binarizer, self).__init__()
        self.threshold = threshold

    def forward(self, inputs):
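The forward pass is truncated above. What it computes is just a threshold; in NumPy (an illustrative sketch, assuming a strict greater-than comparison, which the gist may or may not use):

```python
# NumPy sketch of Binarizer's forward pass: threshold real values to {0, 1}.
# In the autograd.Function, backward would pass the incoming gradient
# straight through unchanged (a straight-through estimator).
import numpy as np

DEFAULT_THRESHOLD = 5e-3

def binarize(inputs, threshold=DEFAULT_THRESHOLD):
    return (inputs > threshold).astype(inputs.dtype)

w = np.array([-0.2, 0.001, 0.01, 0.5])
print(binarize(w))  # [0. 0. 1. 1.]
```

The straight-through backward is what lets the real-valued mask underneath keep receiving gradients even though the forward output is binary.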
@arunmallya
arunmallya / matches.txt
Created Jan 18, 2018
Files of CUBS test present in ImageNet train
American_Goldfinch_0062_31921.jpg -> n01531178_12730.JPEG
Indigo_Bunting_0063_11820.jpg -> n01537544_9540.JPEG
Blue_Jay_0053_62744.jpg -> n01580077_4622.JPEG
American_Goldfinch_0131_32911.jpg -> n01531178_17834.JPEG
Dark_Eyed_Junco_0057_68650.jpg -> n01534433_12777.JPEG
Indigo_Bunting_0051_12837.jpg -> n01537544_2126.JPEG
Dark_Eyed_Junco_0102_67402.jpg -> n01534433_9482.JPEG
American_Goldfinch_0012_32338.jpg -> n01531178_14394.JPEG
Laysan_Albatross_0033_658.jpg -> n02058221_16284.JPEG
Black_Footed_Albatross_0024_796089.jpg -> n02058221_6390.JPEG
arunmallya / boolean_idx.py
from __future__ import print_function
import torch
A = torch.rand(4, 4)
An = A.numpy()
idx = torch.ByteTensor([1, 0, 0, 1])
idxn = [True, False, False, True]
# Numpy indexing.
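The snippet cuts off before showing the comparison. On the NumPy side, a boolean mask over the first axis selects whole rows (illustrative values, not the gist's output):

```python
# NumPy boolean-mask indexing: a True/False mask over axis 0 selects rows.
import numpy as np

A = np.arange(16, dtype=float).reshape(4, 4)
mask = np.array([True, False, False, True])
print(A[mask])        # rows 0 and 3 of A
print(A[mask].shape)  # (2, 4)
```

The gist's point of comparison is that old PyTorch used ByteTensor (0/1) masks for the same operation, where NumPy uses a bool array.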
arunmallya / indexing_bug.py
from __future__ import print_function
import torch
A = torch.rand(4)
idx = torch.LongTensor([0, 3])
An = A.numpy()
idxn = idx.numpy()
# Numpy indexing.
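Again the snippet is truncated. The NumPy half of the comparison is integer-array (fancy) indexing, which gathers the elements at the given positions (illustrative values):

```python
# NumPy integer-array indexing: gather elements at the listed positions.
import numpy as np

A = np.array([10.0, 20.0, 30.0, 40.0])
idx = np.array([0, 3])
print(A[idx])  # [10. 40.]
```

Note that fancy indexing returns a copy, not a view, in both NumPy and PyTorch.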
arunmallya / forward.py
import torch.nn as nn
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        # Init stuff here
        self.X = nn.Sequential(
            nn.Linear(num_input_genes, num_tfs),
            nn.ReLU(),
            nn.BatchNorm1d(num_tfs)
        )
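What nn.Sequential does for the snippet above can be mimicked in a few lines of plain Python (a toy analogue, not PyTorch's implementation):

```python
# Toy, framework-free analogue of nn.Sequential: call each stage in order,
# feeding each stage's output into the next.
class Sequential:
    def __init__(self, *stages):
        self.stages = stages

    def __call__(self, x):
        for stage in self.stages:
            x = stage(x)
        return x

net = Sequential(lambda x: x + 1, lambda x: x * 2)
print(net(3))  # 8
```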
@arunmallya
arunmallya / bug.py
Created Jun 20, 2017
Exposes bug with DataParallel when using dicts as input
import torch
import torch.nn as nn
from torch.autograd import Variable

class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.net = nn.Linear(10, 2)

    def forward(self, inputs):
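The bug concerns how DataParallel's scatter step handles dict inputs: every batched value must be split along dim 0 and reassembled into one dict per device. A hedged pure-Python sketch of that requirement (plain lists stand in for tensors; the helper name is made up):

```python
# Sketch of what scattering a dict input should do: split each value along
# the batch dimension and build one dict per device.

def scatter_dict(inputs, num_devices):
    """Split each value of a dict of equal-length lists into per-device dicts."""
    n = len(next(iter(inputs.values())))
    chunk = (n + num_devices - 1) // num_devices
    return [
        {k: v[i:i + chunk] for k, v in inputs.items()}
        for i in range(0, n, chunk)
    ]

batch = {'x': [1, 2, 3, 4], 'y': [5, 6, 7, 8]}
print(scatter_dict(batch, 2))
# [{'x': [1, 2], 'y': [5, 6]}, {'x': [3, 4], 'y': [7, 8]}]
```

If scatter instead replicates the whole dict to every device, or splits the dict's keys rather than its values, each replica sees the wrong batch slice, which is the failure mode this gist exposes.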