Modar M. Alfadly ModarTensai

ModarTensai / color_toning.py
Created Mar 16, 2020
A color-toning transform made to match TorchVision implementations. Inspired by https://www.pyimagesearch.com/2014/06/30/super-fast-color-transfer-images/
import numpy as np
from PIL import Image
from skimage.color import lab2rgb, rgb2lab
class RandomColorToning:
    def __init__(self, scale_mean, scale_std, shift_mean, shift_std):
        self.scale_mean = scale_mean
        self.scale_std = scale_std
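The preview cuts off before the transform logic. Based on the description and the linked color-transfer post, the core idea is to randomly scale and shift each channel in LAB space. A minimal numpy sketch (the function name and the Gaussian sampling scheme are assumptions, not the gist's actual code):

```python
import numpy as np

def random_color_toning(lab_image, scale_mean, scale_std,
                        shift_mean, shift_std, rng=None):
    """Randomly scale and shift each LAB channel (hypothetical sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    scale = rng.normal(scale_mean, scale_std, size=3)  # one factor per channel
    shift = rng.normal(shift_mean, shift_std, size=3)  # one offset per channel
    return lab_image * scale + shift
```

In practice one would convert RGB to LAB with `rgb2lab`, apply the transform, then convert back with `lab2rgb`, as the gist's imports suggest.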
ModarTensai / autoencoder.py
Last active Mar 17, 2020
PyTorch fully convolutional autoencoder for arbitrary image sizes (including rectangles). It can also be used for a DCGAN.
from torch import nn
class Generator(nn.Module):
    def __init__(self, input_dim, image_shape, memory):
        super().__init__()
        self.memory = memory
        self.input_dim = input_dim
        self.image_shape = image_shape
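The preview stops at the constructor. To illustrate the DCGAN-generator use case the description mentions, here is a hypothetical miniature version with the same three arguments: project the latent vector to a small feature map whose aspect ratio matches the target, then double the spatial size with strided transposed convolutions. The architecture details (two upsampling stages, kernel sizes) are assumptions, not the gist's code:

```python
import torch
from torch import nn

class TinyGenerator(nn.Module):
    """Hypothetical DCGAN-style decoder for rectangular targets."""
    def __init__(self, input_dim, image_shape, memory=16):
        super().__init__()
        channels, height, width = image_shape
        assert height % 4 == 0 and width % 4 == 0  # two 2x upsampling stages
        self.h0, self.w0 = height // 4, width // 4
        self.memory = memory
        self.project = nn.Linear(input_dim, memory * self.h0 * self.w0)
        self.net = nn.Sequential(
            # kernel 4, stride 2, padding 1 exactly doubles each spatial dim
            nn.ConvTranspose2d(memory, memory, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(memory, channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        x = self.project(z).view(-1, self.memory, self.h0, self.w0)
        return self.net(x)
```

Because the initial feature map inherits the target's aspect ratio, rectangular outputs fall out naturally.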
ModarTensai / pytorch_experiment.py
Last active Mar 1, 2020
Modular experiment class to train a PyTorch module. It can easily be inherited or used with mixins to extend its functionality. We use it to train VGG on CIFAR-10.
import json
import math
from argparse import ArgumentParser
from contextlib import contextmanager
from pathlib import Path
import torch
import torchvision.transforms as T
from torch import nn
from torch.optim import lr_scheduler
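Only the imports survive in the preview. To show the kind of structure the description implies, here is a bare-bones sketch of an experiment class driven by `ArgumentParser`; the class name, `from_args`, and `run` are illustrative assumptions, not the gist's API:

```python
from argparse import ArgumentParser

class Experiment:
    """Hypothetical minimal experiment runner; subclass or mix in to extend."""
    def __init__(self, epochs=1):
        self.epochs = epochs

    @classmethod
    def from_args(cls, argv=None):
        # build an experiment from command-line style arguments
        parser = ArgumentParser()
        parser.add_argument('--epochs', type=int, default=1)
        args = parser.parse_args(argv)
        return cls(epochs=args.epochs)

    def run(self, step):
        # call the supplied training step once per epoch
        for epoch in range(self.epochs):
            step(epoch)
```

A real training loop would replace `step` with forward/backward passes over CIFAR-10 batches and hook in `lr_scheduler`, matching the imports above.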
ModarTensai / larc.py
Last active Jan 15, 2020
Layer-wise Adaptive Rate Control (LARC) in PyTorch. It is LARS with clipping support in addition to scaling.
class LARC:
    """Layer-wise Adaptive Rate Control.

    LARC is LARS that supports clipping along with scaling:
    https://arxiv.org/abs/1708.03888

    This implementation is inspired by:
    https://github.com/NVIDIA/apex/blob/master/apex/parallel/LARC.py

    See also:
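The docstring is cut off before the update rule. The core of LARC/LARS is a per-layer learning rate proportional to the ratio of weight norm to gradient norm; LARC's clipping mode additionally caps it at the global learning rate. A minimal sketch of that computation (function name and defaults are assumptions):

```python
import torch

def larc_local_lr(param, grad, lr, trust_coef=0.001, clip=True, eps=1e-8):
    """Hypothetical sketch of LARC's per-layer learning-rate scaling."""
    w_norm = param.norm()
    g_norm = grad.norm()
    if w_norm == 0 or g_norm == 0:
        return lr  # fall back to the global learning rate
    # LARS: scale the step by ||w|| / ||g||, times a trust coefficient
    adaptive = trust_coef * w_norm / (g_norm + eps)
    if clip:
        # LARC's clipping mode: never exceed the global learning rate
        return min(adaptive.item(), lr)
    return adaptive.item()  # pure LARS: scaling only
```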
ModarTensai / expression.py
import torch
from torch import nn
from torch import functional as F
class Expression:
    def __init__(self, out=None, **units):
        self.out = out
        self.terms = {}
        self.coeffs = {}
ModarTensai / grid_search.py
Created Dec 4, 2019
Coarse-to-fine grid search
from argparse import Namespace
import torch
def grid_search(objective, *bounds, density=10, eps=1e-5, max_steps=None):
    """Perform coarse-to-fine grid search for the minimum objective.

    >>> def f(x, y):
    ...     x = x + 0.5
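The doctest is truncated. To make the idea concrete, here is a minimal one-dimensional sketch of coarse-to-fine search (the gist itself handles multiple `bounds` and a `max_steps` cap; this simplified version is an assumption):

```python
def coarse_to_fine_search(f, lo, hi, density=10, eps=1e-5):
    """Minimize f on [lo, hi] by repeatedly zooming into the best grid cell."""
    best = lo
    while hi - lo > eps:
        step = (hi - lo) / (density - 1)
        # evaluate f on a coarse grid and keep the best point
        best = min((lo + i * step for i in range(density)), key=f)
        # shrink the interval to the cell around the best point
        lo, hi = max(lo, best - step), min(hi, best + step)
    return best
```

Each pass shrinks the search interval by roughly `2 / (density - 1)`, so the cost grows only logarithmically in the final precision `eps`.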
ModarTensai / covariance.py
def cov(m, rowvar=True, inplace=False, unbiased=True):
    """Estimate the covariance matrix for the given data.

    Args:
        m: Variables and observations data tensor (accepts batches).
        rowvar: Whether rows are variables and columns are observations.
        inplace: Whether to subtract the variable means in place or not.
        unbiased: Whether to use the unbiased estimation or not.

    Returns:
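The body is truncated after the docstring. A plausible implementation of the documented signature (a sketch, not the gist's actual code; the `inplace` option is omitted for brevity):

```python
import torch

def cov_sketch(m, rowvar=True, unbiased=True):
    """Covariance estimate matching the documented signature (sketch)."""
    if not rowvar:
        m = m.transpose(-1, -2)  # make rows the variables
    n = m.size(-1)  # number of observations
    centered = m - m.mean(dim=-1, keepdim=True)  # subtract variable means
    denom = n - 1 if unbiased else n  # Bessel's correction when unbiased
    return centered @ centered.transpose(-1, -2) / denom
```

Using batched matrix multiplication for the outer product is what lets the same code accept batches of data, as the docstring promises.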
ModarTensai / unravel_index.py
import torch


def unravel_index(index, shape):
    """Convert a flat row-major index into one coordinate per dimension."""
    out = []
    for dim in reversed(shape):
        out.append(index % dim)
        index = index // dim
    return tuple(reversed(out))
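Since `%` and `//` work on plain integers as well as tensors, the row-major logic is easy to check by hand (repeating the helper so the example is self-contained):

```python
def unravel_index(index, shape):
    # peel off the fastest-varying dimension first, then carry the rest
    out = []
    for dim in reversed(shape):
        out.append(index % dim)
        index = index // dim
    return tuple(reversed(out))

# flat index 7 in a 3x4 row-major grid sits at row 1, column 3
print(unravel_index(7, (3, 4)))  # → (1, 3)
```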
ModarTensai / l0_sparsity.py
from math import log
import torch
from torch import nn
class L0Sparse(nn.Module):
    def __init__(self, layer, init_sparsity=0.5, heat=2 / 3, stretch=0.1):
        assert all(0 < x < 1 for x in [init_sparsity, heat, stretch])
        super().__init__()
        self.layer = layer
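The `heat` and `stretch` arguments suggest the hard-concrete gate construction commonly used for L0 regularization (Louizos et al., https://arxiv.org/abs/1712.01312). A sketch of that sampling step under this assumption (the gist's actual body is not shown):

```python
import torch

def hard_concrete_gate(log_alpha, heat=2 / 3, stretch=0.1):
    """Sample hard-concrete gates in [0, 1] (hypothetical sketch)."""
    u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)
    # concrete (Gumbel-softmax) relaxation of a Bernoulli gate
    s = torch.sigmoid((u.log() - (1 - u).log() + log_alpha) / heat)
    # stretch to (-stretch, 1 + stretch), then clamp so exact 0s and 1s occur
    s = s * (1 + 2 * stretch) - stretch
    return s.clamp(0, 1)
```

Clamping the stretched sample is what puts nonzero probability mass on exactly zero, which is how the relaxation approximates an L0 penalty while staying differentiable almost everywhere.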
ModarTensai / gaussian_relu_moments.ipynb
(Notebook preview not available.)