Modar M. Alfadly (ModarTensai): public gists

Block or report user

Report or block ModarTensai

Hide content and notifications from this user.

Learn more about blocking users

Contact Support about this user’s behavior.

Learn more about reporting abuse

Report abuse
View GitHub Profile
@ModarTensai
ModarTensai / larc.py
Last active Jan 15, 2020
Layer-wise Adaptive Rate Control (LARC) in PyTorch. It is LARS with clipping support in addition to scaling.
class LARC:
    """Layer-wise Adaptive Rate Control.

    LARC is LARS that supports clipping along with scaling:
    https://arxiv.org/abs/1708.03888

    This implementation is inspired by:
    https://github.com/NVIDIA/apex/blob/master/apex/parallel/LARC.py
    """
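The preview stops before the optimizer logic. As a rough sketch of the technique (not the gist's actual code; the helper name, `trust_coefficient`, and the `clip` flag are my assumptions), LARC rescales each parameter's gradient by a layer-wise trust ratio and, in clipping mode, caps the effective rate at the base learning rate:

import torch

def larc_adjust(optimizer, trust_coefficient=0.02, clip=True, eps=1e-8):
    """Fold the LARC trust ratio into the gradients before optimizer.step()."""
    for group in optimizer.param_groups:
        for p in group['params']:
            if p.grad is None:
                continue
            w_norm = p.data.norm()
            g_norm = p.grad.data.norm()
            if w_norm == 0 or g_norm == 0:
                continue  # leave empty or untrained layers alone
            # LARS: layer-wise adaptive learning rate.
            adaptive_lr = trust_coefficient * w_norm / (g_norm + eps)
            if clip:
                # LARC clipping: effective rate never exceeds the base lr.
                adaptive_lr = torch.clamp(adaptive_lr / group['lr'], max=1.0)
            p.grad.data.mul_(adaptive_lr)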
ModarTensai / expression.py
import torch
from torch import nn
import torch.nn.functional as F  # the preview's `from torch import functional as F` imports the wrong module


class Expression:
    def __init__(self, out=None, **units):
        self.out = out     # output specification, as passed in
        self.terms = {}    # registered terms
        self.coeffs = {}   # their coefficients
ModarTensai / grid_search.py
Created Dec 4, 2019
Coarse-to-fine grid search
from argparse import Namespace

import torch


def grid_search(objective, *bounds, density=10, eps=1e-5, max_steps=None):
    """Perform coarse-to-fine grid search for the minimum objective.

    >>> def f(x, y):
    ...     x = x + 0.5
    """
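The preview ends inside the doctest. Here is a self-contained sketch of coarse-to-fine grid search under the same signature (my reconstruction, not the gist's code; it assumes `objective` takes and returns plain floats):

import itertools
import torch

def coarse_to_fine_search(objective, *bounds, density=10, eps=1e-5, max_steps=None):
    """Minimize objective(*args) by gridding each bound and zooming into the best cell."""
    bounds = [tuple(b) for b in bounds]  # one (low, high) pair per argument
    steps = 0
    while True:
        axes = [torch.linspace(lo, hi, density).tolist() for lo, hi in bounds]
        best_point, best_value = None, float('inf')
        for point in itertools.product(*axes):
            value = objective(*point)
            if value < best_value:
                best_point, best_value = point, value
        # Shrink each bound to one grid cell around the best point.
        new_bounds = []
        for (lo, hi), x in zip(bounds, best_point):
            half = (hi - lo) / (density - 1)
            new_bounds.append((max(lo, x - half), min(hi, x + half)))
        steps += 1
        widths = [hi - lo for lo, hi in new_bounds]
        if max(widths) < eps or (max_steps is not None and steps >= max_steps):
            return best_point, best_value
        bounds = new_bounds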
ModarTensai / covariance.py
def cov(m, rowvar=True, inplace=False, unbiased=True):
    """Estimate the covariance matrix for the given data.

    Args:
        m: Variables and observations data tensor (accepts batches).
        rowvar: Whether rows are variables and columns are observations.
        inplace: Whether to subtract the variable means in place or not.
        unbiased: Whether to use the unbiased estimation or not.

    Returns:
        The estimated covariance matrix.
    """
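The preview shows only the docstring. One plausible body consistent with it (a sketch, not necessarily the gist's exact code):

import torch

def cov(m, rowvar=True, inplace=False, unbiased=True):
    if not rowvar:
        m = m.transpose(-1, -2)               # make rows the variables
    n = m.size(-1)                            # number of observations
    mean = m.mean(-1, keepdim=True)
    m = m.sub_(mean) if inplace else m - mean
    factor = 1.0 / (n - 1 if unbiased else n)
    return factor * (m @ m.transpose(-1, -2))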
ModarTensai / unravel_index.py
import torch


def unravel_index(index, shape):
    """Convert a flat index into a tuple of per-dimension indices for `shape`."""
    out = []
    for dim in reversed(shape):
        out.append(index % dim)  # coordinate along the current (last) dimension
        index = index // dim     # drop that dimension and continue
    return tuple(reversed(out))
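For example, the returned tuple can index the original tensor directly, mirroring numpy.unravel_index:

import torch

x = torch.arange(24).reshape(2, 3, 4)
flat = x.argmax()                      # flat index 23
coords = unravel_index(flat, x.shape)  # (tensor(1), tensor(2), tensor(3))
assert x[coords] == x.max()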
ModarTensai / l0_sparsity.py
from math import log

import torch
from torch import nn


class L0Sparse(nn.Module):
    def __init__(self, layer, init_sparsity=0.5, heat=2 / 3, stretch=0.1):
        assert all(0 < x < 1 for x in [init_sparsity, heat, stretch])
        super().__init__()
        self.layer = layer  # wrapped layer whose weights get learnable gates
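The hyperparameter names match the hard-concrete gates of Louizos et al., "Learning Sparse Neural Networks through L0 Regularization" (arXiv:1712.01312). A sketch of how such a gate is usually sampled (`log_alpha` is an assumed parameter name, not confirmed by the preview):

import torch

def hard_concrete_gate(log_alpha, heat=2 / 3, stretch=0.1):
    """Sample a stretched, clamped concrete gate in [0, 1] with exact zeros."""
    u = torch.rand_like(log_alpha)
    # Gumbel-sigmoid (concrete) relaxation with temperature `heat`.
    s = torch.sigmoid((u.log() - (1 - u).log() + log_alpha) / heat)
    # Stretch to (-stretch, 1 + stretch) then clamp, so the gate can hit 0 or 1.
    return (s * (1 + 2 * stretch) - stretch).clamp(0, 1)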
ModarTensai / gaussian_relu_moments.ipynb
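The notebook preview fails to render, but the filename points at a standard closed form: for X ~ N(mu, sigma^2), the first moment of relu(X) is mu*Phi(mu/sigma) + sigma*phi(mu/sigma). A small sketch of that formula (my code, not the notebook's):

import torch
from torch.distributions import Normal

def relu_gaussian_mean(mu, sigma):
    """E[relu(X)] for X ~ N(mu, sigma^2); mu and sigma are tensors."""
    std_normal = Normal(0.0, 1.0)
    a = mu / sigma
    return mu * std_normal.cdf(a) + sigma * std_normal.log_prob(a).exp()

For a standard normal input this gives 1/sqrt(2*pi), roughly 0.3989, as expected.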
ModarTensai / matrix_square_root.py
Last active Jan 21, 2020
Compute the square root of a positive definite matrix with differentiable operations in PyTorch (supports batching).
"""Matrix square root: https://github.com/pytorch/pytorch/issues/25481"""
import torch
def sqrtm(matrix):
"""Compute the square root of a positive definite matrix."""
# s, v = matrix.symeig(eigenvectors=True)
_, s, v = matrix.svd()
good = s > s.max(-1, True).values * s.size(-1) * torch.finfo(s.dtype).eps
components = good.sum(-1)
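The preview stops after the rank computation. Since a symmetric positive definite matrix has SVD V S V^T, one way the function could finish (a sketch under that assumption, not the gist's exact code):

import torch

def sqrtm_sketch(matrix):
    _, s, v = matrix.svd()  # for symmetric PSD input, SVD == eigendecomposition
    good = s > s.max(-1, True).values * s.size(-1) * torch.finfo(s.dtype).eps
    # Zero out numerically rank-deficient directions before the square root.
    s = torch.where(good, s, s.new_zeros(()))
    return (v * s.sqrt().unsqueeze(-2)) @ v.transpose(-2, -1)

Then sqrtm_sketch(m) @ sqrtm_sketch(m) recovers m for any positive (semi-)definite m, batched or not.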
ModarTensai / pca.py
Last active Sep 19, 2019
Principal Component Analysis (PCA) with PyTorch and NumPy
from collections import namedtuple

import numpy as np


class PCA:
    def __init__(self, features, variance=None):
        if variance is None:
            # No precomputed variance: fit from the raw features first.
            pca = self.fit(features).pca
            features = pca.projection_matrix
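The preview cuts off before `fit` is shown. A minimal SVD-based fit sketch of the same idea (my reconstruction; rows are assumed to be observations):

import torch

def pca_fit(features, k=None):
    mean = features.mean(0, keepdim=True)
    centered = features - mean
    # Columns of V are the principal axes of the centered data.
    _, s, v = torch.svd(centered)
    variance = s ** 2 / (features.size(0) - 1)  # explained variance per axis
    k = k or features.size(1)
    return v[:, :k], variance[:k], mean  # projection matrix, variances, mean

Projecting new data is then just (features - mean) @ projection_matrix.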
ModarTensai / named_tuples.py
Created Sep 18, 2019
Create namedtuple types for function outputs only once, with a nice type name.
from collections import namedtuple
from types import MethodType, FunctionType

# using neither typing.NamedTuple nor dataclasses.dataclass


def named_tuple(function, field_names, *args, name=None, **kwargs):
    """Memoize namedtuple types for function outputs."""
    # Unwrap bound methods so the memoized type lives on the plain function.
    if isinstance(function, MethodType):
        function = function.__func__
    assert isinstance(function, FunctionType)
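The preview ends at the assertion. A sketch of how the memoization could continue (the cache attribute and the title-cased naming rule are my guesses, not confirmed by the preview):

from collections import namedtuple

def named_tuple_sketch(function, field_names, *args, name=None, **kwargs):
    cls = getattr(function, '_named_tuple_type', None)
    if cls is None:
        # Derive a nice type name from the function, e.g. grid_search -> GridSearch.
        cls = namedtuple(name or function.__name__.title().replace('_', ''),
                         field_names)
        function._named_tuple_type = cls  # create the type only once
    return cls(*args, **kwargs)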