rtqichen / pytorch_x_numpy_funky_behavior.md
Last active January 14, 2018 23:35
numpy.float64 __mul__ torch.autograd.Variable

Some funny behavior in numpy.float64.__mul__ when a numpy scalar is multiplied by a PyTorch Variable. Reproduction code:

import torch
import numpy as np

scalar = np.array([1.1])[0]  # numpy.float64, not the primitive Python float
var = torch.autograd.Variable(torch.randn(2))

res1 = var * scalar  # dispatches to Variable.__mul__: returns a Variable, as expected
res2 = scalar * var  # dispatches to numpy.float64.__mul__: the result is not a plain Variable

print(res1)
print(res2)
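
A simple workaround (my own suggestion, not part of the gist): cast the numpy scalar to a Python float before multiplying, so that Variable's own arithmetic handles both operand orders.

# Hypothetical workaround: convert numpy.float64 to a Python float first,
# so Variable.__rmul__ handles the multiplication instead of numpy.
res3 = float(scalar) * var  # now a Variable, matching res1
print(res3)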
rtqichen / pytorch_weight_norm.py
Last active May 11, 2023 06:58
PyTorch weight normalization - works for all nn.Modules (probably)
## Weight norm is now built into PyTorch as a pre-forward hook (torch.nn.utils.weight_norm), so use that instead :)
import torch
import torch.nn as nn
from torch.nn import Parameter
from functools import wraps

class WeightNorm(nn.Module):
    # Suffixes appended to the wrapped parameter's name for the
    # magnitude ('_g') and direction ('_v') parameters of weight norm.
    append_g = '_g'
    append_v = '_v'
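
For reference, the comment at the top of the file points to the built-in hook-based API. A minimal usage sketch follows; the layer and its dimensions are made up for illustration:

import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

# Hypothetical example layer; any module with a 'weight' parameter works.
layer = weight_norm(nn.Linear(20, 40), name='weight')
# The module now exposes weight_g (magnitude) and weight_v (direction),
# and recomputes 'weight' from them via a pre-forward hook on every call.
out = layer(torch.randn(8, 20))
print(layer.weight_g.shape, layer.weight_v.shape)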