@rtqichen
Last active January 14, 2018 23:35
numpy.float64 __mul__ torch.autograd.Variable

numpy.float64.__mul__ behaves surprisingly when a numpy.float64 scalar is multiplied by a PyTorch Variable. Reproduction code:

import torch
import numpy as np
scalar = np.array([1.1])[0]  # numpy.float64, not the primitive Python float
var = torch.autograd.Variable(torch.randn(2))
res1 = var * scalar  # dispatched to Variable.__mul__
res2 = scalar * var  # dispatched to numpy.float64.__mul__
print(res2)

The result (shown verbatim except for a long run of blank lines trimmed between the two entries):

[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing:
 0.1395
[torch.FloatTensor of size 1]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]

 [[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing:
 1.0387
[torch.FloatTensor of size 1]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]

res1 is a torch.autograd.Variable, as expected. res2, however, is a numpy.ndarray nested 32 levels deep: indexing res2[i] through 31 further [0] subscripts (i.e. res2[i][0][0]...[0], 31 zeros) recovers res1[i].
