@thouis
Created December 15, 2015 16:35
local grad = require 'autograd'
local nn = require 'nn'

local W1 = torch.FloatTensor(1, 1, 10, 10):normal()

local pooler = nn.SpatialMaxPooling(2, 2, 2, 2)
local unpooler = nn.SpatialMaxUnpooling(pooler)

-- not sure why these lines are necessary.
pooler.indices = torch.FloatTensor()
pooler.output = torch.FloatTensor()
unpooler.output = torch.FloatTensor()

-- pool, then unpool, and reduce to a scalar so autograd can differentiate.
local testunpool = function(inputs)
   local pooled = pooler(inputs.W1)
   local unpooled = unpooler(pooled)
   return torch.sum(unpooled)
end

print(testunpool({W1 = W1}))

local df = grad(testunpool)
print(df({W1 = W1}))
@fmassa

fmassa commented Dec 15, 2015

If you are using float tensors, it's better to cast the modules with :float() instead of manually overwriting their member tensors.

@thouis
Author

thouis commented Jan 7, 2016

@fmassa, sorry, I'm not a Torch expert. Can you give me an example of what you mean?

@alexbw

alexbw commented Jan 12, 2016

I think he means calling pooler:float() instead of assigning float tensors to its members by hand, like pooler.output = torch.FloatTensor().
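For concreteness, here is a sketch of how the original snippet might look with that change applied, assuming the same torch-autograd setup as above (untested; the :float() calls and module names follow the standard nn API, where :float() casts a module's internal state to FloatTensor in one call):

```lua
local grad = require 'autograd'
local nn = require 'nn'

local W1 = torch.FloatTensor(1, 1, 10, 10):normal()

-- Cast the whole modules to float instead of overwriting
-- pooler.indices / pooler.output / unpooler.output by hand.
local pooler = nn.SpatialMaxPooling(2, 2, 2, 2):float()
local unpooler = nn.SpatialMaxUnpooling(pooler):float()

local testunpool = function(inputs)
   local pooled = pooler(inputs.W1)
   local unpooled = unpooler(pooled)
   return torch.sum(unpooled)
end

print(testunpool({W1 = W1}))

local df = grad(testunpool)
print(df({W1 = W1}))
```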
