@andreaskoepf
Created September 16, 2015 01:03
mini bottleneck auto-encoder weight tying demo
require 'nn'
-- mini bottleneck auto-encoder weight tying demo
net = nn.Sequential()
net:add(nn.Linear(8, 8))   -- layer 1: encoder input layer
net:add(nn.PReLU())        -- layer 2
net:add(nn.Linear(8, 3))   -- layer 3: down to the 3-unit bottleneck
net:add(nn.PReLU())        -- layer 4
net:add(nn.Linear(3, 8))   -- layer 5: up from the bottleneck
net:add(nn.PReLU())        -- layer 6
net:add(nn.Linear(8, 8))   -- layer 7: decoder output layer
-- tie the decoder weights to the transposed encoder weights; :set() makes the
-- tensors share storage, so tying must happen before net:getParameters()
-- tie layer 7 (Linear 8->8) to layer 1 (Linear 8->8)
net:get(7).weight:set(net:get(1).weight:t())
net:get(7).gradWeight:set(net:get(1).gradWeight:t())
-- tie layer 5 (Linear 3->8) to layer 3 (Linear 8->3)
net:get(5).weight:set(net:get(3).weight:t())
net:get(5).gradWeight:set(net:get(3).gradWeight:t())
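-- sanity check (illustrative addition, not in the original gist): after
-- :set(), each tied pair views the same underlying storage, so a gradient
-- update to one layer's weight is immediately visible in the other
assert(torch.pointer(net:get(7).weight:storage()) == torch.pointer(net:get(1).weight:storage()))
assert(torch.pointer(net:get(5).weight:storage()) == torch.pointer(net:get(3).weight:storage()))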
-- flatten all parameters into single contiguous tensors (done after tying,
-- so the shared storages are preserved)
weights, gradient = net:getParameters()
mse = nn.MSECriterion()
-- training data: the 8x8 identity matrix, i.e. eight one-hot vectors that the
-- network must reconstruct through the 3-unit bottleneck
batch = torch.eye(8)
for i=1,2000 do
  gradient:zero()
  net:forward(batch)
  local loss = mse:forward(net.output, batch)
  print(string.format('%d: loss: %f', i, loss))
  net:backward(batch, mse:backward(net.output, batch))
  weights:add(-0.5, gradient)  -- vanilla full-batch gradient descent, learning rate 0.5
end
print(net:get(3).weight)   -- bottleneck encoder weight (3x8)
print(net:get(5).weight)   -- tied decoder weight (8x3, a transposed view of the same storage)
print(net.output)          -- final reconstruction, should approximate torch.eye(8)
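-- illustrative follow-up (an assumption, not part of the original gist):
-- inspect the learned 3-dimensional bottleneck codes by reading the output
-- of layer 4, the PReLU that follows the 8->3 Linear
net:forward(batch)
print(net:get(4).output)   -- 8x3 matrix, one code per one-hot input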