@ttpro1995
Created May 3, 2017 11:48
PyTorch: training a model without torch.optim
import torch
import torch.nn as nn
from torch.autograd import Variable as Var

# define the data: each label is simply the first coordinate of its input,
# so the problem is linearly separable
dt = torch.Tensor([[1, 0], [0, 1], [1, 1], [0, 0]])
labels = torch.LongTensor([1, 0, 1, 0])
learning_rate = 0.01

model = nn.Linear(2, 2)
criterion = nn.CrossEntropyLoss()

# wrap tensors in Variables (required for autograd in PyTorch < 0.4)
dt = Var(dt)
labels = Var(labels)

for e in range(1000):
    # clear gradients accumulated from the previous iteration
    model.zero_grad()
    output = model(dt)
    loss = criterion(output, labels)
    loss.backward()
    # manual SGD step: p <- p - lr * grad, applied to every parameter
    for f in model.parameters():
        f.data.sub_(f.grad.data * learning_rate)
    print(loss)

# test: the predicted class is the index of the larger logit
output = model(dt)
val, pred = torch.max(output, 1)
print(pred)
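
For comparison, here is a minimal sketch of the same training loop using torch.optim.SGD, the optimizer the gist's title is deliberately avoiding. The optimizer's step() call performs exactly the update the manual parameter loop does above. This assumes PyTorch 0.4 or later, where the Variable wrapper is a no-op and plain tensors can be used directly.

import torch
import torch.nn as nn
import torch.optim as optim

dt = torch.Tensor([[1, 0], [0, 1], [1, 1], [0, 0]])
labels = torch.LongTensor([1, 0, 1, 0])

model = nn.Linear(2, 2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

for e in range(1000):
    optimizer.zero_grad()  # replaces model.zero_grad()
    loss = criterion(model(dt), labels)
    loss.backward()
    optimizer.step()  # replaces the manual f.data.sub_(...) loop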