@juliensimon
Created December 20, 2019 23:21
DGL part 6
import torch
import torch.nn.functional as F

# net, G, inputs, labeled_nodes and labels are assumed to be defined in the earlier parts of this series
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
all_preds = []
epochs = 50

for epoch in range(epochs):
    # forward pass over the full graph
    preds = net(G, inputs)
    all_preds.append(preds)
    # we only compute the loss for labeled nodes (semi-supervised setting)
    loss = F.cross_entropy(preds[labeled_nodes], labels)
    # PyTorch accumulates gradients by default, so we zero them before the backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print('Epoch %d | Loss: %.4f' % (epoch, loss.item()))
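
For context, the loop above expects a graph G, a model net, node features inputs, and the labeled node indices and labels to already exist. Below is a minimal sketch of what such a setup could look like with DGL's GraphConv layer; the toy graph, feature sizes and class count are illustrative assumptions, not the code from the original series.

# Minimal sketch of the objects the training loop expects
# (names and values are illustrative assumptions)
import dgl
import torch
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn.pytorch import GraphConv

# Toy undirected graph with 4 nodes (both edge directions added explicitly)
src = torch.tensor([0, 1, 1, 2, 2, 3])
dst = torch.tensor([1, 0, 2, 1, 3, 2])
G = dgl.graph((src, dst))

# One-hot node features, as in many introductory GNN examples
inputs = torch.eye(4)

# Semi-supervised setting: only two nodes carry a label
labeled_nodes = torch.tensor([0, 3])
labels = torch.tensor([0, 1])

# Two-layer graph convolutional network
class GCN(nn.Module):
    def __init__(self, in_feats, hidden_size, num_classes):
        super().__init__()
        self.conv1 = GraphConv(in_feats, hidden_size)
        self.conv2 = GraphConv(hidden_size, num_classes)

    def forward(self, g, features):
        h = F.relu(self.conv1(g, features))
        return self.conv2(g, h)

net = GCN(in_feats=4, hidden_size=8, num_classes=2)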