@AFAgarap
Created January 7, 2020 12:55
Training a model in PyTorch.
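The loop below assumes that `epochs`, `device`, `train_loader`, `model`, `criterion`, and `optimizer` are already defined. A minimal sketch of that setup, assuming a small fully connected autoencoder over 784-dimensional (MNIST-style) inputs; the architecture, hyperparameters, and stand-in dataset are illustrative assumptions, not part of the gist:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# a minimal autoencoder, 784 -> 128 -> 784 (architecture is an assumption)
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 784),
).to(device)

criterion = nn.MSELoss()  # reconstruction loss between output and input
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epochs = 10

# stand-in dataset; in practice this would be e.g. torchvision's MNIST
features = torch.rand(256, 1, 28, 28)
labels = torch.zeros(256, dtype=torch.long)
train_loader = DataLoader(TensorDataset(features, labels), batch_size=32)
```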
for epoch in range(epochs):
    loss = 0
    for batch_features, _ in train_loader:
        # reshape mini-batch data to [N, 784] matrix
        # load it to the active device
        batch_features = batch_features.view(-1, 784).to(device)

        # reset the gradients back to zero
        # PyTorch accumulates gradients on subsequent backward passes
        optimizer.zero_grad()

        # compute reconstructions
        outputs = model(batch_features)

        # compute training reconstruction loss
        train_loss = criterion(outputs, batch_features)

        # compute accumulated gradients
        train_loss.backward()

        # perform parameter update based on current gradients
        optimizer.step()

        # add the mini-batch training loss to epoch loss
        loss += train_loss.item()

    # compute the epoch training loss
    loss = loss / len(train_loader)

    # display the epoch training loss
    print("epoch : {}/{}, loss = {:.6f}".format(epoch + 1, epochs, loss))
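As the comments in the loop note, PyTorch accumulates gradients across `backward()` calls rather than overwriting them, which is why `optimizer.zero_grad()` runs every iteration. A tiny standalone illustration of that behavior (not from the gist):

```python
import torch

# a single learnable parameter
w = torch.ones(1, requires_grad=True)

# two backward passes without resetting: gradients sum
(2 * w).sum().backward()
(2 * w).sum().backward()
# w.grad is now tensor([4.]) -- 2 + 2 accumulated

# zeroing the gradient (what optimizer.zero_grad() does per parameter)
w.grad.zero_()
(2 * w).sum().backward()
# w.grad is now tensor([2.]) -- a single, fresh gradient
```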