### Gist by @rajy4683, created February 8, 2021 16:32
import math
import time

import torch

# model, optimizer, criterion, train_iterator and valid_iterator are assumed
# to be defined earlier in the notebook this snippet was taken from.
N_EPOCHS = 10
CLIP = 0.1
best_valid_loss = float('inf')
### time keeping utility
def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs
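### The loop below calls train() and evaluate(), which are not included in this
### gist. The following is only a hypothetical sketch of what they might look
### like, assuming batches expose .src and .trg tensors and that model(src, trg)
### returns per-token logits; the actual implementations may differ.
def train(model, iterator, optimizer, criterion, clip):
    model.train()
    epoch_loss = 0
    for batch in iterator:
        src, trg = batch.src, batch.trg
        optimizer.zero_grad()
        output = model(src, trg)
        # flatten predictions and targets for the token-level loss;
        # the exact slicing/shift depends on the model's output layout
        loss = criterion(output.view(-1, output.shape[-1]), trg.view(-1))
        loss.backward()
        # gradient clipping with the CLIP constant defined above
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
        optimizer.step()
        epoch_loss += loss.item()
    return epoch_loss / len(iterator)


def evaluate(model, iterator, criterion):
    model.eval()
    epoch_loss = 0
    with torch.no_grad():
        for batch in iterator:
            src, trg = batch.src, batch.trg
            output = model(src, trg)
            loss = criterion(output.view(-1, output.shape[-1]), trg.view(-1))
            epoch_loss += loss.item()
    return epoch_loss / len(iterator)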
### Let's Train!
for epoch in range(N_EPOCHS):
    start_time = time.time()
    train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
    valid_loss = evaluate(model, valid_iterator, criterion)
    end_time = time.time()
    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    # keep the checkpoint with the lowest validation loss seen so far
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut5-model.pt')
    print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss} | Train PPL: {math.exp(train_loss)}')
    print(f'\t Val. Loss: {valid_loss} | Val. PPL: {math.exp(valid_loss)}')
### Training sample outputs
"""
Epoch: 01 | Time: 0m 39s
Train Loss: 4.386307164961021 | Train PPL: 80.34317638743143
Val. Loss: 3.2653768062591553 | Val. PPL: 26.189977679307926
Epoch: 02 | Time: 0m 39s
Train Loss: 3.1603193503644498 | Train PPL: 23.578124409515876
Val. Loss: 2.4508313834667206 | Val. PPL: 11.597985085126238
Epoch: 03 | Time: 0m 39s
Train Loss: 2.6523940573704925 | Train PPL: 14.187964820317568
Val. Loss: 2.1747743040323257 | Val. PPL: 8.800198728667743
Epoch: 04 | Time: 0m 39s
Train Loss: 2.4002296756542725 | Train PPL: 11.025708426652518
Val. Loss: 2.011881798505783 | Val. PPL: 7.477375029465812
Epoch: 05 | Time: 0m 39s
Train Loss: 2.2325309357454075 | Train PPL: 9.323433253893493
Val. Loss: 1.937463030219078 | Val. PPL: 6.941119204797184
Epoch: 06 | Time: 0m 39s
Train Loss: 2.1109495793145134 | Train PPL: 8.256077363832201
Val. Loss: 1.8629019558429718 | Val. PPL: 6.4424052463304635
Epoch: 07 | Time: 0m 39s
Train Loss: 2.012909343064094 | Train PPL: 7.48506231432527
Val. Loss: 1.8349394798278809 | Val. PPL: 6.264754989912277
Epoch: 08 | Time: 0m 39s
Train Loss: 1.9355599244785728 | Train PPL: 6.927922082735504
Val. Loss: 1.7861778289079666 | Val. PPL: 5.966603448553464
Epoch: 09 | Time: 0m 39s
Train Loss: 1.8734823544119947 | Train PPL: 6.510930333696693
Val. Loss: 1.7783344984054565 | Val. PPL: 5.919988453107751
Epoch: 10 | Time: 0m 38s
Train Loss: 1.8167370095652104 | Train PPL: 6.151752555306682
Val. Loss: 1.7476196140050888 | Val. PPL: 5.740920790933693
"""