@Mehdi-Amine
Last active June 3, 2020 21:59
Implementing cross-entropy
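The snippet below implements cross-entropy by hand: for a batch of N instances with softmax probabilities p and integer labels y, it computes CE = -(1/N) * sum_i log p[i, y[i]], then checks the result against PyTorch's F.cross_entropy.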
import torch
import torch.nn.functional as F
#----------- Implementing the math -----------#
def softmax(z):
    # Row-wise softmax: exponentiate and normalize so each row sums to 1
    return torch.exp(z) / torch.exp(z).sum(dim=1, keepdim=True)

def cross_entropy(activations, labels):
    # Average the negative log of each instance's predicted probability for its true class
    return -torch.log(activations[range(labels.shape[0]), labels]).mean()

zs = torch.tensor([[0.1, 0.4, 0.2], [0.3, 0.9, 0.6]])  # The values of 3 output neurons for 2 instances
activations = softmax(zs)  # = [[0.2894, 0.3907, 0.3199], [0.2397, 0.4368, 0.3236]]
y = torch.tensor([2, 0])   # class indices, equivalent to one-hot [[0,0,1],[1,0,0]]
ce = cross_entropy(activations, y)
#----------- Using PyTorch's built-in cross-entropy -----------#
torch_ce = F.cross_entropy(zs, y)  # applies log-softmax internally, so it takes the raw outputs zs
#----------- Comparing outputs -----------#
print(f"Pytorch cross-entropy: {torch_ce} \nOur cross-entropy: {ce}")
'''
Out:
PyTorch cross-entropy: 1.28411
Our cross-entropy: 1.28411
'''
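#----------- Sanity check: built-in loss = log-softmax + NLL -----------#
# A quick sketch (an addition, not part of the original gist): F.cross_entropy on the
# raw outputs zs is equivalent to a log-softmax followed by the negative
# log-likelihood loss, averaged over the batch.
log_probs = F.log_softmax(zs, dim=1)  # log of the softmax activations
nll = F.nll_loss(log_probs, y)        # mean negative log-likelihood of the true classes
print(torch.isclose(nll, torch_ce))   # Out: tensor(True)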