@Mehdi-Amine
Last active June 2, 2020 23:06
Differentiation of ReLU: comparing a hand-written ReLU derivative with PyTorch autograd.
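In math terms, ReLU(z) = max(0, z), so its derivative is 1 for z > 0 and 0 for z < 0. The (sub)gradient at z = 0 is taken to be 0 here, which matches both the torch.where implementation below and what PyTorch's autograd returns for F.relu.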
import torch
import torch.nn.functional as F

#----------- Implementing the math -----------#
def relu_prime(z):
    # ReLU'(z): 1 where z > 0, 0 elsewhere (including at z = 0)
    return torch.where(z > 0, torch.tensor(1.), torch.tensor(0.))

z = torch.tensor([[-0.2], [0.6]], requires_grad=True)
relu_p = relu_prime(z)

#----------- Using Pytorch autograd -----------#
torch_relu = F.relu(z)
# backward() on a non-scalar output needs an upstream gradient; ones give dReLU/dz directly
torch_relu.backward(torch.tensor([[1.], [1.]]))

#----------- Comparing outputs -----------#
print(f"Pytorch ReLU': \n{z.grad} \nOur ReLU': \n{relu_p}")
'''
Out:
Pytorch ReLU':
tensor([[0.],[1.]])
Our ReLU':
tensor([[0.],[1.]])
'''
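As a quick sanity check, the same comparison can be run on a larger random input. This is a minimal sketch, not part of the original gist: the batch shape (128, 10) and the use of torch.allclose are illustrative choices.

import torch
import torch.nn.functional as F

# Fresh input so it does not interfere with the small example above
z_batch = torch.randn(128, 10, requires_grad=True)

# Hand-written derivative: 1 where z > 0, 0 elsewhere
manual_grad = torch.where(z_batch > 0, torch.tensor(1.), torch.tensor(0.))

# Autograd derivative: backward with an upstream gradient of ones
torch_out = F.relu(z_batch)
torch_out.backward(torch.ones_like(z_batch))

print(torch.allclose(z_batch.grad, manual_grad))  # expected: True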