@a-z-e-r-i-l-a
Last active June 8, 2019 11:24
I am looking for the most efficient way to get the Jacobian of a function through PyTorch and have so far come up with the following two solutions. Since there does not seem to be a big difference between using a loop in the first solution and the batched backward pass in the second, I wanted to ask whether there might still be a faster way to calculate a Jacobian in PyTorch.
# Solution 1: build the Jacobian row by row with torch.autograd.grad
from time import time
import torch
from torch.autograd import Variable, grad

def func(X):
    # second and third outputs are reconstructed placeholders (the excerpt is truncated)
    return torch.stack((X.pow(2).sum(1), X.pow(3).sum(1), X.pow(4).sum(1)), 1)

# example input (its definition is missing from the excerpt); CUDA matches the .cuda() call in Solution 2
X = torch.ones(1, int(1e5), device="cuda", requires_grad=True)
Y = func(X)

t = time()
J = torch.zeros(3, int(1e5))
for i in range(3):
    J[i] = grad(Y[0][i], X, create_graph=True, retain_graph=True, allow_unused=True)[0]
print(time() - t)
# Output: 0.002 s
# Solution 2: one batched backward pass, passing the identity matrix as the gradient
def Jacobian(f, X):
    # repeat the input once per output row so a single backward() fills the whole Jacobian
    X_batch = Variable(X.repeat(3, 1), requires_grad=True)
    f(X_batch).backward(torch.eye(3).cuda(), retain_graph=True)
    return X_batch.grad

t = time()
J2 = Jacobian(func, X)
print(time() - t)
# Output: 0.001 s
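
For reference, newer PyTorch releases (1.5 and later, i.e. after this gist) ship a built-in helper, torch.autograd.functional.jacobian; a minimal sketch, assuming the same func and X as above:

from torch.autograd.functional import jacobian

# func maps the (1, 1e5) input to a (1, 3) output, so the raw result has shape
# (1, 3, 1, 1e5); squeezing the singleton dimensions gives the (3, 1e5) Jacobian
J3 = jacobian(func, X).squeeze(2).squeeze(0)
# vectorize=True (PyTorch 1.8+) batches the backward calls, much like Solution 2:
# J3 = jacobian(func, X, vectorize=True).squeeze(2).squeeze(0)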
@a-z-e-r-i-l-a (Author):

A follow-up question is what the most efficient way to calculate the Hessian might be. Finally, I would like to see whether something like this can be done more easily or more efficiently in TensorFlow.
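
On the Hessian part, the same trick from Solution 1 can be applied twice: differentiate a scalar output with create_graph=True, then differentiate each entry of the resulting gradient. A minimal sketch with a small, hypothetical scalar function g and a 5-dimensional input x (not from the gist; the full 1e5-dimensional X would give an unmanageable 1e5 x 1e5 Hessian):

# Hessian by applying grad twice; g and x are illustrative only
def g(x):
    return x.pow(2).sum() + x.prod()

x = torch.ones(5, device="cuda", requires_grad=True)
gx, = grad(g(x), x, create_graph=True)  # first derivatives, shape (5,)
# each row i is the gradient of gx[i], i.e. row i of the 5 x 5 Hessian
H = torch.stack([grad(gx[i], x, retain_graph=True)[0] for i in range(5)])

On the TensorFlow side, tf.GradientTape exposes tape.jacobian and tape.batch_jacobian, which play a similar role to the batched backward pass in Solution 2.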
