import torch

def jacobian(y, x, create_graph=False):
    # Build the Jacobian dy/dx row by row: backprop a one-hot grad_outputs
    # vector for each element of y.
    jac = []
    flat_y = y.reshape(-1)
    grad_y = torch.zeros_like(flat_y)
    for i in range(len(flat_y)):
        grad_y[i] = 1.
        grad_x, = torch.autograd.grad(flat_y, x, grad_y, retain_graph=True, create_graph=create_graph)
        jac.append(grad_x.reshape(x.shape))
        grad_y[i] = 0.
    return torch.stack(jac).reshape(y.shape + x.shape)

def hessian(y, x):
    # Hessian = Jacobian of the Jacobian; the inner call keeps the graph so
    # the outer call can differentiate through it.
    return jacobian(jacobian(y, x, create_graph=True), x)

def f(x):
    return x * x * torch.arange(4, dtype=torch.float)

x = torch.ones(4, requires_grad=True)
print(jacobian(f(x), x))
print(hessian(f(x), x))
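
For reference (worked out by hand rather than captured from the script's output): with this f and x = torch.ones(4), jacobian(f(x), x) is the 4×4 diagonal matrix diag(0, 2, 4, 6), since ∂f_i/∂x_j equals 2·i·x_i when j = i and 0 otherwise, and hessian(f(x), x) is a 4×4×4 tensor whose only nonzero entries are H[i, i, i] = 2·i.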

@KhalilElkhalil KhalilElkhalil commented Jan 17, 2020

Dear Adam,

Is there a way to compute the Laplacian of a function f w.r.t. a tensor x of dimension b×D (b: batch size, D: data dimension)? We need to compute $\sum_{i=1}^D \partial^2 f(x) / \partial x_i^2$ efficiently. Computing the full Hessian and taking its trace computes unnecessary off-diagonal entries that are irrelevant to the Laplacian.

Thanks a lot!


@apaszke apaszke commented Jan 25, 2020

I don't think there's any other way with the current AD methods. You don't have to keep the whole Hessian in memory of course (you can throw away a row of the Hessian once you've picked out the element you're interested in), but you'll still need to compute each row, just like the hessian function does.
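
For illustration, here's a minimal sketch of that row-by-row idea for a scalar-valued f on a single sample (a hypothetical laplacian helper in the spirit of the gist's jacobian, not code from the gist): it keeps only the diagonal entry of each Hessian row and never materializes the full Hessian.

    import torch

    def laplacian(y, x):
        # Sum of d^2 y / dx_i^2 for a scalar y, computed one Hessian row at a
        # time; only the i-th (diagonal) entry of row i is accumulated.
        grad_x, = torch.autograd.grad(y, x, create_graph=True)
        flat_grad = grad_x.reshape(-1)
        lap = 0.
        for i in range(flat_grad.numel()):
            row, = torch.autograd.grad(flat_grad[i], x, retain_graph=True)
            lap = lap + row.reshape(-1)[i]
        return lap

    x = torch.ones(4, requires_grad=True)
    y = (x * x * torch.arange(4, dtype=torch.float)).sum()
    print(laplacian(y, x))  # 2*(0+1+2+3) = 12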


@KhalilElkhalil KhalilElkhalil commented Jan 25, 2020

Thanks Adam!


@slerman12 slerman12 commented Apr 18, 2020

Is there a reason why you use grad_y instead of just indexing flat_y[i] in the call to autograd.grad?
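
For what it's worth, the indexing variant described in this question would look roughly like the sketch below (a hypothetical jacobian_by_indexing, not code from the gist or a reply from its author). It differentiates each scalar output directly instead of passing a one-hot grad_outputs vector; both approaches compute the same Jacobian.

    import torch

    def jacobian_by_indexing(y, x, create_graph=False):
        # Variant of the gist's jacobian: differentiate each scalar flat_y[i]
        # directly instead of passing a one-hot grad_outputs vector.
        jac = []
        flat_y = y.reshape(-1)
        for i in range(len(flat_y)):
            grad_x, = torch.autograd.grad(flat_y[i], x, retain_graph=True, create_graph=create_graph)
            jac.append(grad_x.reshape(x.shape))
        return torch.stack(jac).reshape(y.shape + x.shape)

    x = torch.ones(4, requires_grad=True)
    print(jacobian_by_indexing(x * x * torch.arange(4, dtype=torch.float), x))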


@jalane76 jalane76 commented Apr 19, 2020

Hello, I am relatively new to PyTorch and came across your Hessian function. It is much more elegant than some Hessian code from an academic paper that I am trying to reproduce. I've put together a toy example, but I keep getting the error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [2]] is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

I've been scouring the docs and googling, but for the life of me I can't figure out what I'm doing wrong. Any help you could offer would be greatly appreciated!

Here is my code:

import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np

torch.set_printoptions(precision=20, linewidth=180)

def jacobian(y, x, create_graph=False):
    jac = []                             
    flat_y = y.reshape(-1)     
    grad_y = torch.zeros_like(flat_y)
    for i in range(len(flat_y)):         
        grad_y[i] = 1.
        grad_x, = torch.autograd.grad(flat_y, x, grad_y, retain_graph=True, create_graph=create_graph)
        jac.append(grad_x.reshape(x.shape))
        grad_y[i] = 0.
    return torch.stack(jac).reshape(y.shape + x.shape)           
                                                                                                      
def hessian(y, x):  
    return jacobian(jacobian(y, x, create_graph=True), x)                                             
                                                                                                      
def f(x):                                                                                             
    return x * x                                            

np.random.seed(435537698)

num_dims = 2
num_samples = 3

X = [np.random.uniform(size=num_dims) for i in range(num_samples)]

mean = torch.Tensor(np.mean(X, axis=0))
mean.requires_grad = True

cov = torch.Tensor(np.cov(X, rowvar=False))

with autograd.detect_anomaly():
    hessian_matrices = hessian(f(mean), mean)
    print('hessian: \n{}\n\n'.format(hessian_matrices))

The output with anomaly detection turned on is here:


RuntimeError                              Traceback (most recent call last)
in ()
     67
     68 with autograd.detect_anomaly():
---> 69     hessian_matrices = hessian(f(mean), mean)
     70     print('hessian: \n{}\n\n'.format(hessian_matrices))

2 frames

in hessian(y, x)
     45     print('--> hessian()')
     46     j = jacobian(y, x, create_graph=True)
---> 47     return jacobian(j, x)
     48
     49 def f(x):

in jacobian(y, x, create_graph)
     28         print('\tgrad_y: \n\t{}\n'.format(grad_y))
     29
---> 30         grad_x, = torch.autograd.grad(flat_y, x, grad_y, retain_graph=True, create_graph=create_graph)
     31         print('\tgrad_x: \n\t{}\n')
     32

/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused)
    155     return Variable._execution_engine.run_backward(
    156         outputs, grad_outputs, retain_graph, create_graph,
--> 157         inputs, allow_unused)
    158
    159

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [2]] is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

Finally, I'm running my code in a Google Colab notebook with PyTorch 1.4, if that makes a difference.

Thanks!


@jalane76 jalane76 commented Apr 20, 2020

I did manage to get the code running now; it turns out I had made a "simplification" that broke it.

Your function f is:

def f(x):                                                                                             
    return x * x * torch.arange(4, dtype=torch.float)  

While mine was:

def f(x):                                                                                             
    return x * x  

I've since fixed it to:

def f(x):                                                                                             
    return x * x * torch.ones_like(x)

and it works like a charm. @apaszke any idea why that is the case?


@el-hult el-hult commented Apr 20, 2020

> I did manage to get the code running now; it turns out I had made a "simplification" that broke it.
>
> Your function f is:
>
>     def f(x):
>         return x * x * torch.arange(4, dtype=torch.float)
>
> While mine was:
>
>     def f(x):
>         return x * x
>
> I've since fixed it to:
>
>     def f(x):
>         return x * x * torch.ones_like(x)
>
> and it works like a charm. @apaszke any idea why that is the case?

You can switch torch.ones_like(x) to 1 and it still works...
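
For what it's worth, the error message points at an in-place modification, and the only tensor the loop mutates in place is grad_y. Assuming that is indeed the tensor the error refers to, one way to sidestep the problem for any f is to allocate a fresh one-hot vector on every iteration instead of reusing a single grad_y buffer, roughly like this sketch (a hypothetical jacobian_fresh, not the gist author's fix):

    import torch

    def jacobian_fresh(y, x, create_graph=False):
        # Same as the gist's jacobian, but builds a new one-hot grad_outputs
        # vector per output element, so no tensor that may have been saved in
        # the autograd graph is modified in place afterwards.
        jac = []
        flat_y = y.reshape(-1)
        for i in range(len(flat_y)):
            grad_y = torch.zeros_like(flat_y)
            grad_y[i] = 1.
            grad_x, = torch.autograd.grad(flat_y, x, grad_y, retain_graph=True, create_graph=create_graph)
            jac.append(grad_x.reshape(x.shape))
        return torch.stack(jac).reshape(y.shape + x.shape)

    def hessian_fresh(y, x):
        return jacobian_fresh(jacobian_fresh(y, x, create_graph=True), x)

    x = torch.ones(2, requires_grad=True)
    print(hessian_fresh(x * x, x))  # plain f(x) = x * x, no extra factor needed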


@Ronnypetson Ronnypetson commented Jun 26, 2020

Hello Adam! How could I give you credit if I use this code? Could it be a docstring in the documentation, a paper citation, or something similar?
