@tuelwer
Last active June 17, 2024 21:55
pytorch-L-BFGS-example
import torch
import torch.optim as optim
import matplotlib.pyplot as plt

# 2d Rosenbrock function (minimum at x = (1, 1))
def f(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# Gradient descent
x_gd = 10*torch.ones(2, 1)
x_gd.requires_grad = True

gd = optim.SGD([x_gd], lr=1e-5)
history_gd = []
for i in range(100):
    gd.zero_grad()
    objective = f(x_gd)
    objective.backward()
    gd.step()
    history_gd.append(objective.item())

# L-BFGS
def closure():
    # Clear stale gradients, re-evaluate the objective, and backpropagate;
    # the strong-Wolfe line search re-invokes this closure and needs the
    # gradients to check the curvature condition.
    lbfgs.zero_grad()
    objective = f(x_lbfgs)
    objective.backward()
    return objective

x_lbfgs = 10*torch.ones(2, 1)
x_lbfgs.requires_grad = True

lbfgs = optim.LBFGS([x_lbfgs],
                    history_size=10,
                    max_iter=4,
                    line_search_fn="strong_wolfe")
history_lbfgs = []
for i in range(100):
    history_lbfgs.append(f(x_lbfgs).item())
    lbfgs.step(closure)

# Plotting
plt.semilogy(history_gd, label='GD')
plt.semilogy(history_lbfgs, label='L-BFGS')
plt.legend()
plt.show()
@tuelwer commented Oct 18, 2022

In the docs it says: "The closure should clear the gradients, compute the loss, and return it." So calling optimizer.zero_grad() might be a good idea here. However, when I clear the gradients in the closure, the optimizer does not make any progress. Also, I am unsure whether calling backward() inside the closure is necessary. (In the docs example it is called from within the closure.) As far as I understand, the closure is needed to perform the line search, which only needs to re-evaluate the objective.

This matter is also discussed in this issue.

@tuelwer commented Oct 18, 2022

Correction: it seems that computing the gradients is necessary for checking the second Wolfe condition. I have changed the gist accordingly. Thank you for pointing this out!
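
In other words, a minimal sketch of the two closure variants (reusing the gist's names; closure_no_grad is a hypothetical name used only for illustration):

# NOT sufficient with line_search_fn="strong_wolfe": the line search
# re-invokes the closure at trial points and needs fresh gradients
# there to check the curvature (second Wolfe) condition.
def closure_no_grad():
    return f(x_lbfgs)

# Sufficient: clears stale gradients, recomputes the loss, and
# backpropagates, as the docs recommend.
def closure():
    lbfgs.zero_grad()
    objective = f(x_lbfgs)
    objective.backward()
    return objective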

@davidaknowles

Thanks for sharing this. The docs are a bit unclear, but I think if we are doing full-dataset training (as opposed to minibatching) we can just set max_iter sufficiently high, and then the for loop isn't needed (i.e., just call lbfgs.step once).
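
For reference, that single-call pattern might look like the following sketch (assuming full-dataset training; max_iter=100 is an illustrative value, x_single is a fresh variable name for this sketch, and f is the objective from the gist above):

x_single = 10*torch.ones(2, 1)
x_single.requires_grad = True

# Let the optimizer iterate internally instead of looping over step()
lbfgs = optim.LBFGS([x_single],
                    history_size=10,
                    max_iter=100,  # high enough to converge in one call
                    line_search_fn="strong_wolfe")

def closure():
    lbfgs.zero_grad()
    objective = f(x_single)
    objective.backward()
    return objective

lbfgs.step(closure)  # a single outer call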

@maqifrnswa

Thanks for this gist

> Thanks for sharing this. The docs are a bit unclear, but I think if we are doing full-dataset training (as opposed to minibatching) we can just set max_iter sufficiently high, and then the for loop isn't needed (i.e., just call lbfgs.step once).

I think so, but lbfgs.step might need to be called twice to work. I'm using it on full-dataset training, and running it just once doesn't seem to update anything. There are a couple of "if state['n_iter'] == 1:" sections in lbfgs.step that initialize variables that are later computed when n_iter is greater than one.
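
A sketch consistent with that observation (not verified across PyTorch versions): call step more than once, and monitor the returned loss to confirm the parameters actually move. The loop bound and tolerance below are illustrative values.

# step() returns the loss from the first closure evaluation, so it
# can be used to check whether the optimizer is making progress.
prev = float('inf')
for _ in range(10):
    loss = lbfgs.step(closure).item()
    if abs(prev - loss) < 1e-12:  # illustrative tolerance
        break
    prev = loss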
