import torch
import torchopt

# choose the configuration
batch_size = 30 # number of collocation points sampled in the domain
num_iter = 100 # number of training iterations
learning_rate = 1e-1 # learning rate
domain = (-5.0, 5.0) # logistic equation domain
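
# NOTE: `params` and `loss_fn` used in the training loop below are not defined in
# this gist. The following is a minimal sketch (an assumption, not the original
# code) of how they could look for the logistic equation
# f'(x) = R * f(x) * (1 - f(x)) with an assumed boundary condition f(0) = 0.5,
# using functorch to obtain a purely functional model.
import torch.nn as nn
from functorch import grad, make_functional, vmap

R = 1.0           # assumed growth rate of the logistic equation
X_BOUNDARY = 0.0  # assumed boundary point
F_BOUNDARY = 0.5  # assumed boundary value f(X_BOUNDARY)

# small fully-connected network approximating the solution f(x)
model = nn.Sequential(nn.Linear(1, 10), nn.Tanh(), nn.Linear(10, 1))

# functional version of the model: fmodel(params, x) -> prediction
fmodel, params = make_functional(model)

def f(x, params):
    # evaluate the network on a single (scalar) collocation point
    return fmodel(params, x.unsqueeze(0)).squeeze()

def loss_fn(params, x):
    # interior loss: residual of the logistic ODE at the sampled collocation points
    f_value = vmap(f, in_dims=(0, None))(x, params)
    dfdx = vmap(grad(f), in_dims=(0, None))(x, params)
    interior = torch.mean((dfdx - R * f_value * (1.0 - f_value)) ** 2)

    # boundary loss: enforce the known value at the boundary point
    boundary = (f(torch.tensor(X_BOUNDARY), params) - F_BOUNDARY) ** 2

    return interior + boundary
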
# choose optimizer with functional API using functorch
optimizer = torchopt.FuncOptimizer(torchopt.adam(lr=learning_rate))
# train the model
for i in range(num_iter):
    # sample collocation points in the domain randomly at each epoch
    x = torch.FloatTensor(batch_size).uniform_(domain[0], domain[1])
    # update the parameters using the functional API
    loss = loss_fn(params, x)
    params = optimizer.step(loss, params)
    print(f"Iteration {i} with loss {float(loss)}")