@simonkamronn
Created November 11, 2016 16:06
Hyperband for hyperparameter optimization
# https://people.eecs.berkeley.edu/~kjamieson/hyperband.html
from math import log, ceil
from numpy import argsort

# you need to write the following hooks for your custom problem
from problem import get_random_hyperparameter_configuration, run_then_return_val_loss

max_iter = 81  # maximum iterations/epochs per configuration
eta = 3        # defines downsampling rate (default = 3)
logeta = lambda x: log(x) / log(eta)
s_max = int(logeta(max_iter))   # number of unique executions of Successive Halving (minus one)
B = (s_max + 1) * max_iter      # total number of iterations (without reuse) per execution of Successive Halving (n, r)

#### Begin Finite Horizon Hyperband outer loop. Repeat indefinitely.
for s in reversed(range(s_max + 1)):
    n = int(ceil(B / max_iter / (s + 1) * eta**s))  # initial number of configurations
    r = max_iter * eta**(-s)                        # initial number of iterations to run configurations for

    #### Begin Finite Horizon Successive Halving with (n, r)
    T = [get_random_hyperparameter_configuration() for i in range(n)]
    for i in range(s + 1):
        # Run each of the n_i configs for r_i iterations and keep the best n_i/eta
        n_i = n * eta**(-i)
        r_i = r * eta**(i)
        val_losses = [run_then_return_val_loss(num_iters=r_i, hyperparameters=t) for t in T]
        T = [T[j] for j in argsort(val_losses)[0:int(n_i / eta)]]
    #### End Finite Horizon Successive Halving with (n, r)
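The two hooks imported from problem above are problem-specific and are not defined in the gist. As a minimal sketch of what they might look like (the toy objective, the lr key, and the use of plain gradient descent below are illustrative assumptions, not part of the gist):

# Illustrative sketch only: one possible implementation of the two hooks.
# The "model" is plain gradient descent on a quadratic, the hyperparameter is
# the learning rate, and num_iters is the number of gradient steps.
import numpy as np

rng = np.random.RandomState(0)

def get_random_hyperparameter_configuration():
    # Sample a learning rate log-uniformly from [1e-4, 1e0]
    return {'lr': 10 ** rng.uniform(-4, 0)}

def run_then_return_val_loss(num_iters, hyperparameters):
    # Train from scratch for num_iters steps and return the final loss.
    x = 10.0  # fixed starting point so runs are comparable
    for _ in range(int(num_iters)):
        grad = 2 * x                      # d/dx of x**2
        x -= hyperparameters['lr'] * grad
    return x ** 2                         # validation loss = objective value

Note that this sketch retrains from scratch on every call; a real implementation could instead resume from a checkpoint so that iterations already spent on a configuration are reused.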

bkj commented Dec 28, 2016

Clarifying question: in the inner loop, should run_then_return_val_loss

  • run r_i more iterations of the model w/ parameters t, or
  • run r_i iterations of the model w/ parameters t from scratch?

Also, do you have an end-to-end example of using this on some simple optimization problem? Presumably there's a little more bookkeeping-type code that goes along with this -- e.g. storing the best ts from each inner loop.

Thanks
~ Ben
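
One possible shape for that bookkeeping, offered purely as an illustrative sketch and not as the gist author's answer: wrap run_then_return_val_loss so the best (loss, configuration) pair seen so far is recorded, and read it out after the outer loop finishes.

# Sketch only (an assumption, not code from the gist): record the best result
# seen across all brackets without changing the Hyperband loops themselves.
best = {'loss': float('inf'), 'hyperparameters': None}

def run_and_track(num_iters, hyperparameters):
    loss = run_then_return_val_loss(num_iters=num_iters, hyperparameters=hyperparameters)
    if loss < best['loss']:
        best['loss'] = loss
        best['hyperparameters'] = hyperparameters
    return loss

Using run_and_track in place of run_then_return_val_loss in the inner loop leaves best holding the strongest configuration found. One caveat: losses evaluated at different budgets r_i are not strictly comparable, so it may be preferable to record only the losses from the last round of each bracket.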
