Benchmark on OpenML's CC18 using TPOT and auto-sklearn

# /home/jhoof/python/python36/bin/python3 bench.py
import openml
from arbok.bench import Benchmark

# We create a benchmark setup where we specify the job headers, the Python
# interpreter we want to use, the directory where the jobs (.sh files) are
# stored, and the names of the config and log files created below.
bench = Benchmark(
    headers="#PBS -lnodes=1:cpu3\n#PBS -lwalltime=1:30:00",
    python_interpreter="/home/jhoof/python/python36/bin/python3",  # Path to interpreter
    root="/home/jhoof/benchmark/",
    jobs_dir="jobs",
    config_file="config.json",
    log_file="log.json"
)
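
# The headers above are PBS directives: "-lnodes=1:cpu3" requests a single
# node (":cpu3" presumably selects a node property on this particular
# cluster) and "-lwalltime=1:30:00" gives each job a walltime of 1 hour and
# 30 minutes; adjust these to match your own cluster's setup.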

# Config file
config_file = bench.create_config_file(
    # Wrapper parameters
    wrapper={"refit": True, "verbose": False, "retry_on_error": True},

    # TPOT parameters
    tpot={
        "max_time_mins": 6,      # Max total time in minutes
        "max_eval_time_mins": 1  # Max time per candidate in minutes
    },

    # auto-sklearn parameters
    autosklearn={
        "time_left_for_this_task": 360,  # Max total time in seconds
        "per_run_time_limit": 60         # Max time per candidate in seconds
    }
)
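
# The two budgets above are aligned: TPOT gets 6 minutes in total with at most
# 1 minute per candidate pipeline, and auto-sklearn gets the equivalent 360
# seconds in total with 60 seconds per run, so both tools receive the same
# optimization budget.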

# Next, we load the tasks we want to benchmark on from OpenML.
# In this case, we load the list of task IDs from study 99 (the OpenML-CC18
# benchmark suite).
tasks = openml.study.get_study(99).tasks
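
# The study's tasks attribute is a plain list of OpenML task ids, so as an
# optional sanity check we can print how many tasks the benchmark will cover.
print("Benchmarking on %d tasks" % len(tasks))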

# Next, we create jobs for both TPOT and auto-sklearn.
bench.create_jobs(tasks, classifiers=["tpot", "autosklearn"])

# And finally, we submit the jobs to the cluster's queue using qsub.
bench.submit_jobs()
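
# On a PBS/Torque cluster, the submitted jobs can then be monitored with
# "qstat -u $USER" until they finish.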