- Retry job (today a retry creates a completely new job that is not linked to the retried one; see the schema sketch at the end of this list)
- Track job status pending/running/finished/failed
- Save/download the docker-compose file used by a job
- Be able to search repositories
- Add a max-retries limit to job claiming so a runner does not keep claiming a job that always fails (see the claim-query sketch at the end of this list)
- Detect performance regressions and notify the affected repositories (see the regression-check sketch at the end of this list)
- Design the benchmarks page
- Design the results page
- Add an about page
- Add a contact page
- Add a help/docs page
- Document deploy steps
- Automate deployment (bash, Ansible, Chef?)
- Continuous delivery (how can we achieve this?)
- Will we maintain a test.elixirbench.org?
- Define how/where the feature docs will live (wiki, a page on the website, other?)
- Write more about ElixirBench goals and features (see Travis CI)
- Write about config.yml settings
- Write about runners and infrastructure
- Write about benchmarks and examples
- Write about Contrib
- Define the issues/development workflow
- Define the release/version workflow
- How can we wrap benchee features, like a standard benchmarks_output_path, etc.? (see the wrapper sketch at the end of this list)
- How will the runners work? Will each project set up its own runners, or will we provide runners for them?
- How can we deal with differences in the CPU and memory configurations of the runners? Does it make sense to group benchmarks by resources?
- Will we allow all projects to use the service, or will we have a whitelist of projects like RubyBench?
- Gigalixir
- Heroku
- Digital Ocean
- A project decides to use our service
- We offer a free tier with a dual-core CPU and 2 GB of RAM to run benchmarks without external deps, or with a limited number of jobs per day
- We offer a custom plan priced proportionally to CPU and memory, with support for external Docker deps
- We offer consultancy on writing benchmarks for their projects: how to write, interpret, and improve them based on the project lifecycle
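
A minimal sketch of how the job status and retry-linking items above could look at the schema level, assuming jobs live in an Ecto schema. The module, field, and association names here are hypothetical, not the current ElixirBench schema:

```elixir
defmodule ElixirBench.Benchmarks.Job do
  use Ecto.Schema

  schema "jobs" do
    # "pending" | "running" | "finished" | "failed"
    field :status, :string, default: "pending"
    field :claimed_at, :utc_datetime
    field :completed_at, :utc_datetime

    # A retried job points back at the job it retries instead of being unrelated.
    belongs_to :retry_of, __MODULE__, foreign_key: :retry_of_job_id

    timestamps()
  end
end
```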
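
For the max-retries item, one option is to exclude jobs that have already failed too many times from the claim query, so a runner stops re-claiming a permanently broken job. A sketch assuming an Ecto-backed jobs table with a hypothetical failed_count column:

```elixir
defmodule ElixirBench.Benchmarks.ClaimJob do
  import Ecto.Query

  @max_retries 3

  # Oldest unclaimed job that has not yet exhausted its retries.
  def claimable_job_query do
    from j in "jobs",
      where: is_nil(j.claimed_at) and j.failed_count < ^@max_retries,
      order_by: [asc: j.inserted_at],
      limit: 1,
      select: j.id
  end
end
```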
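
For regression detection, a simple starting point is to compare a new measurement against the median of recent runs of the same benchmark and flag it when it is more than some threshold slower. This sketch assumes benchee-style iterations-per-second values (higher is better); the names and threshold are illustrative only:

```elixir
defmodule ElixirBench.Regressions do
  # 10% slower than the recent median counts as a regression.
  @threshold 0.10

  def regression?(new_ips, recent_ips) when is_list(recent_ips) and recent_ips != [] do
    new_ips < median(recent_ips) * (1.0 - @threshold)
  end

  defp median(values) do
    sorted = Enum.sort(values)
    mid = div(length(sorted), 2)

    case rem(length(sorted), 2) do
      1 -> Enum.at(sorted, mid)
      0 -> (Enum.at(sorted, mid - 1) + Enum.at(sorted, mid)) / 2
    end
  end
end
```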
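
For wrapping benchee, a thin helper module could inject a standard output path through the JSON formatter so every project's results land in a predictable location for the runner to collect. This assumes the benchee and benchee_json packages and a benchee version that accepts `{formatter, options}` tuples; the module name and environment variable are assumptions:

```elixir
defmodule ElixirBench.BencheeRunner do
  # Runs a benchee suite and always writes JSON output to a standard path.
  def run(suite_name, jobs, opts \\ []) do
    output_dir = System.get_env("BENCHMARKS_OUTPUT_PATH") || "benchmarks/output"
    File.mkdir_p!(output_dir)

    defaults = [
      formatters: [
        Benchee.Formatters.Console,
        {Benchee.Formatters.JSON, file: Path.join(output_dir, "#{suite_name}.json")}
      ]
    ]

    Benchee.run(jobs, Keyword.merge(defaults, opts))
  end
end

# Usage in a project's benchmark file:
# ElixirBench.BencheeRunner.run("string_ops", %{
#   "upcase" => fn -> String.upcase("elixirbench") end
# })
```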