I want to make a less shitty, more comprehensive set of programming language benchmarks.
Obligatory disclaimer: microbenchmarks are not necessarily indicative of real-world performance. That said, I am thoroughly unimpressed with the Benchmarks Game and am convinced I could at least do what it does better.
Tests should primarily test the speed of code execution, and the efficiency of the garbage collector (if applicable).
Tests should be written in a fairly idiomatic fashion. Test code should be written with performance in mind, but extreme micro-optimization should be avoided. Examples of techniques to avoid:
- Inline assembly (this should be a no-brainer).
- Using CPU intrinsics.
- Targeting extreme subsets of the language that you know the JIT will handle better (à la asm.js).
Abusing libraries in test code is frowned upon. If the test is to do X and your language's standard library can do X, you can't just use the standard library function.
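A hypothetical illustration of the library rule, in Python: if the test is "sort a list", delegating to the built-in sort hands the measured work to the runtime's optimized C code, so the test has to implement the algorithm itself. The merge sort here is just one example of an acceptable hand-written implementation.

```python
def merge_sort(xs):
    """Plain merge sort -- the test implements the work itself."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

# Not allowed for a "sort" test:
# sorted(xs)  # this just measures the standard library, not the language
```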
Explicitly multi-threaded code is probably not allowed (for now).
Again: Tests should primarily test the speed of code execution, and the efficiency of the garbage collector (if applicable).
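As a sketch of what a GC-focused test could look like, here is a binary-trees-style kernel (the same idea the Benchmarks Game uses): build and walk many short-lived trees so that allocation and collection dominate the runtime. The depths and iteration count are illustrative, not tuned values.

```python
def make_tree(depth):
    """Allocate a complete binary tree of the given depth."""
    if depth == 0:
        return (None, None)
    return (make_tree(depth - 1), make_tree(depth - 1))

def check_tree(node):
    """Walk the tree and count its nodes, forcing a full traversal."""
    left, right = node
    if left is None:
        return 1
    return 1 + check_tree(left) + check_tree(right)

def run(max_depth=12, iterations=100):
    # Many short-lived trees -> sustained allocation pressure on the GC.
    total = 0
    for _ in range(iterations):
        total += check_tree(make_tree(max_depth))
    return total

if __name__ == "__main__":
    print(run())
```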
Tests will likely be re-used from other benchmarks. Code from these benchmarks may be re-used if the license permits, and if the code meets the above guidelines.
Rosetta Code could possibly be used as a resource.
- Null test: a "Hello World"-style program, used solely to measure the runtime's static memory overhead and startup cost.
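One way the startup-cost half of the null test could be measured: launch the hello-world program many times as a fresh process and average the wall-clock time, so the runtime's cold-start path is included on every run. The command and run count below are illustrative assumptions, and this sketch does not cover the static memory overhead half.

```python
import subprocess
import sys
import time

def startup_cost(cmd, runs=20):
    """Average wall-clock seconds to launch cmd and wait for it to exit."""
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
    return (time.perf_counter() - start) / runs

if __name__ == "__main__":
    # Example: time the current Python interpreter's own null test.
    secs = startup_cost([sys.executable, "-c", "print('Hello, World')"])
    print(f"avg startup: {secs * 1000:.1f} ms")
```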