We’ve all timed a Clojure function using Clojure’s `time` macro, and then wondered why we don’t get stable results. We’ve seen Rich show off order-of-magnitude speed differences in new versions of Clojure using the `time` macro. But what happens when you want reliable benchmarks that can track small changes in timings? `time` falls short.
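As a simple illustration of the problem (a hypothetical session, assuming an arbitrary expression to time):

```clojure
;; Timing the same expression twice with clojure.core/time
;; often produces quite different numbers.
(time (reduce + (range 1000000)))
(time (reduce + (range 1000000)))
;; The reported "Elapsed time" varies from run to run: JIT
;; compilation, GC pauses, and timer resolution all add noise.
```

A single `time` call measures one execution, so it cannot separate the cost of your code from the cost of whatever else the JVM happened to be doing at that moment.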
We’ll talk about the potential issues with micro-benchmarks, especially on the JVM, where JIT compilation and garbage collection can lead you astray, and show how Criterium gives you benchmarking that reliably avoids many of the pitfalls, and provides timing statistics (averages and percentiles) that you can trust.
Criterium is used by developers of many popular libraries to benchmark and optimise their projects, and even to track the performance of Clojure itself. Come find out why.
The talk will cover basic Criterium usage, and provide an explanation of the statistics that it calculates:
- mean
- percentiles
- reporting of outliers
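Basic usage is a single macro call (a minimal sketch; the expression being benchmarked is an arbitrary example):

```clojure
(require '[criterium.core :as crit])

;; quick-bench runs the expression repeatedly and prints a report
;; including the mean execution time, percentiles, and any
;; statistical outliers it detected in the samples.
(crit/quick-bench (reduce + (range 1000000)))
```

For more rigorous results, `crit/bench` runs a longer benchmark with a more thorough warm-up phase.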
It will detail some of the issues Criterium takes care of:
- warming up, to ensure JIT compilation has completed before measuring
- timer resolution
- collecting pending garbage after runs
- ensuring results are used, so expressions aren’t eliminated as dead code
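The last point is easy to trip over when rolling your own benchmark loop (a hedged sketch of the pitfall, not Criterium’s internals):

```clojure
;; A naive loop that discards each result: the JIT is free to
;; notice the value is never used and remove the computation,
;; making the loop appear impossibly fast.
(dotimes [_ 1000000]
  (Math/sqrt 42.0))

;; Criterium's bench macro consumes each result internally,
;; so the expression cannot be optimised away.
(crit/bench (Math/sqrt 42.0))
```

This is the same class of dead-code-elimination hazard that JVM benchmarking tools in general have to guard against.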