@hugoduncan
Created January 25, 2014 03:48

Criterium - reliable micro-benchmarks

abstract

We’ve all timed a Clojure function with Clojure’s `time` macro, and then wondered why we don’t get stable results. We’ve seen Rich show off order-of-magnitude speed differences in new versions of Clojure using the `time` macro. But what happens when you want reliable benchmarks that can track small changes in timing? `time` falls short.
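
For instance, here is a minimal before-and-after sketch; `quick-bench` is Criterium’s quicker, less thorough entry point:

```clojure
(require '[criterium.core :as crit])

;; clojure.core/time: a single unwarmed run; prints a different
;; number on almost every call
(time (reduce + (range 1000000)))

;; Criterium: warms up the JIT, samples many runs, and prints
;; statistics instead of one noisy number
(crit/quick-bench (reduce + (range 1000000)))
```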

We’ll talk about the potential issues with micro-benchmarks, especially on the JVM, where JIT compilation and garbage collection can lead you astray, and show how Criterium reliably avoids many of the pitfalls, providing timing statistics (means and percentiles) that you can trust.

Criterium is used by the developers of many popular libraries to benchmark and optimise their projects, and even to track the performance of Clojure itself. Come find out why.

overview

The talk will cover basic Criterium usage and explain the statistics it calculates (a sample report follows the list):

  • mean
  • percentiles
  • reporting of outliers
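
For a sense of what these statistics look like in practice, here is the shape of a Criterium report; the numbers below are invented for illustration:

```
Evaluation count : 6840 in 60 samples of 114 calls.
             Execution time mean : 8.809302 ms
    Execution time std-deviation : 0.080937 ms
   Execution time lower quantile : 8.712700 ms ( 2.5%)
   Execution time upper quantile : 8.987505 ms (97.5%)

Found 2 outliers in 60 samples (3.3333 %)
	low-severe	 2 (3.3333 %)
 Variance from outliers : 1.6389 % Variance is slightly inflated by outliers
```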

It will detail some of the issues Criterium takes care of (several of these are sketched after the list):

  • warming up, to ensure JIT compilation has completed before measuring
  • accounting for timer resolution
  • forcing collection of pending garbage between runs
  • ensuring results are used, so the JIT doesn’t eliminate expressions as dead code
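
To make the warm-up, garbage, and dead-code points concrete, here is a minimal sketch; this is not Criterium’s implementation, and the helper name `naive-warmed-time` is hypothetical:

```clojure
;; A naive sketch of three of the ideas above; Criterium's real
;; implementation is considerably more careful.
(defn naive-warmed-time
  "Warm `f` for roughly `warmup-ms`, then return the mean time in
  nanoseconds over `n` timed calls."
  [f warmup-ms n]
  (let [sink     (volatile! nil)
        deadline (+ (System/nanoTime) (* warmup-ms 1000000))]
    ;; warm-up: run until the deadline so the JIT has a chance to
    ;; compile the hot path before we start measuring
    (while (< (System/nanoTime) deadline)
      (vreset! sink (f)))
    ;; collect garbage created during warm-up so it isn't collected
    ;; in the middle of the timed runs
    (System/gc)
    (let [start (System/nanoTime)]
      (dotimes [_ n]
        ;; storing each result keeps the expression observable,
        ;; so it can't be removed as dead code
        (vreset! sink (f)))
      (/ (- (System/nanoTime) start) (double n)))))

;; e.g. (naive-warmed-time #(reduce + (range 1000000)) 10000 100)
```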