
@jamestalmage
Last active July 19, 2016 19:46
2016.01.26 - Current State of AVA performance

Full Disclaimer: I am a member of the AVA team

Between babel-plugin-espower#12 and ava#466, we are starting to see some pretty good performance compared to mocha.

I used emoji-aware for the benchmarks. It is a real-life test suite with 4,300+ tests, and a good candidate for benchmarking the efficiency of AVA's test runner because:

  1. All the tests are synchronous. AVA is at an incredible advantage when it comes to async tests. If your tests need to interact with the disk or network, then AVA is almost guaranteed to be faster - simply because it allows concurrent execution of tests.
  2. It has only one or two simple assertions per test. (The goal of this exercise was to measure test runner performance, not assertion libraries.)
  3. Multiple test files. This allows AVA to flex its process-forking muscle.
  4. Lots of tests per file. Even with the multi-process advantage, it is important the AVA runner is efficient (emoji-aware is the test suite that helped identify the problem fixed in #461).

Here are the results:

AVA (master as of Jan 27th 2016):

ava --verbose  4.83s user 0.54s system 396% cpu 1.353 total

Mocha (using Babel to instrument for power-assert):

Note: AVA provides power-assert support out of the box.

mocha  2.52s user 0.33s system 99% cpu 2.878 total

Mocha (no power-assert):

mocha  1.14s user 0.24s system 92% cpu 1.484 total

There is a performance penalty for power-assert. However, we are talking about a fraction of a millisecond per test (remember, this is a test suite with 4,300+ tests).
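The per-test figure is just arithmetic on the user-CPU-time numbers above (assuming the two mocha runs are comparable), not a separate measurement:

```javascript
// Back-of-the-envelope: per-test CPU cost of power-assert instrumentation.
const withPowerAssert = 2.52;    // seconds of user CPU, mocha + power-assert
const withoutPowerAssert = 1.14; // seconds of user CPU, plain mocha
const testCount = 4300;

const perTestMs = ((withPowerAssert - withoutPowerAssert) / testCount) * 1000;
console.log(perTestMs.toFixed(3)); // ≈ 0.321 ms per test
```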

The cost of forking:

You will note that, despite AVA being faster overall, Mocha uses far less processor time. I decided to figure out how much of that came from forking additional processes, so I created a test suite with 8 files (the same number of files as in the test suites above), each containing a single no-op test.

ava empty-tests/*.js  2.35s user 0.36s system 350% cpu 0.773 total

This seems to indicate a per-process penalty for forking. Comparing the "mocha with power-assert" benchmark to AVA, it seems the entire difference in CPU utilization can be explained by the overhead of forking additional processes. This is more about the size of AVA's dependency graph and require times than the performance of our code. To that end, I think #369 and reducing require time in general are going to be the place to look for improved performance. When AVA enables watch support, we should be able to mitigate most of these costs by pre-warming new processes before they are needed.
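A rough estimate of that per-process cost, again just arithmetic on the empty-suite run above (assuming the CPU time is dominated by process startup and requires, not the no-op tests):

```javascript
// Rough per-fork startup cost from the empty-suite benchmark.
const emptySuiteUserCpu = 2.35; // seconds of user CPU for 8 no-op test files
const processes = 8;

const perForkSec = emptySuiteUserCpu / processes;
console.log(perForkSec.toFixed(3)); // ≈ 0.294 s of CPU per forked process
```

At ~0.3 s of CPU per child process, pre-warming processes during watch mode would hide essentially all of this overhead.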

My takeaways:

  1. The AVA runner is pretty efficient, even for sync tests.
  2. Spawning multiple processes is a huge win for AVA, but we could improve things further by reducing require times.
  3. I am excited to get watch support working and pre-warm child processes while we wait for file changes. I think things will really scream at that point.