If you want to properly benchmark the speed of Ruby scripts, you have to compensate for several factors that affect execution time:
- Potential overhead of a Ruby version manager
- Startup time of the Ruby interpreter itself
- The system cache, which can affect Ruby interpreter execution on the first run
- Potential extra requires injected via RUBYOPT
- RubyGems load time
- Load time for individual gems at require time
For instance, a simple Ruby require that loads a gem can vary wildly in speed depending on how many system gems the user has installed on their computer.
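You can get a rough feel for the interpreter and RubyGems portions of that cost by timing them in isolation (the numbers are machine-specific, and minitest is just an example gem):

```sh
time ruby --disable-gems -e ''            # bare interpreter startup
time ruby -e ''                           # plus RubyGems startup
time ruby -e 'require "minitest/autorun"' # plus gem activation and load
```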
To compensate for the above factors:
- Use the absolute path to the ruby binary to bypass any version manager overhead.
- Measure bare ruby interpreter startup and subtract it from the other measurements.
- Prime the system cache by exercising the script prior to measurement.
- Reset RUBYOPT and RUBYLIB to known values.
- Avoid RubyGems altogether by passing --disable-gems to ruby.
- Because RubyGems is disabled, add the few libraries you need (fetched by Bundler) directly to the load path with -I, as shown in the command sketch after this list.
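Put together, a single controlled run looks something like this (the gem and script names are just examples; bundle show prints where Bundler installed a gem):

```sh
# One run with a known environment, no RubyGems, and the gem's lib
# directory (installed by Bundler) added to the load path directly.
env RUBYOPT= RUBYLIB= ruby --disable-gems \
  -I "$(bundle show minitest)/lib" minitest.rb
```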
The bench script here implements all of these solutions.
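A minimal sketch of such a script, assuming a fixed run count and a min-of-runs baseline (both arbitrary choices, not the original implementation), could look like this:

```ruby
#!/usr/bin/env ruby
# Sketch of a bench script, not the original: same ideas, arbitrary details.
# Usage: ruby bench.rb [extra ruby flags, e.g. -I path] script.rb
require "rbconfig"

abort "usage: bench.rb [ruby flags] script.rb" if ARGV.empty?

RUNS = 10                                    # arbitrary sample size
ruby = RbConfig.ruby                         # absolute interpreter path; no shims
env  = { "RUBYOPT" => "", "RUBYLIB" => "" }  # reset env to known values

measure = lambda do |*args|
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  system(env, ruby, "--disable-gems", *args, out: File::NULL, err: File::NULL) or
    abort "run failed: #{args.join(' ')}"
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

2.times { measure.call(*ARGV) }              # prime the system cache

# Bare interpreter startup, subtracted from every measurement below.
baseline = Array.new(RUNS) { measure.call("-e", "") }.min

times = Array.new(RUNS) { measure.call(*ARGV) - baseline }
mean  = times.inject(:+) / times.size
stdev = Math.sqrt(times.inject(0.0) { |s, t| s + (t - mean)**2 } / times.size)

printf "measuring %s\nmean: %.3f stdev: %.3f\n", ARGV.last, mean, stdev
```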
On my MBP with Ruby 2.1.5, the performance of Minitest vs. RSpec is:
```
measuring minitest.rb
mean: 0.084 stdev: 0.003
measuring rspec.rb
mean: 0.081 stdev: 0.004
```
RSpec is roughly 3ms faster, a completely negligible difference.
The absolute number is much more informative than a percentage here: a percentage speed gain means little when you don't know the absolute times behind it. In this case the gap is 3ms on runs of roughly 84ms, under 4% either way.
But it doesn't matter.
This benchmark is a poor comparison of these testing frameworks because it's essentially a Hello World program. The test scripts aren't representative of how these frameworks are actually used: in real test suites, a library's startup time rarely matters, while its speed at executing assertions and handling large test cases matters a great deal.
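A micro-benchmark closer to real usage would stress assertion throughput rather than boot time; for example, a trivial Minitest script along these lines (the assertion count is arbitrary):

```ruby
# bench_assertions.rb -- stresses assertion throughput, not startup.
require "minitest/autorun"

class BenchAssertions < Minitest::Test
  def test_many_assertions
    100_000.times { |i| assert_equal i, i }
  end
end
```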
In the end, though, it's most likely your own slow code that's dragging down the test suite. Not the test library.