Benchmark building a Ruby hash: #each - #each_with_object - #reduce - Hash[map] - #map.zip(map).to_h - #reduce-merge
require 'benchmark/ips'

Benchmark.ips do |x|
  Property = Struct.new(:name, :original_name)

  PROPERTIES = [
    Property.new("Clo",   "Chloe"),
    Property.new("Jon",   "Jonathan"),
    Property.new("Kris",  "Kristin"),
    Property.new("Dave",  "David"),
    Property.new("Ana",   "Anastasia"),
    Property.new("Mike",  "Michael"),
    Property.new("Becky", "Rebecca"),
    Property.new("Will",  "William")
  ]

  x.report('#each') do |times|
    i = 0
    while i < times
      out = {}
      PROPERTIES.each do |property|
        out[property.name] = property.original_name
      end
      out
      i += 1
    end
  end

  x.report('#each_with_object') do |times|
    i = 0
    while i < times
      PROPERTIES.each_with_object({}) do |property, memo|
        memo[property.name] = property.original_name
      end
      i += 1
    end
  end

  x.report('#reduce') do |times|
    i = 0
    while i < times
      PROPERTIES.reduce({}) do |memo, property|
        memo[property.name] = property.original_name
        memo
      end
      i += 1
    end
  end

  x.report('Hash[map]') do |times|
    i = 0
    while i < times
      Hash[PROPERTIES.map { |property| [property.name, property.original_name] }]
      i += 1
    end
  end

  x.report('#map.zip(map).to_h') do |times|
    i = 0
    while i < times
      PROPERTIES.map(&:name).zip(PROPERTIES.map(&:original_name)).to_h
      i += 1
    end
  end

  x.report('#reduce-merge') do |times|
    i = 0
    while i < times
      PROPERTIES.reduce({}) { |memo, property| memo.merge(property.name => property.original_name) }
      i += 1
    end
  end

  x.compare!
end
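One variant not measured above, but worth noting as a sketch: since Ruby 2.1, Array#to_h turns an array of [key, value] pairs straight into a hash, which reads like the Hash[map] approach without the Hash[] constructor. This assumes the PROPERTIES array from the gist and makes no claim about how it would rank in the benchmark.

# Minimal sketch, assuming PROPERTIES as defined above (Ruby >= 2.1).
# Same result as Hash[PROPERTIES.map { ... }], just spelled with Array#to_h:
by_name = PROPERTIES.map { |property| [property.name, property.original_name] }.to_h
# => {"Clo"=>"Chloe", "Jon"=>"Jonathan", ...}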
Surprised to see that "Hash[map]" is by far the fastest on JRuby 9k.
Calculating -------------------------------------
#each 20.365k i/100ms
#each_with_object 19.043k i/100ms
#reduce 16.627k i/100ms
Hash[map] 22.174k i/100ms
#map.zip(map).to_h 14.661k i/100ms
#reduce-merge 7.936k i/100ms
-------------------------------------------------
#each 302.582k (± 1.5%) i/s - 1.527M
#each_with_object 288.721k (± 2.5%) i/s - 1.447M
#reduce 236.231k (± 3.5%) i/s - 1.181M
Hash[map] 352.435k (± 2.7%) i/s - 1.774M
#map.zip(map).to_h 215.814k (± 2.4%) i/s - 1.085M
#reduce-merge 98.931k (± 5.2%) i/s - 499.968k
Comparison:
Hash[map]: 352435.2 i/s
#each: 302582.0 i/s - 1.16x slower
#each_with_object: 288721.1 i/s - 1.22x slower
#reduce: 236231.1 i/s - 1.49x slower
#map.zip(map).to_h: 215814.3 i/s - 1.63x slower
#reduce-merge: 98931.2 i/s - 3.56x slower
irb(main):153:0> JRUBY_VERSION
=> "9.0.4.0"
irb(main):154:0> ^D
$ java -version
openjdk version "1.8.0_66-internal"
OpenJDK Runtime Environment (build 1.8.0_66-internal-b17)
OpenJDK 64-Bit Server VM (build 25.66-b17, mixed mode)
Thanks for the JRuby version! Interesting! That's cool for Hash[map], because it's my favorite version along with the zip FP-style pipeline. Also interesting that in JRuby #reduce seems slower than #each_with_object, whereas in MRI they are close, and in MRI 2.3.0 they seem to be exactly the same.
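For anyone comparing the two forms: #reduce threads its accumulator through the block's return value, so the block has to end with memo, while #each_with_object always yields the same object and ignores the return value. The #reduce-merge outlier (3.56x slower above) is mostly down to Hash#merge allocating a fresh hash on every iteration. A minimal sketch of the distinction, assuming PROPERTIES from the gist; it says nothing about the JRuby-vs-MRI gap itself:

# reduce: the block's return value becomes the next memo, so `memo` must be
# the last expression in the block.
by_reduce = PROPERTIES.reduce({}) do |memo, property|
  memo[property.name] = property.original_name
  memo
end

# each_with_object: the same object is yielded every time and the block's
# return value is ignored, so no trailing `memo` is needed.
by_ewo = PROPERTIES.each_with_object({}) do |property, memo|
  memo[property.name] = property.original_name
end

# reduce + merge builds a brand-new hash per element (the slow case above);
# merge! mutates the memo in place and avoids that per-element allocation.
by_merge_bang = PROPERTIES.reduce({}) do |memo, property|
  memo.merge!(property.name => property.original_name)
end

by_reduce == by_ewo && by_ewo == by_merge_bang  # => true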
The test machine is an old-ish (bought in August 2012) Asus Zenbook with an i7-3517U CPU @ 1.90GHz and 10 GB of RAM.