Edit 2023-01-12: See comments! The test is flawed - it will never add more than 7 elements to the HashMap. I've left the original results here unedited.
Test results on an Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz:
$ cargo bench
Compiling bench_test v0.1.0 (file:///home/daboross/bench_test)
Finished release [optimized] target(s) in 2.99 secs
Running target/release/deps/bench_test-6b454ea06156805a
running 6 tests
test tests::bench_10_item_hash_map ... bench: 35 ns/iter (+/- 4)
test tests::bench_10_item_vec ... bench: 29 ns/iter (+/- 2)
test tests::bench_20_item_hash_map ... bench: 33 ns/iter (+/- 1)
test tests::bench_20_item_vec ... bench: 51 ns/iter (+/- 6)
test tests::bench_50_item_hash_map ... bench: 35 ns/iter (+/- 2)
test tests::bench_50_item_vec ... bench: 129 ns/iter (+/- 1)
test result: ok. 0 passed; 0 failed; 0 ignored; 6 measured
Conclusion: Vec is best when there are 15 or fewer items; HashMap is better when there are more than 15.
I just found this from Google while trying to answer a similar question, and I think this is not comparing what you think it is comparing. The way you are constructing the HashMaps, they will never have more than 7 elements, because keys in a HashMap are unique. Using
iter().cycle()
means you are continuously overwriting the values for those same 7 keys without ever increasing the actual size of the HashMap. You can see this by asserting on the map's length after construction, which fails.
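A minimal sketch of the problem, assuming the original create_data cycled over a fixed set of 7 keys (the key set and signature here are hypothetical reconstructions, not the original code):

```rust
use std::collections::HashMap;

// Hypothetical reconstruction of the flawed create_data: cycling over a
// fixed set of 7 keys means later inserts overwrite earlier ones.
fn create_data(n: usize) -> HashMap<&'static str, i32> {
    let keys = ["a", "b", "c", "d", "e", "f", "g"]; // 7 distinct keys (assumed)
    keys.iter()
        .cycle()           // repeats the same 7 keys forever
        .take(n)           // take n key occurrences, not n distinct keys
        .enumerate()
        .map(|(i, &k)| (k, i as i32))
        .collect()
}

fn main() {
    let map = create_data(50);
    // The map never grows past the number of distinct keys:
    assert_eq!(map.len(), 7);
    // An assertion like the following is what fails:
    // assert_eq!(map.len(), 50); // panics: left = 7, right = 50
    println!("len = {}", map.len());
}
```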
A better
create_data
function would draw its key values at random. To test lookups you then need to insert a known element after creating the object in each test, since there are no pre-determined keys.
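Such a create_data might look something like this. This is a sketch, not the original code: a small linear congruential generator stands in for the rand crate so the example is self-contained, and the key type and signature are assumptions.

```rust
use std::collections::HashMap;

// Sketch: build a HashMap with n distinct pseudo-random keys, so the map
// really holds n entries. A tiny LCG (Knuth's MMIX constants) stands in
// for the `rand` crate to keep this dependency-free.
fn create_data(n: usize) -> HashMap<u64, u64> {
    let mut state: u64 = 0x9E37_79B9_7F4A_7C15; // arbitrary seed
    let mut map = HashMap::with_capacity(n);
    while map.len() < n {
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        map.insert(state, state);
    }
    map
}

fn main() {
    let mut map = create_data(50);
    assert_eq!(map.len(), 50); // now the map really has 50 entries

    // For a lookup benchmark, insert one known key after construction,
    // since the randomly generated keys are not pre-determined:
    map.insert(0, 42);
    assert_eq!(map.get(&0), Some(&42));
}
```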
However, even with these changes, HashMap lookup performance doesn't change much, because HashMaps are generally optimized for lookups.