# Benchmark Timing Oddities

Anecdotally, I had noticed that the VectorTile benchmarks weren't returning consistent numbers from run to run. My three avenues of suspicion were:

  1. My VT implementation
  2. Scaliper
  3. The JVM

(1) was ruled out, because no IO was performed during the timed sections. In exploring how running successive benchmarks on the same JVM affected timings, I found the following:

## VectorTiles

All times here are in microseconds.

### Same JVM

| Alg | T1 | T2 | T3 | T4 | T5 | Mean | σ | σ % of Mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| To Tile | 360 | 336 | 347 | 355 | 347 | 349.0 | 8.2 | 2.3% |
| To VectorTile | 21073 | 20660 | 20303 | 19986 | 20344 | 20473.2 | 368.2 | 1.8% |
| From VectorTile | 6910 | 6753 | 6684 | 6950 | 6784 | 6816.2 | 99.2 | 1.5% |
| Polygon | 467 | 475 | 427 | 405 | 428 | 440.4 | 26.4 | 6% |

### Fresh JVM

The operations performed were:

    ./sbt -211
    project geotrellis-benchmark
    clean
    testOnly benchmark.geotrellis.vectortile.VectorTileBench

*% Slower* indicates how much slower, on average, each algorithm ran compared to its Same JVM time.

| Alg | T1 | T2 | T3 | T4 | T5 | Mean | σ | σ % of Mean | % Slower |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| To Tile | 356 | 348 | 342 | 357 | 347 | 350.0 | 5.7 | 1.6% | 0% |
| To VectorTile | 28040 | 21095 | 23430 | 23265 | 20967 | 23359.4 | 2560.2 | 11% | 14.1% |
| From VectorTile | 8058 | 7251 | 7995 | 7332 | 7511 | 7629.4 | 335.6 | 4.4% | 11.9% |
| Polygon | 452 | 442 | 578 | 427 | 520 | 483.8 | 56.9 | 11.8% | 9.9% |
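As a sanity check, the summary columns (Mean, σ, σ % of Mean, % Slower) can be reproduced from the raw samples. This is a sketch, not part of the benchmark harness; note the tables appear to use the *population* standard deviation (divide by n, not n−1), which is an assumption inferred from the numbers.

```java
// Sketch: reproduce the summary statistics for the "To Tile" rows.
public class BenchStats {
    static double mean(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s / xs.length;
    }

    // Population standard deviation (divides by n), which matches the
    // σ values in the tables above.
    static double sigma(double[] xs) {
        double m = mean(xs), ss = 0;
        for (double x : xs) ss += (x - m) * (x - m);
        return Math.sqrt(ss / xs.length);
    }

    public static void main(String[] args) {
        double[] sameJvm  = {360, 336, 347, 355, 347}; // "To Tile", Same JVM
        double[] freshJvm = {356, 348, 342, 357, 347}; // "To Tile", Fresh JVM
        double m1 = mean(sameJvm), s1 = sigma(sameJvm);
        double m2 = mean(freshJvm);
        System.out.printf("Mean=%.1f  sigma=%.1f  sigma%%=%.1f%%%n",
                          m1, s1, 100 * s1 / m1);          // 349.0, 8.2, 2.3%
        System.out.printf("%% Slower=%.1f%%%n", 100 * (m2 - m1) / m1);
    }
}
```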

## Convolution

All times here are in milliseconds.

### Same JVM

| Alg | T1 | T2 | T3 | T4 | T5 | Mean | σ | σ % of Mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Convolve SBN Tile | 2056 | 1992 | 2018 | 2048 | 2006 | 2024 | 24.43 | 1.2% |
| Convolve Byte Raster | 2199 | 2074 | 2113 | 2095 | 2084 | 2113 | 44.9 | 2.13% |

### Fresh JVM

| Alg | T1 | T2 | T3 | T4 | T5 | Mean | σ | σ % of Mean | % Slower |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Convolve SBN Tile | 2113 | 2068 | 2166 | 2212 | 2211 | 2154 | 56.27 | 2.6% | 6.4% |
| Convolve Byte Raster | 2283 | 2108 | 2265 | 2377 | 2179 | 2242 | 92 | 4.1% | 6.1% |

## Rendering

All times here are in milliseconds.

### Same JVM

| Alg | T1 | T2 | T3 | T4 | T5 | Mean | σ | σ % of Mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Rendering | 178.91 | 172.64 | 173.34 | 171.64 | 171.26 | 173.56 | 2.77 | 1.6% |
| PNG Encoding | 13.87 | 12.78 | 12.83 | 12.76 | 12.85 | 13.01 | 0.43 | 3.3% |
| Both | 191.74 | 185.55 | 185.37 | 184.62 | 184.59 | 186.37 | 2.71 | 1.45% |

### Fresh JVM

| Alg | T1 | T2 | T3 | T4 | T5 | Mean | σ | σ % of Mean | % Slower |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Rendering | 176.56 | 189.57 | 186.70 | 180.77 | 191.77 | 185.07 | 5.63 | 3% | 6.6% |
| PNG Encoding | 12.97 | 12.95 | 13.09 | 13.13 | 14.02 | 13.23 | 0.4 | 3% | 1.7% |
| Both | 197.47 | 189.10 | 198.25 | 202.40 | 193.04 | 196.05 | 4.57 | 2.3% | 5.2% |

## Observations

Benchmarks run from a fresh JVM ran slower on average. This can be attributed to the JIT compiler's optimizations being discarded every time the JVM restarts, so each fresh run pays the warm-up cost again.

Benchmarks from a fresh JVM also have more sporadic distributions (higher σ as a percentage of the mean). This is a bit counterintuitive: you'd expect the Fresh JVM benchmarks to all run uniformly slower than their JIT'd counterparts.
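The warm-up effect is easy to see in isolation. The sketch below (a hedged illustration, not the GeoTrellis/Scaliper harness) times the same workload repeatedly inside one JVM: early iterations run interpreted and during compilation, so they tend to be both slower and more variable than later, JIT-compiled ones.

```java
// Illustration of JVM warm-up: time an identical workload several times
// in the same JVM and watch the early runs lag.
public class WarmupDemo {
    // A small deterministic workload; the accumulator is returned so the
    // optimizer can't eliminate the loop as dead code.
    static long workload() {
        long acc = 0;
        for (int i = 0; i < 5_000_000; i++) acc += (long) i * i % 7;
        return acc;
    }

    public static void main(String[] args) {
        for (int run = 0; run < 10; run++) {
            long t0 = System.nanoTime();
            workload();
            long micros = (System.nanoTime() - t0) / 1_000;
            System.out.printf("run %d: %d microseconds%n", run, micros);
        }
        // Typically run 0 is the slowest, and later runs cluster tightly --
        // the gap the Same JVM vs. Fresh JVM tables above capture.
    }
}
```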
