Integrate JMH (the Java Microbenchmark Harness) with Spring (Boot) and make developing and running benchmarks as easy and convenient as writing tests.
The idea is to wrap the necessary JMH boilerplate code within JUnit to benefit from all the existing test infrastructure Spring (Boot) provides. Writing benchmarks should be as easy and convenient as writing tests.
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>${jmh.version}</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>${jmh.version}</version>
    <scope>provided</scope>
</dependency>
If you also use MapStruct, make sure it does not disable annotation processor auto-discovery within the maven-compiler-plugin! Once annotationProcessorPaths is declared (as the MapStruct documentation suggests), classpath-based processor discovery is turned off, so JMH's jmh-generator-annprocess must then be listed there explicitly as well, as shown below.
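For illustration, a sketch of a maven-compiler-plugin configuration that declares both processors explicitly. The MapStruct coordinates follow its usual documented setup; the version properties are assumptions, adjust them to your build.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <annotationProcessorPaths>
            <!-- once annotationProcessorPaths is present, classpath auto-discovery is off,
                 so every processor (including JMH's) must be listed here -->
            <path>
                <groupId>org.mapstruct</groupId>
                <artifactId>mapstruct-processor</artifactId>
                <version>${mapstruct.version}</version>
            </path>
            <path>
                <groupId>org.openjdk.jmh</groupId>
                <artifactId>jmh-generator-annprocess</artifactId>
                <version>${jmh.version}</version>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>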
package com.github.msievers.benchmark;
import org.junit.Test;
import org.openjdk.jmh.results.format.ResultFormatType;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;
public abstract class AbstractBenchmark {

    private static final int MEASUREMENT_ITERATIONS = 3;
    private static final int WARMUP_ITERATIONS = 3;

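    /**
     * Any benchmark, by extending this class, inherits this single @Test method for JUnit to run.
     */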
    @Test
    public void executeJmhRunner() throws RunnerException {
        Options opt = new OptionsBuilder()
                // set the class name regex for benchmarks to search for to the current class
                .include("\\." + this.getClass().getSimpleName() + "\\.")
                .warmupIterations(WARMUP_ITERATIONS)
                .measurementIterations(MEASUREMENT_ITERATIONS)
                // do not use forking or the benchmark methods will not see references stored within the benchmark class
                .forks(0)
                // do not use multiple threads
                .threads(1)
                .shouldDoGC(true)
                .shouldFailOnError(true)
                .resultFormat(ResultFormatType.JSON)
                .result("/dev/null") // set this to a valid filename if you want reports
                .jvmArgs("-server")
                .build();

        new Runner(opt).run();
    }
}
package com.github.msievers.benchmark;
import org.jooq.DSLContext;
import org.jooq.Record;
import org.junit.runner.RunWith;
import org.openjdk.jmh.annotations.*;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import java.util.concurrent.TimeUnit;
@SpringBootTest
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@RunWith(SpringRunner.class)
public class SomeBenchmark extends AbstractBenchmark {

    /**
     * The most important thing is to get Spring (autowired) components into the executing
     * benchmark instance. To accomplish this you can do the following:
     *
     * * set `forks` to 0 so the JMH runner runs the benchmarks within the same VM
     * * store all @Autowired dependencies into static fields of the surrounding class
     */
    private static DSLContext dsl;

    /**
     * We use setter autowiring to make Spring save an instance of `DSLContext` into a
     * static field accessible by the benchmark instances spawned through the JMH runner.
     *
     * @param dslContext the Spring-managed DSLContext
     */
    @Autowired
    void setDslContext(DSLContext dslContext) {
        SomeBenchmark.dsl = dslContext;
    }

    // this variable is set during the benchmark's setup and is accessible by the benchmark method
    Record someRecord;

    /*
     * There is no @Test annotated method in here, because the AbstractBenchmark
     * defines one, which spawns the JMH runner. This class only contains JMH/benchmark
     * related code.
     */
    @Setup(Level.Trial)
    public void setupBenchmark() {
        // seed the database with some (1000) random records
        new RecordsSeeder(dsl, 1000).seedRecords();

        // fetch one of these for the upcoming benchmarks to query for certain fields
        someRecord = dsl.selectFrom(RecordTable.RECORD_TABLE).fetchAny();

        // disable jOOQ's execution logging, or it will spam STDOUT *and* slow down the whole benchmark
        dsl.configuration().settings().setExecuteLogging(false);
    }

    @Benchmark
    public void someBenchmarkMethod() {
        // query the database
        dsl.selectFrom(RecordTable.RECORD_TABLE)
                .where(RecordTable.RECORD_TABLE.SOME_ID.eq(someRecord.get(RecordTable.RECORD_TABLE.SOME_ID)))
                .fetch();
    }
}
# Warmup Iteration 1: 151.791 us/op
# Warmup Iteration 2: 104.936 us/op
# Warmup Iteration 3: 104.679 us/op
Iteration 1: 121.414 us/op
Iteration 2: 104.400 us/op
Iteration 3: 102.117 us/op
Result "com.github.msievers.benchmark.SomeBenchmark.someBenchmarkMethod":
109.310 ±(99.9%) 192.370 us/op [Average]
(min, avg, max) = (102.117, 109.310, 121.414), stdev = 10.544
CI (99.9%): [≈ 0, 301.680] (assumes normal distribution)
# Run complete. Total time: 00:01:07
REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.
Benchmark Mode Cnt Score Error Units
SomeBenchmark.someBenchmarkMethod avgt 3 109.310 ± 192.370 us/op
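Note how wide the error bounds are: with only three measurement iterations the 99.9% confidence interval (109.310 ± 192.370 us/op) reaches from roughly zero to almost three times the average. Raise MEASUREMENT_ITERATIONS (and WARMUP_ITERATIONS) in AbstractBenchmark if you need numbers you can rely on.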
JMH's default paradigm of writing benchmarks is to create a fat jar which is then run and executes all the benchmarks it includes. This approach is not a good fit when it comes to developer experience within a typical Spring app. Developers are used to writing tests, and Spring provides a ton of support for this. So the idea is to reuse Spring's test infrastructure for benchmarks.
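For reference, a minimal sketch of the conventional, standalone JMH entry point. The class names StandaloneJmhMain and MyBenchmark are placeholders for illustration, not part of this guide's setup.

public class StandaloneJmhMain {

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                // select benchmarks by a class name regex, here the hypothetical MyBenchmark
                .include(MyBenchmark.class.getSimpleName())
                // the default paradigm: run each benchmark in freshly forked JVMs
                .forks(1)
                .build();

        new Runner(opt).run();
    }
}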
Let's assume a simple JUnit @SpringBootTest
@SpringBootTest
@RunWith(SpringRunner.class)
class SomeTest {
}
Let's add JMH to the mix.
@SpringBootTest
@RunWith(SpringRunner.class)
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
class SomeTest {

    @Benchmark
    public void someBenchmarkMethod() {
        // ...
    }
}
JMH provides an option to kick off the benchmark runner programmatically by giving it the classes to search for @Benchmark annotated methods, plus a couple of other parameters. For this to happen, let's

- use/hijack a @Test annotated method to start the runner
- give the current class name to the runner, so that only this class is picked up
@SpringBootTest
@RunWith(SpringRunner.class)
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
class SomeTest {

    @Test
    public void executeJmhRunner() throws RunnerException {
        Options jmhRunnerOptions = new OptionsBuilder()
                // set the class name regex for benchmarks to search for to the current class
                .include("\\." + this.getClass().getSimpleName() + "\\.")
                .warmupIterations(3)
                .measurementIterations(3)
                // do not use forking or the benchmark methods will not see references stored within the benchmark class
                .forks(0)
                // do not use multiple threads
                .threads(1)
                .shouldDoGC(true)
                .shouldFailOnError(true)
                .resultFormat(ResultFormatType.JSON)
                .result("/dev/null") // set this to a valid filename if you want reports
                .jvmArgs("-server")
                .build();

        new Runner(jmhRunnerOptions).run();
    }

    @Benchmark
    public void someBenchmarkMethod() {
        // ...
    }
}
This works, but the JMH runner boilerplate can be extracted into a common base class to avoid duplicating it in every benchmark.
AbstractBenchmark.java
public abstract class AbstractBenchmark {

    private static final int MEASUREMENT_ITERATIONS = 3;
    private static final int WARMUP_ITERATIONS = 3;

    /**
     * Any benchmark, by extending this class, inherits this single @Test method for JUnit to run.
     */
    @Test
    public void executeJmhRunner() throws RunnerException {
        Options jmhRunnerOptions = new OptionsBuilder()
                // set the class name regex for benchmarks to search for to the current class
                .include("\\." + this.getClass().getSimpleName() + "\\.")
                .warmupIterations(WARMUP_ITERATIONS)
                .measurementIterations(MEASUREMENT_ITERATIONS)
                // do not use forking or the benchmark methods will not see references stored within the benchmark class
                .forks(0)
                // do not use multiple threads
                .threads(1)
                .shouldDoGC(true)
                .shouldFailOnError(true)
                .resultFormat(ResultFormatType.JSON)
                .result("/dev/null") // set this to a valid filename if you want reports
                .jvmArgs("-server")
                .build();

        new Runner(jmhRunnerOptions).run();
    }
}
SomeBenchmark.java
@SpringBootTest
@RunWith(SpringRunner.class)
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
class SomeBenchmark extends AbstractBenchmark {

    @Benchmark
    public void someBenchmarkMethod() {
        // ...
    }
}
Great, but there is one major drawback so far: you cannot access @Autowired fields from within the @Benchmark annotated method. This is because of how the JMH runner runs the benchmarks. Internally, it spawns new tasks/threads, which leads to losing the @Autowired field values within the benchmarked method.
One solution for this is to

- store the @Autowired fields into static fields (e.g. by setter autowiring), AND
- disable forked benchmark execution for the JMH runner (see the .forks(0) option in AbstractBenchmark.java)
@SpringBootTest
@RunWith(SpringRunner.class)
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
class SomeBenchmark extends AbstractBenchmark {

    private static DSLContext dsl;

    @Autowired
    void setDslContext(DSLContext dslContext) {
        SomeBenchmark.dsl = dslContext;
    }

    @Benchmark
    public void someBenchmarkMethod() {
        // dsl is present, because it is a static field
        assert (dsl != null);

        // ...
    }
}
Et voilà: there is a class SomeBenchmark extending AbstractBenchmark which can be run from IntelliJ like any other unit test (just run the whole class, not any of the @Benchmark methods), with all the nice setup provided by @SpringBootTest, and which gives you a nice report about the performance of your benchmarked code.
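If you prefer the command line, the benchmark also runs through the regular test phase. Assuming a default Maven Surefire setup (an assumption, adjust to your build), something like mvn test -Dtest=SomeBenchmark should pick it up.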
# Warmup Iteration 1: 151.791 us/op
# Warmup Iteration 2: 104.936 us/op
# Warmup Iteration 3: 104.679 us/op
Iteration 1: 121.414 us/op
Iteration 2: 104.400 us/op
Iteration 3: 102.117 us/op
Result "com.github.msievers.benchmark.SomeBenchmark.someBenchmarkMethod":
109.310 ±(99.9%) 192.370 us/op [Average]
(min, avg, max) = (102.117, 109.310, 121.414), stdev = 10.544
CI (99.9%): [≈ 0, 301.680] (assumes normal distribution)
# Run complete. Total time: 00:01:07
REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.
Benchmark Mode Cnt Score Error Units
SomeBenchmark.someBenchmarkMethod avgt 3 109.310 ± 192.370 us/op
I'm trying to follow this awesome guide which is the best resource I've found so far on Spring and JMH. I've done all these steps; but it doesn't seem to be running the
executeJmhRunner
test. It's just ignored. I am getting issues around overlapping classes from shade, and wondering if that could stop it trying to run it at all? All my other tests run just fine; only this one is ignored.Any advice you can give is really appreciated.