[Spring, JMH] How to integrate JMH benchmarks with Spring

Motivation

Integrate JMH (the Java Microbenchmark Harness) with Spring (Boot) and make developing and running benchmarks as easy and convenient as writing tests.

Idea

Wrap the necessary JMH boilerplate code in JUnit to benefit from all the existing test infrastructure Spring (Boot) provides. Writing a benchmark should be as easy and convenient as writing a test.

TL;DR

pom.xml

<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>${jmh.version}</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>${jmh.version}</version>
    <scope>provided</scope>
</dependency>
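
(The ${jmh.version} placeholder is a regular Maven property; it is assumed you define it yourself, e.g. in the pom's <properties> section, pointing at the JMH release you want to use.)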

If you also use MapStruct, make sure its build setup does not disable annotation processor auto-discovery in the maven-compiler-plugin (for example via an explicit <annotationProcessorPaths> configuration that lists only the MapStruct processor and omits jmh-generator-annprocess)!

test/java/com/github/msievers/benchmark/AbstractBenchmark.java

package com.github.msievers.benchmark;

import org.junit.Test;
import org.openjdk.jmh.results.format.ResultFormatType;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public abstract class AbstractBenchmark {

    private static final int MEASUREMENT_ITERATIONS = 3;
    private static final int WARMUP_ITERATIONS = 3;

    @Test
    public void executeJmhRunner() throws RunnerException {
        Options opt = new OptionsBuilder()
            // set the class name regex for benchmarks to search for to the current class 
            .include("\\." + this.getClass().getSimpleName() + "\\.")
            .warmupIterations(WARMUP_ITERATIONS)
            .measurementIterations(MEASUREMENT_ITERATIONS)
            // do not use forking or the benchmark methods will not see references stored within its class
            .forks(0)
            // do not use multiple threads
            .threads(1)
            .shouldDoGC(true)
            .shouldFailOnError(true)
            .resultFormat(ResultFormatType.JSON)
            .result("/dev/null") // set this to a valid filename if you want reports
            .jvmArgs("-server")
            .build();

        new Runner(opt).run();
    }
}

test/java/com/github/msievers/benchmark/SomeBenchmark.java

package com.github.msievers.benchmark;

import org.jooq.DSLContext;
import org.junit.runner.RunWith;
import org.openjdk.jmh.annotations.*;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.concurrent.TimeUnit;

@SpringBootTest
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@RunWith(SpringRunner.class)
public class SomeBenchmark extends AbstractBenchmark {

    /**
     * The most important thing is to get Spring (autowired) components into the executing
     * benchmark instance. To accomplish this you can do the following
     * 
     *  * set `forks` to 0 for the JMH runner to run the benchmarks within the same VM
     *  * store all @Autowired dependencies into static fields of the surrounding class
     * 
     */
    private static DSLContext dsl;

    /**
     * We use setter autowiring to make Spring save an instance of `DSLContext` into a
     * static field accessible by the benchmarks spawned through the JMH runner.
     *
     * @param dslContext
     */
    @Autowired
    void setDslContext(DSLContext dslContext) {
        SomeBenchmark.dsl = dslContext;
    }

    // this field is set during the benchmark setup and is accessible by the benchmark method
    // (Record, RecordTable and RecordsSeeder are assumed to be project-specific jOOQ/helper classes)
    Record someRecord;

    /*
     * There is no @Test annotated method in this class, because AbstractBenchmark
     * defines one which spawns the JMH runner. This class only contains JMH/Benchmark
     * related code.
     */

    @Setup(Level.Trial)
    public void setupBenchmark() {
        // Seed the database with some (1000) random records
        new RecordsSeeder(dsl, 1000).seedRecords();
        
        // fetch one of these for the upcoming benchmarks to query for certain fields
        someRecord = dsl.selectFrom(RecordTable.RECORD_TABLE).fetchAny();

        // disable jOOQ's execute logging, or it will spam STDOUT *AND* slow down the whole benchmark
        dsl.configuration().settings().setExecuteLogging(false);
    }

    @Benchmark
    public void someBenchmarkMethod() {
        // query the database
        dsl.selectFrom(RecordTable.RECORD_TABLE).where(RecordTable.RECORD_TABLE.SOME_ID.eq(someRecord.getSomeId())).fetch();
    }
}
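
Running SomeBenchmark like any other JUnit test class (from the IDE, or presumably via mvn test -Dtest=SomeBenchmark, assuming standard Surefire test filtering) produces a report along the lines of the one below.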

Report

# Warmup Iteration   1: 151.791 us/op
# Warmup Iteration   2: 104.936 us/op
# Warmup Iteration   3: 104.679 us/op
Iteration   1: 121.414 us/op
Iteration   2: 104.400 us/op
Iteration   3: 102.117 us/op

Result "com.github.msievers.benchmark.SomeBenchmark.someBenchmarkMethod":
  109.310 ±(99.9%) 192.370 us/op [Average]
  (min, avg, max) = (102.117, 109.310, 121.414), stdev = 10.544
  CI (99.9%): [≈ 0, 301.680] (assumes normal distribution)


# Run complete. Total time: 00:01:07

REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark                          Mode  Cnt    Score     Error  Units
SomeBenchmark.someBenchmarkMethod  avgt    3  109.310 ± 192.370  us/op

Details

JMH's default paradigm is to build a fat jar which is then run and executes all the benchmarks it contains. This approach is not a good fit for the developer experience within a typical Spring app. Developers are used to writing tests, and Spring provides a ton of support for this. So the idea is to reuse Spring's test infrastructure for benchmarks.

Let's assume a simple JUnit @SpringBootTest

@SpringBootTest
@RunWith(SpringRunner.class)
class SomeTest {
}

Let's add JMH to the game.

@SpringBootTest
@RunWith(SpringRunner.class)
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
class SomeTest {

    @Benchmark
    public void someBenchmarkMethod() {
        // ...
    }
}

JMH provides an option to kick off the benchmark runner programmatically by giving it the classes to search for @Benchmark annotated methods, plus a couple of other parameters. For this to happen, let's

  • use/hijack a @Test annotated method for the runner to start
  • give the current class name to the runner so it's picked up (exclusively)

@SpringBootTest
@RunWith(SpringRunner.class)
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
class SomeTest {

    @Test
    public void executeJmhRunner() {
        Options jmhRunnerOptions = new OptionsBuilder()
            // set the class name regex for benchmarks to search for to the current class
            .include("\\." + this.getClass().getSimpleName() + "\\.")
            .warmupIterations(3)
            .measurementIterations(3)
            // do not use forking or the benchmark methods will not see references stored within its class
            .forks(0)
            // do not use multiple threads
            .threads(1)
            .shouldDoGC(true)
            .shouldFailOnError(true)
            .resultFormat(ResultFormatType.JSON)
            .result("/dev/null") // set this to a valid filename if you want reports
            .jvmArgs("-server")
            .build();
            
        new Runner(jmhRunnerOptions).run();
    }
    
    @Benchmark
    public void someBenchmarkMethod() {
        // ...
    }
}

This works, but the JMH runner boilerplate can be extracted into a base class so it does not have to be duplicated in every benchmark.

AbstractBenchmark.java

public abstract class AbstractBenchmark {

    private static final int MEASUREMENT_ITERATIONS = 3;
    private static final int WARMUP_ITERATIONS = 3;

    /**
     * Any benchmark, by extending this class, inherits this single @Test method for JUnit to run.
     */
    @Test
    public void executeJmhRunner() throws RunnerException {
        Options jmhRunnerOptions = new OptionsBuilder()
            // set the class name regex for benchmarks to search for to the current class
            .include("\\." + this.getClass().getSimpleName() + "\\.")
            .warmupIterations(WARMUP_ITERATIONS)
            .measurementIterations(MEASUREMENT_ITERATIONS)
            // do not use forking or the benchmark methods will not see references stored within its class
            .forks(0)
            // do not use multiple threads
            .threads(1)
            .shouldDoGC(true)
            .shouldFailOnError(true)
            .resultFormat(ResultFormatType.JSON)
            .result("/dev/null") // set this to a valid filename if you want reports
            .jvmArgs("-server")
            .build();

        new Runner(jmhRunnerOptions).run();
    }
}

SomeBenchmark.java

@SpringBootTest
@RunWith(SpringRunner.class)
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
class SomeBenchmark extends AbstractBenchmark {

    @Benchmark
    public void someBenchmarkMethod() {
        // ...
    }
}

Great, but there is one major drawback so far: you cannot access @Autowired fields from within the @Benchmark annotated methods. This is because of how the JMH runner executes the benchmarks. Internally, it spawns new tasks/threads on its own instances of the benchmark class, which means the @Autowired field values are lost inside the benchmarked method.

One solution for this is to

  • store the @Autowired fields into static fields (e.g. by setter autowiring) AND
  • disable forking for the JMH runner (see the .forks(0) option in AbstractBenchmark.java)

@SpringBootTest
@RunWith(SpringRunner.class)
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
class SomeBenchmark extends AbstractBenchmark {

    private static DSLContext dsl;

    @Autowired
    void setDslContext(DSLContext dslContext) {
        SomeBenchmark.dsl = dslContext;
    }

    @Benchmark
    public void someBenchmarkMethod() {

        // dsl is present, because it is a static field
        assert(dsl != null);

        // ...
    }
}

Et voilà: we now have a class SomeBenchmark extending AbstractBenchmark which can be run from IntelliJ like any other unit test (run the whole class, not one of the @Benchmark methods), with all the nice setup provided by @SpringBootTest, and which gives you a nice report about the performance of your benchmarked code.

# Warmup Iteration   1: 151.791 us/op
# Warmup Iteration   2: 104.936 us/op
# Warmup Iteration   3: 104.679 us/op
Iteration   1: 121.414 us/op
Iteration   2: 104.400 us/op
Iteration   3: 102.117 us/op

Result "com.github.msievers.benchmark.SomeBenchmark.someBenchmarkMethod":
  109.310 ±(99.9%) 192.370 us/op [Average]
  (min, avg, max) = (102.117, 109.310, 121.414), stdev = 10.544
  CI (99.9%): [≈ 0, 301.680] (assumes normal distribution)


# Run complete. Total time: 00:01:07

REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark                          Mode  Cnt    Score     Error  Units
SomeBenchmark.someBenchmarkMethod  avgt    3  109.310 ± 192.370  us/op

Resources (JMH)

Resources (Why mapstruct may disable annotation processor autodiscovery)

@shederman

shederman commented Apr 28, 2019

I'm trying to follow this awesome guide, which is the best resource I've found so far on Spring and JMH. I've done all these steps, but it doesn't seem to run the executeJmhRunner test; it's just ignored. I am getting issues around overlapping classes from shade, and I'm wondering if that could stop it from running at all? All my other tests run just fine; only this one is ignored.

Any advice you can give is really appreciated.

[INFO] Including org.openjdk.jmh:jmh-generator-annprocess:jar:1.19 in the shaded jar.
[WARNING] jaxb-api-2.3.0.jar, log4j-api-2.10.0.jar define 1 overlapping classes: 
[WARNING]   - module-info
[WARNING] jcl-over-slf4j-1.7.25.jar, commons-logging-1.1.3.jar define 2 overlapping classes: 
[WARNING]   - org.apache.commons.logging.impl.SimpleLog$1
[WARNING]   - org.apache.commons.logging.LogConfigurationException
[WARNING] javax.ws.rs-api-2.1.jar, jersey-core-1.13.jar define 54 overlapping classes: 
[WARNING]   - javax.ws.rs.core.HttpHeaders
[WARNING]   - javax.ws.rs.ext.RuntimeDelegate$HeaderDelegate
[WARNING]   - javax.ws.rs.DefaultValue
[WARNING]   - javax.ws.rs.core.StreamingOutput
[WARNING]   - javax.ws.rs.HEAD
[WARNING]   - javax.ws.rs.core.Request
[WARNING]   - javax.ws.rs.ext.Providers
[WARNING]   - javax.ws.rs.core.NewCookie
[WARNING]   - javax.ws.rs.core.UriBuilderException
[WARNING]   - javax.ws.rs.ext.ContextResolver
[WARNING]   - 44 more...
[WARNING] spring-jcl-5.0.9.RELEASE.jar, commons-logging-1.1.3.jar define 2 overlapping classes: 
[WARNING]   - org.apache.commons.logging.LogFactory$2
[WARNING]   - org.apache.commons.logging.LogFactory$1
[WARNING] spring-jcl-5.0.9.RELEASE.jar, jcl-over-slf4j-1.7.25.jar, commons-logging-1.1.3.jar define 4 overlapping classes: 
[WARNING]   - org.apache.commons.logging.Log
[WARNING]   - org.apache.commons.logging.impl.SimpleLog
[WARNING]   - org.apache.commons.logging.impl.NoOpLog
[WARNING]   - org.apache.commons.logging.LogFactory
[WARNING] tomcat-embed-core-8.5.34.jar, javax.servlet-api-3.1.0.jar define 79 overlapping classes: 
[WARNING]   - javax.servlet.http.Cookie
[WARNING]   - javax.servlet.ServletContext
[WARNING]   - javax.servlet.Registration
[WARNING]   - javax.servlet.http.HttpSessionListener
[WARNING]   - javax.servlet.http.HttpSessionContext
[WARNING]   - javax.servlet.FilterChain
[WARNING]   - javax.servlet.http.WebConnection
[WARNING]   - javax.servlet.http.HttpServletRequestWrapper
[WARNING]   - javax.servlet.http.HttpSessionAttributeListener
[WARNING]   - javax.servlet.annotation.HandlesTypes
[WARNING]   - 69 more...
[WARNING] maven-shade-plugin has detected that some class files are
[WARNING] present in two or more JARs. When this happens, only one
[WARNING] single version of the class is copied to the uber jar.
[WARNING] Usually this is not harmful and you can skip these warnings,
[WARNING] otherwise try to manually exclude artifacts based on
[WARNING] mvn dependency:tree -Ddetail=true and the above output.
[WARNING] See http://maven.apache.org/plugins/maven-shade-plugin/

@Schreigurke

Thanks for this guide!!!

@uo265563

Hi, I think I have followed the steps correctly but I keep getting this error:
[screenshot of the error attached]
Do you know what causes that?

@msievers
Author

Sorry, @uo265563, currently I have very little time so I'm afraid I cannot help you out with this one.

@marksilcox

@uo265563 adding testAnnotationProcessor 'org.openjdk.jmh:jmh-generator-annprocess:1.35' to the Gradle dependencies fixed this for me

@vedrancu

Thanks for the guide, and thanks @marksilcox for the fix.

Is it possible to use the JMH Gradle plugin and run JMH while benefiting from JUnit and Mockito at the same time?

@Berkeaerospike

Thanks for the guide.
While running JMH and Spring with autowiring, I could not use forks > 0.
Instead I did:

@Setup
public void setup() {
    this.context = new SpringApplication(SimpleDemoApplication.class).run();
    userService = this.context.getBean(UserService.class); //UserService
}  

This way I wire any service I need.
Now I can set as many forks as I want.
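
A fuller sketch of this approach might look like the following; SimpleDemoApplication, UserService and its doSomething method are hypothetical placeholders, and a @TearDown is added to close the context after the trial. Since the Spring context is booted inside the benchmark JVM itself, the runner options may use forks greater than zero here, unlike the forks(0) required by the static-field approach above.

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.springframework.boot.SpringApplication;
import org.springframework.context.ConfigurableApplicationContext;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class ForkedSpringBenchmark {

    private ConfigurableApplicationContext context;

    // UserService (and SimpleDemoApplication below) are hypothetical application classes
    private UserService userService;

    @Setup(Level.Trial)
    public void setup() {
        // boot the Spring context inside the (possibly forked) benchmark JVM,
        // so neither static fields nor forks(0) are required
        context = new SpringApplication(SimpleDemoApplication.class).run();
        userService = context.getBean(UserService.class);
    }

    @TearDown(Level.Trial)
    public void tearDown() {
        // shut the context down cleanly at the end of the trial
        context.close();
    }

    @Benchmark
    public void someBenchmarkMethod() {
        userService.doSomething();
    }
}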
