@invasionofsmallcubes
Last active June 16, 2016 11:32
Going reactive, one day...

Here's what's in my head: having everything reactive is actually a utopia.

Every one of us needs to talk to external providers (or just suppose we don't have a reactive driver for MySQL/MongoDB or whatever).

So I said: I want to achieve the minimum goal, which is keeping the same number of requests per second. I tried with just CompletableFuture, because I know we have AsyncRestTemplate that we can use with DeferredResult.

Given the same number of threads for:

  • ExecutorService on the nonblocking example
  • server.tomcat.max-threads=500 on the blocking example
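The offloading idea can be shown outside of Spring with plain JDK classes — a minimal sketch where `blockingGetAll` is a hypothetical stand-in for the slow repository call:

```kotlin
import java.util.concurrent.CompletableFuture
import java.util.concurrent.Executors
import java.util.function.Supplier

fun main() {
    // Hypothetical stand-in for the blocking repository call
    fun blockingGetAll(identifier: String): List<String> {
        Thread.sleep(100) // simulate slow I/O
        return listOf("quote-$identifier")
    }

    // Dedicated pool, mirroring the 500-thread setup in the test
    val es = Executors.newFixedThreadPool(500)

    // Offload the blocking call so the caller's thread is released immediately
    val future = CompletableFuture.supplyAsync(
        Supplier { blockingGetAll("1") }, es)

    println(future.join()) // prints [quote-1]
    es.shutdown()
}
```

This is exactly the trade being measured below: the request thread is freed, but the total thread count (and the blocking work) is unchanged.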

With the same Gatling test I got these results:

NORMAL

================================================================================
---- Global Information --------------------------------------------------------
> request count                                        500 (OK=500    KO=0     )
> min response time                                   6574 (OK=6574   KO=-     )
> max response time                                  14832 (OK=14832  KO=-     )
> mean response time                                  9861 (OK=9861   KO=-     )
> std deviation                                       2694 (OK=2694   KO=-     )
> response time 50th percentile                      10058 (OK=10058  KO=-     )
> response time 75th percentile                      10272 (OK=10272  KO=-     )
> response time 95th percentile                      14615 (OK=14615  KO=-     )
> response time 99th percentile                      14650 (OK=14650  KO=-     )
> mean requests/sec                                  31.25 (OK=31.25  KO=-     )
---- Response Time Distribution ------------------------------------------------
> t < 800 ms                                             0 (  0%)
> 800 ms < t < 1200 ms                                   0 (  0%)
> t > 1200 ms                                          500 (100%)
> failed                                                 0 (  0%)
================================================================================

COMPLETABLE FUTURE

================================================================================
---- Global Information --------------------------------------------------------
> request count                                        500 (OK=500    KO=0     )
> min response time                                   7016 (OK=7016   KO=-     )
> max response time                                  15416 (OK=15416  KO=-     )
> mean response time                                 10438 (OK=10438  KO=-     )
> std deviation                                       2772 (OK=2772   KO=-     )
> response time 50th percentile                      10485 (OK=10485  KO=-     )
> response time 75th percentile                      10868 (OK=10868  KO=-     )
> response time 95th percentile                      15357 (OK=15357  KO=-     )
> response time 99th percentile                      15398 (OK=15398  KO=-     )
> mean requests/sec                                  31.25 (OK=31.25  KO=-     )
---- Response Time Distribution ------------------------------------------------
> t < 800 ms                                             0 (  0%)
> 800 ms < t < 1200 ms                                   0 (  0%)
> t > 1200 ms                                          500 (100%)
> failed                                                 0 (  0%)
================================================================================

As you can see, I reached the same number of requests per second. The nonblocking example shows slightly higher timings, but that's expected because of the overhead of the CompletableFuture + DeferredResult (I guess?).

So my question is: given the constraint of exposing the "blocking service" via a REST resource, is there a way to use Flux over HTTP in some way?
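One possible direction (just a sketch, assuming Reactor 3 is on the classpath; `blockingGetAll` is a hypothetical stand-in for `quoteRepository.getAll`): wrap the blocking call in a Flux and shift subscription onto an elastic scheduler, so the blocking work never runs on the event loop. Spring's reactive web support can then serve the returned Flux over HTTP.

```kotlin
import reactor.core.publisher.Flux
import reactor.core.scheduler.Schedulers

// Hypothetical stand-in for the blocking repository call
fun blockingGetAll(identifier: String): List<String> = listOf("quote-$identifier")

// Defer so nothing runs until subscription, then subscribe on an
// elastic scheduler so the blocking call happens on a worker thread
fun quotes(identifier: String): Flux<String> =
    Flux.defer { Flux.fromIterable(blockingGetAll(identifier)) }
        .subscribeOn(Schedulers.boundedElastic())
```

Note this doesn't make the driver reactive — it only moves the blocking call off the request thread, much like the CompletableFuture version, so the throughput ceiling set by the underlying pool stays the same.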

import java.util.concurrent.CompletableFuture
import java.util.concurrent.Executors
import java.util.function.Supplier
import org.slf4j.LoggerFactory
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.web.bind.annotation.PathVariable
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RestController
import org.springframework.web.context.request.async.DeferredResult

@RestController
class ReactiveEndpoint @Autowired constructor(val quoteRepository: QuoteRepository) {

    // Same size as server.tomcat.max-threads in the blocking setup
    val es = Executors.newFixedThreadPool(500)
    val logger = LoggerFactory.getLogger("ReactiveEndpoint")

    @RequestMapping("/nonblocking/{identifier}")
    fun getNonBlocking(@PathVariable identifier: String): DeferredResult<List<Quote>> {
        logger.info("Starting reactive call")
        val dr = DeferredResult<List<Quote>>()
        // Offload the blocking repository call, complete the DeferredResult later
        CompletableFuture.supplyAsync(
                Supplier<List<Quote>> { quoteRepository.getAll(identifier) }, es)
            .whenComplete { list, throwable -> dr.setResult(list) }
        logger.info("Finishing reactive call")
        return dr
    }

    @RequestMapping("/blocking/{identifier}")
    fun getBlocking(@PathVariable identifier: String): List<Quote> {
        logger.info("Starting blocking call")
        val result = quoteRepository.getAll(identifier)
        logger.info("Finishing blocking call")
        return result
    }
}
package default

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import io.gatling.jdbc.Predef._

class RecordedSimulation extends Simulation {

  val httpProtocol = http
    .baseURL("http://localhost:8080")
    .inferHtmlResources()
    .acceptHeader("text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8")
    .acceptEncodingHeader("gzip, deflate, sdch")
    .acceptLanguageHeader("en-US,en;q=0.8,it;q=0.6,es;q=0.4")
    .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36")

  val headers_0 = Map("Upgrade-Insecure-Requests" -> "1")

  val uri1 = "http://localhost:8080/blocking/1"

  val scn = scenario("RecordedSimulation")
    .exec(http("request_0")
      .get("/blocking/1")
      .headers(headers_0))

  setUp(scn.inject(atOnceUsers(500))).protocols(httpProtocol)
}