Running the microbenchmark produces:

```
$ npm run benchmark

> @ benchmark .../promise-forwarding-microbenchmark
> ts-node index.ts

withForward x 436,408 ops/sec ±0.67% (76 runs sampled)
withResolve x 392,169 ops/sec ±0.48% (83 runs sampled)
```
The comparison is between a call chain that `await`s all Promises before returning:
```typescript
async function threeArgs(x: number, y: number, z: number): Promise<number> {
  return x + y + z;
}

async function twoArgsResolve(x: number, y: number): Promise<number> {
  const value = await threeArgs(x, y, 42.0);
  return value;
}

async function oneArgResolve(x: number): Promise<number> {
  const value = await twoArgsResolve(x, 7.5);
  return value;
}
```
and one where the Promise is just forwarded along:
```typescript
async function threeArgs(x: number, y: number, z: number): Promise<number> {
  return x + y + z;
}

async function twoArgsForward(x: number, y: number): Promise<number> {
  return threeArgs(x, y, 42.0);
}

async function oneArgForward(x: number): Promise<number> {
  return twoArgsForward(x, 7.5);
}
```
I had to dig a little bit to figure out how to test async functions with Benchmark.js. Because of that, I was worried that the `{ defer: true, fn: wrap(fn1) }` approach wasn't working as expected.
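For context, here is a minimal sketch of what a `wrap` helper for deferred benchmarks can look like. This is my reconstruction, not the post's exact code; it assumes Benchmark.js's deferred API, where `fn` receives a deferred object whose `resolve()` method ends the timed iteration:

```typescript
// Stand-in for the deferred object Benchmark.js passes to `fn`
// when the benchmark is registered with `defer: true`.
interface Deferred {
  resolve(): void;
}

// Adapt an async function to Benchmark.js's deferred calling convention:
// run the function, and signal completion once its Promise settles.
function wrap(fn: () => Promise<unknown>): (deferred: Deferred) => void {
  return (deferred: Deferred) => {
    fn().then(() => deferred.resolve());
  };
}

// Registration would then look roughly like:
// suite.add('withForward', { defer: true, fn: wrap(() => oneArgForward(1)) });
```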
So I added a sanity check to make sure that an async function that sleeps 50ms has twice the throughput of a function that sleeps 100ms:
```
$ npm run sanity-check

> @ sanity-check .../promise-forwarding-microbenchmark
> SANITY_CHECK=true ts-node index.ts

sleep50 x 19.03 ops/sec ±0.72% (68 runs sampled)
sleep100 x 9.77 ops/sec ±0.47% (49 runs sampled)
```
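The sleeping functions themselves can be built on the usual `setTimeout`-based helper; the post doesn't show their implementation, so this is a sketch under that assumption:

```typescript
// Standard Promise wrapper around setTimeout (a reconstruction,
// not the post's exact code).
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// The two sanity-check subjects: each iteration is dominated by the timer.
async function sleep50(): Promise<void> {
  await sleep(50);
}

async function sleep100(): Promise<void> {
  await sleep(100);
}
```

Since each iteration is dominated by the timer, `sleep50` should run at roughly 1000/50 = 20 ops/sec and `sleep100` at roughly 10, which matches the measured 19.03 and 9.77.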