Benchmarking AsyncLocalStorage

What is the overhead of AsyncLocalStorage?

Run the benchmark:

$ npm i benchmark

# Run the AsyncLocalStorage benchmark
$ node async-local-storage.js ASL
ASL x 15,551 ops/sec ±3.30% (79 runs sampled)

# Run the witness benchmark
$ node async-local-storage.js Witness
Witness x 21,352 ops/sec ±1.27% (87 runs sampled)

There is a ~25% performance drop when using AsyncLocalStorage.

Why you shouldn't use these results to make a decision

Micro-benchmarking is very difficult to do well, and it is poorly representative of real-world situations.

If you want to know whether you can afford to use AsyncLocalStorage, you should try it inside your application and run a benchmark before and after.

For example, inside Kuzzle the usage of AsyncLocalStorage resulted in only an 8% performance drop.
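
As an illustration of what "trying it inside your application" can look like, here is a minimal sketch (not taken from Kuzzle; the server, the requestId key and the handler are illustrative assumptions) of propagating a per-request context with AsyncLocalStorage in a plain Node.js HTTP server:

const http = require('http');
const { AsyncLocalStorage } = require('async_hooks');

const requestContext = new AsyncLocalStorage();
let nextId = 0;

// Any function called during the request can read the store
// without the request object being passed down explicitly.
async function handler(res) {
  // the store survives async boundaries such as this one
  await new Promise(resolve => setImmediate(resolve));
  const requestId = requestContext.getStore().get('requestId');
  res.end(`handled request ${requestId}\n`);
}

http.createServer((req, res) => {
  // each request gets its own Map, visible to every async callee
  requestContext.run(new Map([['requestId', ++nextId]]), () => {
    handler(res);
  });
}).listen(3000);

Benchmarking your application with and without such a wrapper gives a far more representative number than the micro-benchmark below.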

Why I have to run the benchmarks separately

I didn't get the same results when I ran the benchmarks in the same program: the second benchmark to run was always slower compared to when I ran it by itself.

I suspect the garbage collector is working to release memory, but I'm not sure.
If you think you know why, please contact me by email or find me on the Kuzzle Discord (@aschen).
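
One way to probe that hypothesis (a suggestion, not something this gist does) is to expose the garbage collector and force a full collection after each benchmark completes when both run in the same process:

// run with: node --expose-gc async-local-storage.js
suite
  .add('ASL', methods['ASL'])
  .add('Witness', methods['Witness'])
  .on('cycle', function(event) {
    console.log(String(event.target));
    // force a full collection so the next benchmark does not pay
    // for garbage produced by the previous one
    if (global.gc) {
      global.gc();
    }
  })
  .run({ 'async': true });

If the second benchmark still slows down with forced collections, the garbage collector is probably not the (only) culprit.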

const Benchmark = require('benchmark');
const { AsyncLocalStorage } = require('async_hooks');

const suite = new Benchmark.Suite;
const ASL = new AsyncLocalStorage();

// resolve the value on the next tick to force a real asynchronous hop
const makePromise = value => new Promise(resolve => process.nextTick(() => resolve(value)));

const run = (type, callback) => {
  if (type === 'asl') {
    // run the callback inside an AsyncLocalStorage context backed by a Map
    ASL.run(new Map(), callback);
  }
  else if (type === 'witness') {
    callback();
  }
};

const json = '{"foo":"foo","bar":"bar","foobar":"foobar"}';

const func = async (json) => {
  // use an awaited promise to avoid v8 optimization
  const keys = await makePromise(Object.keys(JSON.parse(json)));
  return keys.length + 84;
};

const funcASL = async (json) => {
  const keys = await makePromise(Object.keys(JSON.parse(json)));
  // read the value stored in the AsyncLocalStorage context
  const value = ASL.getStore().get('KEY');
  return keys.length + value;
};

const methods = {
  'ASL': function() {
    return new Promise(resolve => {
      run('asl', async () => {
        ASL.getStore().set('KEY', 84);
        await funcASL(json);
        resolve();
      });
    });
  },
  'Witness': function() {
    return new Promise(resolve => {
      run('witness', async () => {
        await func(json);
        resolve();
      });
    });
  }
};

suite
  .add(process.argv[2], methods[process.argv[2]])
  .on('cycle', function(event) {
    console.log(String(event.target));
  })
  .run({ 'async': true });
@casey-chow

I took a closer look at this and realized that the reason the benchmark showed such a disparity was that the control didn't include a Map allocation and set. I modified the witness to include the same Map allocation and set, and I get much more equal results: https://replit.com/@CaseyChow1/AsyncLocalStorage-Benchmark

~/AsyncLocalStorage-Benchmark$ node index.js ASL
ASL x 8,218 ops/sec ±7.64% (37 runs sampled)
~/AsyncLocalStorage-Benchmark$ node index.js Witness
Witness x 8,020 ops/sec ±7.81% (49 runs sampled)
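
The adjustment presumably looks something like the following (a sketch of the idea only; the exact code is in the linked repl):

'Witness': function() {
  return new Promise(resolve => {
    run('witness', async () => {
      // give the control the same Map allocation and set()
      // that the ASL variant pays for
      const store = new Map();
      store.set('KEY', 84);
      await func(json);
      resolve();
    });
  });
}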

@Aschen (Author) commented Jan 4, 2023

Good catch! I'm impressed by the fact that there is almost no overhead at all with the ASL API :-)
