@syusui-s
Last active January 17, 2024
Yield benchmark

This is the new version of an earlier gist: JavaScriptにおける各種yield方式のパフォーマンス計測 ("Performance measurement of various yield methods in JavaScript").

How to run

make

Why each function is run separately

bench-simple.mjs receives fnName as a command-line argument. Why is this needed?

Execution of "yield" seemed to get slower when all benchmarks were run in a single process.

The mitata result shows a large difference between yieldMsgChannel and yieldMsgChannelOnMsg:

yieldMsgChannel                    66.04 µs/iter    (14.88 µs … 3.21 ms)
yieldMsgChannelOnMsg              122.89 µs/iter    (84.55 µs … 3.35 ms)

The result when I swap the execution order of these two functions:

yieldMsgChannelOnMsg               68.27 µs/iter    (14.76 µs … 4.12 ms)
yieldMsgChannel                   135.36 µs/iter    (87.11 µs … 3.83 ms)

I suspect that creating a large number of MessageChannel instances is the cause.

Therefore, bench-simple runs each benchmark in a separate process.
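For readers unfamiliar with the pattern being measured, here is a minimal standalone sketch of a MessageChannel-based yield (my own illustration, not the gist's exact code; it assumes Node ≥ 15, where MessageChannel is a global, and an .mjs module for top-level await):

```javascript
// Minimal sketch of the MessageChannel-based yield pattern: a promise that
// resolves when a message posted on the channel arrives, deferring the
// continuation to a later event-loop task.
const yieldViaMessageChannel = () =>
  new Promise((resolve) => {
    const { port1, port2 } = new MessageChannel();
    port1.onmessage = () => {
      port1.close(); // close the channel so the process can exit
      resolve();
    };
    port2.postMessage(null);
  });

const order = [];
order.push('before');
await yieldViaMessageChannel();
order.push('after');
console.log(order.join(',')); // "before,after"
```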

Results on my PC

bench-simple (concurrency = 50):

yieldSetTimeout                        1071.667169 µs/iter
yieldMsgChannel                          81.651383 µs/iter
yieldThreadOnMessage                     81.587442 µs/iter
yieldMsgChannelGlobal                     1.249464 µs/iter

concurrentYieldSetTimeout              1089.949913 µs/iter
concurrentYieldMsgChannel              1457.519035 µs/iter
concurrentYieldMsgChannelOnMsg         1468.519564 µs/iter
concurrentYieldMsgChannelGlobal          75.619222 µs/iter
bench-mitata:

cpu: AMD Ryzen 7 PRO 7840U w/ Radeon 780M Graphics
runtime: node v21.3.0 (x64-linux)

benchmark                             time (avg)             (min … max)
------------------------------------------------------------------------
yieldSetTimeout                     1.07 ms/iter     (1.03 ms … 1.63 ms)
yieldMsgChannel                    66.04 µs/iter    (14.88 µs … 3.21 ms)
yieldMsgChannelOnMsg              122.89 µs/iter    (84.55 µs … 3.35 ms)
yieldMsgChannelGlobal                1.9 µs/iter     (1.46 µs … 3.73 µs)

concurrent-yieldSetTimeout          1.09 ms/iter     (18.84 µs … 1.8 ms)
concurrent-yieldMsgChannel          1.84 ms/iter    (494.2 µs … 9.37 ms)
concurrent-yieldMsgChannelOnMsg     6.64 ms/iter    (2.94 ms … 12.36 ms)
concurrent-yieldMsgChannelGlobal   309.1 µs/iter    (55.63 µs … 9.44 ms)
.gitignore:

node_modules/
bench-mitata.mjs:

import { bench, run } from 'mitata';
import yieldSetTimeout from './yieldSetTimeout.mjs';
import yieldMsgChannel from './yieldMsgChannel.mjs';
import yieldMsgChannelOnMsg from './yieldMsgChannelOnMsg.mjs';
import yieldMsgChannelGlobal from './yieldMsgChannelGlobal.mjs';
import { execConcurrent } from './utils.mjs';

bench('yieldSetTimeout', async () => {
  await yieldSetTimeout();
});
bench('yieldMsgChannelOnMsg', async () => {
  await yieldMsgChannelOnMsg();
});
bench('yieldMsgChannel', async () => {
  await yieldMsgChannel();
});
bench('yieldMsgChannelGlobal', async () => {
  await yieldMsgChannelGlobal();
});

bench('concurrent-yieldSetTimeout', async () => {
  await execConcurrent(yieldSetTimeout);
});
bench('concurrent-yieldMsgChannel', async () => {
  await execConcurrent(yieldMsgChannel);
});
bench('concurrent-yieldMsgChannelOnMsg', async () => {
  await execConcurrent(yieldMsgChannelOnMsg);
});
bench('concurrent-yieldMsgChannelGlobal', async () => {
  await execConcurrent(yieldMsgChannelGlobal);
});

await run({
  avg: true, // enable/disable avg column (default: true)
  json: false, // enable/disable json output (default: false)
  colors: true, // enable/disable colors (default: true)
  min_max: true, // enable/disable min/max column (default: true)
  collect: false, // enable/disable collecting returned values into an array during the benchmark (default: false)
  percentiles: false, // enable/disable percentiles column (default: true)
});
bench-simple.mjs:

import process from 'node:process';
import yieldSetTimeout from './yieldSetTimeout.mjs';
import yieldMsgChannel from './yieldMsgChannel.mjs';
import yieldMsgChannelOnMsg from './yieldMsgChannelOnMsg.mjs';
import yieldMsgChannelGlobal from './yieldMsgChannelGlobal.mjs';
import { formatNumber, padSpaceStart, padSpaceEnd, bench, execConcurrent } from './utils.mjs';

const functions = {
  yieldMsgChannel,
  yieldMsgChannelOnMsg,
  yieldMsgChannelGlobal,
  yieldSetTimeout,
  concurrentYieldMsgChannel() {
    return execConcurrent(yieldMsgChannel);
  },
  concurrentYieldMsgChannelOnMsg() {
    return execConcurrent(yieldMsgChannelOnMsg);
  },
  concurrentYieldMsgChannelGlobal() {
    return execConcurrent(yieldMsgChannelGlobal);
  },
  concurrentYieldSetTimeout() {
    return execConcurrent(yieldSetTimeout);
  },
};

(async () => {
  const fnName = process.argv[2];
  const fn = functions[fnName];
  if (fn == null) {
    throw new Error(`unknown fnName: ${fnName}`);
  }
  await bench(fn);
})();
Makefile:

.PHONY: all
all: bench-simple

.PHONY: bench-simple bench-mitata
.SILENT:

bench-simple:
	npx -c "node bench-simple.mjs yieldSetTimeout"
	npx -c "node bench-simple.mjs yieldMsgChannel"
	npx -c "node bench-simple.mjs yieldMsgChannelOnMsg"
	npx -c "node bench-simple.mjs yieldMsgChannelGlobal"
	npx -c "node bench-simple.mjs concurrentYieldSetTimeout"
	npx -c "node bench-simple.mjs concurrentYieldMsgChannel"
	npx -c "node bench-simple.mjs concurrentYieldMsgChannelOnMsg"
	npx -c "node bench-simple.mjs concurrentYieldMsgChannelGlobal"

bench-mitata:
	npx -c "node bench-mitata.mjs"
package.json:

{
  "name": "yield-bench",
  "version": "1.0.0",
  "description": "",
  "scripts": {},
  "license": "ISC",
  "devDependencies": {
    "mitata": "^0.1.6"
  }
}
utils.mjs:

export const formatNumber = (number, precision = 6) => {
  const p = 10 ** precision;
  return Math.round(number * p) / p;
};

export const padSpaceStart = (s, len = 15) => s.toString().padStart(len, ' ');
export const padSpaceEnd = (s, len = 35) => s.toString().padEnd(len, ' ');

export const execConcurrent = (fn, concurrency = 50) => {
  const promises = [];
  for (let i = 0; i < concurrency; i++) {
    promises.push(fn());
  }
  return Promise.all(promises);
};

// performance.now() returns the current time in milliseconds (with sub-millisecond precision).
const multiplierMsToUs = 1e3;
const oneSecInUs = 1e6;

export const bench = async (fn) => {
  let count = 0;
  let sum = 0;
  while (sum < oneSecInUs) {
    const start = globalThis.performance.now();
    await fn();
    const end = globalThis.performance.now();
    sum += (end - start) * multiplierMsToUs;
    count += 1;
  }
  console.log(`${padSpaceEnd(fn.name)}${padSpaceStart(formatNumber(sum / count))} µs/iter`);
};
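As a quick illustration of the concurrency helper in utils.mjs, the following self-contained sketch restates the same Promise.all pattern inline (not the gist's exact code) and checks that all 50 calls actually run:

```javascript
// Inline restatement of the execConcurrent pattern: launch `concurrency`
// invocations of fn and wait for all of them with Promise.all.
const execConcurrent = (fn, concurrency = 50) =>
  Promise.all(Array.from({ length: concurrency }, () => fn()));

let calls = 0;
await execConcurrent(async () => {
  calls += 1;
});
console.log(calls); // 50
```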
yieldMsgChannel.mjs:

const yieldMsgChannel = () => {
  return new Promise((resolve) => {
    const ch = new MessageChannel();
    const listener = () => {
      ch.port1.removeEventListener('message', listener);
      resolve();
    };
    ch.port1.addEventListener('message', listener);
    ch.port2.postMessage(0);
    ch.port1.start();
  });
};

export default yieldMsgChannel;
yieldMsgChannelGlobal.mjs:

const gch = new MessageChannel();
let id = 0;

// (node) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 message listeners added to [MessagePort [EventTarget]]. Use events.setMaxListeners() to increase limit
gch.port1.setMaxListeners(500);

const yieldMsgChannelGlobal = () => {
  return new Promise((resolve) => {
    id += 1;
    const currentId = id;
    const listener = (event) => {
      if (event.data === currentId) {
        gch.port1.removeEventListener('message', listener);
        resolve();
      }
    };
    gch.port1.addEventListener('message', listener);
    gch.port2.postMessage(currentId);
    gch.port1.start();
  });
};

export default yieldMsgChannelGlobal;
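The id-matching scheme above attaches one listener per pending yield, which is what triggers the MaxListenersExceededWarning. A hypothetical variant, not part of the gist, keeps a single listener and a Map from id to the pending promise's resolver:

```javascript
// Hypothetical single-listener variant of the shared-channel yield: one
// 'message' handler looks up the matching resolver in a Map, so the listener
// count stays at one no matter how many yields are pending.
const gch = new MessageChannel();
const pending = new Map(); // id -> resolve
let nextId = 0;

gch.port1.onmessage = (event) => {
  const resolve = pending.get(event.data);
  pending.delete(event.data);
  resolve();
};

const yieldShared = () =>
  new Promise((resolve) => {
    nextId += 1;
    pending.set(nextId, resolve);
    gch.port2.postMessage(nextId);
  });

await Promise.all([yieldShared(), yieldShared(), yieldShared()]);
console.log(pending.size); // 0
gch.port1.close(); // let the process exit
```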
yieldMsgChannelOnMsg.mjs:

// Imported as yieldMsgChannelOnMsg by the benchmarks; the function's own name
// is yieldThreadOnMessage, which is why bench-simple prints that label.
const yieldThreadOnMessage = () => {
  return new Promise((resolve) => {
    const ch = new MessageChannel();
    const handler = () => {
      ch.port1.onmessage = undefined;
      resolve();
    };
    ch.port1.onmessage = handler;
    ch.port2.postMessage(0);
    ch.port1.start();
  });
};

export default yieldThreadOnMessage;
yieldSetTimeout.mjs:

const yieldSetTimeout = () =>
  new Promise((resolve) => setTimeout(resolve, 0));

export default yieldSetTimeout;
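The roughly 1 ms/iter figure for yieldSetTimeout is expected: Node clamps a setTimeout delay below 1 to 1 ms, so this yield cannot complete faster than about a millisecond. A minimal check (assuming an .mjs module for top-level await):

```javascript
// A "0 ms" setTimeout still waits for the timer phase of the event loop
// (Node treats a delay below 1 as 1 ms), so the elapsed time is never zero.
const yieldSetTimeout = () => new Promise((resolve) => setTimeout(resolve, 0));

const start = performance.now();
await yieldSetTimeout();
const elapsedMs = performance.now() - start;
console.log(elapsedMs > 0); // true: the timer never fires synchronously
```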