Ahh, that ARGUMENTS thing - is loop really faster than slice, oohhh !#%
if (process.argv[2] === 'bench') {
  var pre = process.hrtime();

  if (process.argv[3] === '1') {
    (function () {
      var args = Array.prototype.slice.call(arguments);
    })(1, 2.0, "string", {"foo": true});
  } else if (process.argv[3] === '2') {
    (function () {
      var i = arguments.length,
          args = new Array(i);
      while (i--) args[i] = arguments[i];
    })(1, 2.0, "string", {"foo": true});
  }

  var post = process.hrtime();

  var toNano = function (hr) {
    return hr[0] * Math.pow(10, 9) + hr[1];
  };

  process.send(toNano(post) - toNano(pre));
} else {
  var fork = require('child_process').fork;

  var performBench = function (runs, type, callback) {
    var times = [];

    var showBench = function () {
      // convert to ms
      var i = runs;
      while (i--) times[i] = times[i] / 1000;

      // calculate mean
      var total = 0;
      i = runs;
      while (i--) total += times[i];
      var mean = Math.round((total / runs) * 10000) / 10000;

      // calculate variance
      var variance = 0;
      i = runs;
      while (i--) variance += Math.pow(times[i] - mean, 2);
      variance = variance / runs;
      var deviation = Math.round(Math.sqrt(variance) * 100) / 100;

      callback(mean, deviation);
    };

    var start = function (i) {
      var child = fork(__filename, ['bench', type]);
      child.once('message', function (msg) {
        times.push(msg);
      });
      child.once('exit', function () {
        if (i === runs) return showBench();
        start(i + 1);
      });
    };
    start(1);
  };

  performBench(100, '1', function (mean, deviation) {
    console.log('slice call: ' + mean + ' ms (avg) ' + deviation + '±');

    performBench(100, '2', function (mean, deviation) {
      console.log('while loop: ' + mean + ' ms (avg) ' + deviation + '±');
    });
  });
}


Okay, so Mr. Bench here tests how fast V8 handles:

    var args = Array.prototype.slice.call(arguments);

versus:

    var i = arguments.length,
        args = new Array(i);

    while (i--) args[i] = arguments[i];

However, he is a weird judge: he will not let V8 optimize!

The results are:

slice call: 12.2983 ms (avg) 2.01±
while loop: 15.6222 ms (avg) 1.46±

slice call: 11.9533 ms (avg) 1.46±
while loop: 15.7101 ms (avg) 1.55±

slice call: 11.7302 ms (avg) 2.02±
while loop: 15.5891 ms (avg) 1.54±

slice call: 11.8409 ms (avg) 1.16±
while loop: 15.6987 ms (avg) 1.31±

slice call: 11.9382 ms (avg) 1.76±
while loop: 15.5833 ms (avg) 1.40±

slice call: 12.2972 ms (avg) 2.85±
while loop: 18.3039 ms (avg) 5.91±

And oh, shit! The slice call was somehow faster.

Why this


So I know for a fact that a while loop is faster than a slice call because of the way V8 optimizes; the difference is actually quite significant (or something like that, or better yet, test it yourself in a real-world case!).

But people are spending too much time on this debate; there must be more important things that matter. That's why I use the "could be funnier, but I made an attempt" dictionary.

I wrote this just after, and yes, I do know that the cool guys meant it as a partial joke; I appreciate it!

How this

When JavaScript code runs in V8, it will most likely be optimized to some degree, depending on how hot the code is.

That's because V8 optimization takes time, and the time spent on optimization is just as important as the time spent on compiling and execution. It all counts! V8 will therefore (and for other reasons) not optimize at the first encounter; as time passes, V8 will, based on history, progressively optimize the code.

In this benchmark a new V8 instance is booted on each run, so V8 has no history and won't optimize. The conditions are therefore not the same as in traditional benchmarks, where the code is executed a lot and V8 will attempt to optimize it.
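You can see the warm-up effect directly by timing the very first call of a function against a call made after many executions. This is a sketch, not the benchmark above (assumption: Node.js; `work` and `timeOnce` are hypothetical names of mine), and the exact numbers depend heavily on machine and V8 version:

```javascript
// a simple function for V8 to get to know over time
function work(n) {
  var total = 0;
  for (var i = 0; i < n; i++) total += i;
  return total;
}

// time a single call, in nanoseconds
function timeOnce(fn, arg) {
  var pre = process.hrtime();
  var result = fn(arg);
  var post = process.hrtime();
  return { ns: (post[0] - pre[0]) * 1e9 + (post[1] - pre[1]), result: result };
}

var cold = timeOnce(work, 1000);          // first encounter: unoptimized
for (var i = 0; i < 100000; i++) work(1000); // give V8 some history
var warm = timeOnce(work, 1000);          // likely optimized by now

console.log('cold call: ' + cold.ns + ' ns');
console.log('warm call: ' + warm.ns + ' ns');
```

The cold call usually costs noticeably more than the warm one, though a single sample like this is too noisy to prove anything on its own.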

This part is almost pure guess:

V8 has something called stubs (perhaps not spelled correctly); those are prewritten JavaScript core functions hand-written in assembly code. They are therefore super fast and can be compiled in no time. I assume that Array.prototype.slice has a stub. However, while (i--) ... has no stub, because it's way too complicated (keep in mind that a lot of V8's philosophy is based on the Inventor's paradox) for such a specific case. It's therefore executed by the FastCompiler, which can turn JavaScript code into assembly code very fast but does not spend time on optimizing. The execution is therefore slow!

At last

Don't make the discussion too serious please, seriously!

And check out for actual facts.
