@weyoss
Last active February 19, 2024 16:08
# Callback vs Promise vs Async/Await benchmarks
## Benchmark Files

https://github.com/petkaantonov/bluebird/tree/master/benchmark

## Platform Info

Linux 5.13.0-40-generic x64
Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz × 4

## Summary

In terms of performance and memory usage, nothing matches plain callbacks: both promises and async/await are slower and use more memory.
```shell
$ ls ./doxbee-sequential/*.js | sed -e 's|\.js||' | xargs node ./performance.js --p 1 --t 1 --n 10000
```

results for 10000 parallel executions, 1 ms per I/O op

| file | time(ms) | memory(MB) |
| --- | ---: | ---: |
| callbacks-baseline | 329 | 24.58 |
| callbacks-caolan-async-waterfall | 420 | 50.55 |
| callbacks-suguru03-neo-async-waterfall | 426 | 41.49 |
| promises-bluebird-generator | 499 | 42.61 |
| promises-native-async-await | 558 | 56.47 |
| promises-bluebird | 570 | 50.09 |
| promises-ecmascript6-native | 614 | 67.87 |
| promises-lvivski-davy | 622 | 91.29 |
| promises-cujojs-when | 705 | 67.07 |
| promises-then-promise | 782 | 75.79 |
| generators-tj-co | 804 | 59.77 |
| promises-tildeio-rsvp | 911 | 91.99 |
| promises-calvinmetcalf-lie | 1099 | 141.34 |
| promises-dfilatov-vow | 1445 | 141.14 |
| promises-obvious-kew | 1471 | 104.65 |
| observables-pozadi-kefir | 1499 | 146.91 |
| streamline-generators | 1540 | 77.86 |
| promises-medikoo-deferred | 1758 | 132.80 |
| streamline-callbacks | 2190 | 102.14 |
| observables-Reactive-Extensions-RxJS | 2732 | 218.70 |
| promises-kriskowal-q | 5838 | 359.06 |
| observables-caolan-highland | 6688 | 488.01 |
| observables-baconjs-bacon.js | 10233 | 761.92 |
Platform info:
Linux 5.13.0-40-generic x64
Node.js 16.14.0
V8 9.4.146.24-node.20
Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz × 4
```shell
$ ls ./madeup-parallel/*.js | sed -e 's|\.js||' | xargs node ./performance.js --p 25 --t 1 --n 10000
```

results for 10000 parallel executions, 1 ms per I/O op

| file | time(ms) | memory(MB) |
| --- | ---: | ---: |
| callbacks-baseline | 624 | 82.00 |
| callbacks-suguru03-neo-async-parallel | 719 | 88.20 |
| promises-bluebird | 1037 | 105.51 |
| promises-lvivski-davy | 1099 | 156.89 |
| callbacks-caolan-async-parallel | 1141 | 116.48 |
| promises-bluebird-generator | 1198 | 106.91 |
| promises-cujojs-when | 1391 | 157.09 |
| promises-ecmascript6-native | 2280 | 212.14 |
| generators-tj-co | 2289 | 225.09 |
| promises-native-async-await | 2346 | 218.36 |
| promises-then-promise | 2358 | 235.82 |
| promises-calvinmetcalf-lie | 2927 | 330.71 |
| promises-tildeio-rsvp | 3006 | 315.84 |
| promises-medikoo-deferred | 3859 | 356.98 |
| promises-dfilatov-vow | 5261 | 476.34 |
| promises-obvious-kew | 5971 | 657.50 |
| streamline-generators | 14209 | 857.03 |
| streamline-callbacks | 20183 | 1066.83 |
Platform info:
Linux 5.13.0-40-generic x64
Node.js 16.14.0
V8 9.4.146.24-node.20
Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz × 4
@ruxxzebre

Hi! Can you share the exact scripts you've used for benchmarking?

@iambumblehead

iambumblehead commented Nov 7, 2023

This benchmark can be run with the official benchmark harness from the node repository (its `benchmark/common.js` module). The last number in each result line is the rate of operations, measured in ops/sec (higher is better).

async-vs-cb.nested-benchmark.js

```javascript
// Requires a checkout of the node repository for its benchmark harness,
// here assumed at ./node
const common = require('./node/benchmark/common.js');

const bench = common.createBenchmark(main, {
  n: [ 1400 ],
  type: [ 'await-deep', 'await-shallow', 'cb-deep', 'cb-deep-promisified' ]
});

async function main(conf) {
  let res = Math.random()

  const type = conf.type

  // deep async recursion: every level adds an awaited promise
  if (type === 'await-deep') {
    bench.start();
    res = await (async function nestedAsync (val, count) {
      if (!count--) return val

      return nestedAsync(Math.random() > Math.random() ? 1 : -1, count)
    })(Math.random(), conf.n)
    bench.end(conf.n);
  }

  // shallow: one await per loop iteration, no nesting
  if (type === 'await-shallow') {
    const arr = Array.from({ length: conf.n }, (v, i) => i)
    const oneAsyncRes = async (val, count) => (
      count + Math.random() > Math.random() ? 1 : -1)

    bench.start();
    for (const n of arr)
      res = await oneAsyncRes(n, res)
    bench.end(conf.n);
  }

  // deep callback recursion, no promises involved
  if (type === 'cb-deep') {
    bench.start();
    (function nestedCb(val, count, cb) {
      if (!count--) return cb(null, val)

      return nestedCb(Math.random() > Math.random() ? 1 : -1, count, cb)
    })(Math.random(), conf.n, (err, res) => {
      bench.end(conf.n)
    })
  }

  // the same callback recursion wrapped in a single promise
  if (type === 'cb-deep-promisified') {
    bench.start();
    const res = await new Promise(resolve => (
      (function nestedCb(val, count, cb) {
        if (!count--) return cb(null, val)

        return nestedCb(Math.random() > Math.random() ? 1 : -1, count, cb)
      })(Math.random(), conf.n, (err, res) => {
        resolve(res)
      })
    ))
    bench.end(conf.n)
  }

  return res
}
```
```shell
$ node async-vs-cb.nested-benchmark.js
.js type="await-deep"          n=1400: 1,516,848.9413477853
.js type="await-shallow"       n=1400: 1,137,102.9378678831
.js type="cb-deep"             n=1400: 2,214,625.7045278023
.js type="cb-deep-promisified" n=1400: 1,907,328.3642888186
```

Conclusion: performance-sensitive, deeply-stacked callback code should not be converted to async/await.


Also, updating the benchmarks to send and receive destructured values shows a significant performance drop: async/await with array destructuring only reaches roughly 200,000-300,000 ops/sec on this local machine.

To elaborate, callback functions may receive multiple values, for example:

```javascript
callback(function (a, b, c, d) {
  // ...
})
```

async function callers receive a single value, so separate values must be extracted with property lookups or (slower) destructuring:

```javascript
const [a, b, c, d] = await asyncfn();
```

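A quick way to observe this locally is a rough sketch like the one below (hypothetical names, not the harness above; absolute numbers will vary by machine):

```javascript
const N = 1e5;  // iterations per variant

// four values via a multi-argument callback
function fourValuesCb(cb) { cb(1, 2, 3, 4); }

// four values via an awaited, destructured array
async function fourValuesAsync() { return [1, 2, 3, 4]; }

function benchCb() {
  let sum = 0;
  const t0 = process.hrtime.bigint();
  for (let i = 0; i < N; i++)
    fourValuesCb((a, b, c, d) => { sum += a + b + c + d; });
  const ms = Number(process.hrtime.bigint() - t0) / 1e6;
  return { sum, ms };
}

async function benchAwait() {
  let sum = 0;
  const t0 = process.hrtime.bigint();
  for (let i = 0; i < N; i++) {
    const [a, b, c, d] = await fourValuesAsync();
    sum += a + b + c + d;
  }
  const ms = Number(process.hrtime.bigint() - t0) / 1e6;
  return { sum, ms };
}
```

Both variants compute the same sum; the difference in elapsed time is the per-call overhead of allocating, awaiting, and destructuring the returned array.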

@weyoss
Author

weyoss commented Dec 2, 2023

@iambumblehead

> Conclusion: performance-sensitive, deeply-stacked callback code should not be converted to async/await.

I completely agree.

@iambumblehead

GitHub doesn't give me a place to "thumbs up" your reply, so: "thumbs up" :)
