UNINCORPORATED FACTS

  • _tickCallback processes the process.nextTick() queue
  • setTimeout cannot have a timeout smaller than 1ms (smaller values are clamped up to 1)
  • domains change how the nextTick queue is processed
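
For example, here's a minimal sketch of the clamping behavior (the log messages are just illustrative):

```js
// Delays below 1ms are clamped up to 1ms, so these two timers are
// effectively equivalent and fire in the order they were scheduled.
setTimeout(() => console.log('scheduled with 0ms'), 0);
setTimeout(() => console.log('scheduled with 1ms'), 1);
```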

Understanding the Node event loop

There are two important things to remember about the Node event loop. The first, and most important, thing is that as a developer you shouldn't need to worry about the implementation details of how Node runs your functions. Follow a few simple guidelines, and everything should be fine.

The other important thing is that the event loop's implementation is neither simple nor consistent. Node's emphasis is on minimizing the overhead of heavily-used (hot) code paths and exhibiting deterministic behavior. Explaining what happens on a given turn of the event loop is not straightforward, but each major version of Node brings substantial performance gains. This isn't much consolation when you're stuck debugging something that sits close to the event loop, but it does mean your programs run efficiently, at least when they're working.

The guidelines

If you just want to write JavaScript to run on Node, here's all you need to know.

When you:

  • want a function to run soon, but don't want to block the event loop, use setImmediate
  • want a function to always be asynchronous, with as little overhead as possible, use process.nextTick
  • want a function to run x milliseconds in the future (where x ≥ 1), use setTimeout

As a corollary, setTimeout(fn, 0) rarely does what you want, and should be considered a code smell. Most of the time you should just use setImmediate instead.
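
To make the guidelines concrete, here's a minimal sketch; readConfig and its cached value are hypothetical, not part of any real API:

```js
// Make an API reliably asynchronous without the overhead of a full loop turn:
// process.nextTick() guarantees the callback never fires synchronously.
function readConfig(cb) {
  const cached = { port: 8080 }; // hypothetical cached value
  process.nextTick(() => cb(null, cached));
}

// Run soon without blocking the event loop: setImmediate() defers the work
// to the check phase, giving pending I/O callbacks a chance to run first.
setImmediate(() => console.log('deferred past pending I/O'));

// Run roughly 50ms in the future:
setTimeout(() => console.log('at least 50ms later'), 50);

readConfig((err, config) => console.log('config:', config));
```

Running this prints the readConfig callback first (the nextTick queue drains before the loop moves on), then the setImmediate callback, then the timer.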

Implementation

Unfortunately, there is no single function in Node that runs the event loop. In fact, even talking about "the" event loop is slightly misleading, because no one function, in either C++ or JavaScript, drives the entire loop. Instead, libuv orchestrates things through a set of asynchronous calls.

A turn through the event loop

Here's what happens, in order, each turn through the event loop:

  • timer handles (drives setTimeout / setInterval)
  • I/O callbacks and/or polling (may block if no work is queued)
  • check handles (drives setImmediate)
  • handle close callbacks

Where's process.nextTick()? Basically after each of the above steps.
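
A small sketch makes the interleaving visible; scheduling from inside an I/O callback keeps the expected order deterministic:

```js
const fs = require('fs');

// From inside an I/O callback (the poll phase), schedule one of each:
fs.readFile(__filename, () => {
  setTimeout(() => console.log('timeout'), 0);
  setImmediate(() => console.log('immediate'));
  process.nextTick(() => console.log('nextTick'));
});
// Expected output: nextTick, immediate, timeout. The nextTick queue drains
// before the loop advances, the check phase (setImmediate) follows polling
// directly, and the timer has to wait for the loop to wrap back around to
// the timer phase.
```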

How long is the Node.js work queue?

Most of the interest in the event loop's implementation stems from a common impulse: developers want to know how busy their Node processes are, so they know where to focus their attention to make them faster. Unfortunately, figuring out how much load the event loop is currently under is a simple question with a surprisingly complex answer.

The short answer is that knowing the length of a notional "event loop queue" doesn't really do a very good job of telling you how busy your application is. The different kinds of tasks that libuv manages (I/O, deferred execution via setTimeout / setInterval, "asynchronizers" like process.nextTick() and setImmediate(), signal handlers) are handled at different stages and with different priorities.

There are a number of different sources of work feeding into the event queue:

  • there's the process.nextTick() queue, which is processed completely at a variety of points through each turn of the event loop (see the sketch after this list)
  • there are tasks set to execute on the next turn of the event loop via setImmediate()
  • there are periodically-expiring timers set with setTimeout() and setInterval()
  • using MakeCallback() and the ReqWrap class on the C++ side, Node manages lists of I/O requests handled by libuv, which polls for pending I/O and hands it off to JavaScript callbacks for processing
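
The "processed completely" part of the nextTick queue is worth seeing in action; here's a toy sketch (deliberately pathological, not something you'd ship):

```js
const fs = require('fs');

// A callback that reschedules itself with process.nextTick() never lets the
// queue drain, so the loop never reaches its I/O phases and the readFile
// callback below is starved indefinitely.
function spin() {
  process.nextTick(spin);
}

fs.readFile(__filename, () => console.log('you will never see this'));
spin();
```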

The problem is that turns of the event loop aren't homogeneous, and the event loop queues on their own don't tell you the most important thing: whether your application is getting bogged down (for whatever definition of "bogged down" works for you). There are modules like node-toobusy that try to monitor the latency of the event loop and tell you if it's taking too long, which can actually be helpful if you're doing a lot with process.nextTick() or are trying to do something CPU-intensive in single turns of the event loop. By and large, though, this kind of information is hard to gather from inside Node and turns out to be of limited use.
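
The lag-sampling idea behind such modules is simple enough to sketch; the interval and threshold below are arbitrary choices, not node-toobusy's actual defaults:

```js
const INTERVAL = 500;  // ms between samples
const THRESHOLD = 70;  // ms of lag treated as "too busy"

let last = Date.now();
setInterval(() => {
  const now = Date.now();
  // How much later than scheduled did this timer actually fire?
  const lag = Math.max(0, now - last - INTERVAL);
  if (lag > THRESHOLD) {
    console.warn(`event loop lagging by ${lag}ms`);
  }
  last = now;
}, INTERVAL);
```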

There's also DTrace, which can give you much finer-grained (or coarser-grained!) statistics about what's going on with the event loop.

In practice, the only time you'll encounter performance problems tied to event loop processing is when you're trying to do too much computation during a single turn of the event loop. When tuning performance, I'd generally look pretty much everywhere else first.

@tjfontaine

Where's process.nextTick()? Basically after each of the above steps

Is it easier to think of it as after, or before the next step happens?
