- _tickCallback processes the process.nextTick() queue
- setTimeout cannot have a timeout smaller than 1 (smaller values are clamped to 1ms)
- active domains change how the nextTick queue is processed
Understanding the Node event loop
There are two important things to remember about the Node event loop. The first, and most important, thing is that as a developer you shouldn't need to worry about the implementation details of how Node runs your functions. Follow a few simple guidelines, and everything should be fine.
The other important thing is that the event loop's implementation is neither simple nor consistent. Node's emphasis is on minimizing the overhead of heavily-used (hot) code paths, rather than on exhibiting simple, deterministic behavior. Explaining what happens on a given turn of the event loop is not straightforward, but each major version of Node brings substantial performance gains. This isn't much consolation when you're stuck debugging something that sits close to the event loop, but it does mean your programs run efficiently – when they're working.
If you:
- want a function to run soon, but don't want to block the event loop, use setImmediate()
- want a function to always be asynchronous without incurring overhead, use process.nextTick()
- want a function to run x (where x > 1) milliseconds in the future, use setTimeout(fn, x)
As a corollary, setTimeout(fn, 0) rarely does what you want, and should be considered a code smell. Most of the time you should just use setImmediate() instead.
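A minimal sketch contrasting the three deferral APIs (standard Node only, no external modules; the 50ms delay is arbitrary):

```javascript
// process.nextTick: the cheapest way to guarantee a callback is
// asynchronous; its queue is drained before the event loop moves on.
process.nextTick(() => console.log('2: nextTick'));

// setImmediate: runs on a following turn of the event loop, after any
// pending I/O callbacks, without blocking them.
setImmediate(() => console.log('3: immediate'));

// setTimeout: runs no sooner than 50ms from now (delays below 1ms are
// clamped up to 1ms).
setTimeout(() => console.log('4: timeout'), 50);

console.log('1: synchronous');
```

Run from a script's top level, the numbered order is what you get: synchronous code finishes first, then the nextTick queue drains, then the immediate fires, and the 50ms timer comes last.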
Unfortunately, there is no single function in Node that runs the event loop. In fact, even discussing "the" event loop is slightly misleading – there is no one loop at the heart of the process; libuv orchestrates things through a set of asynchronous calls.
A turn through the event loop
Here's what happens, in order, each turn through the event loop:
- timer handles (drives setTimeout / setInterval)
- I/O callbacks and/or polling (may block if no work is queued)
- check handles (drives setImmediate)
- handle close callbacks
What about process.nextTick()? Basically, its queue is drained after each of the above steps.
How long is the Node.js work queue?
Most of the interest in the event loop's implementation stems from a common impulse: developers want to know how busy their Node processes are, so they know where to focus their attention to make them faster. Unfortunately, figuring out how much load the event loop is currently under is a simple question with a surprisingly complex answer.
The short answer is that knowing the length of a notional "event loop queue" doesn't really do a very good job of telling you how busy your application is. The different kinds of tasks that libuv manages (I/O, deferred execution via setTimeout / setInterval, "asynchronizers" like setImmediate(), signal handlers) are handled at different stages of the loop and tracked in different data structures, so there is no single queue to measure.
There are a number of different sources of work feeding into the event queue:
- there's the process.nextTick() queue, which is processed completely at a variety of points through each turn of the event loop
- there are tasks set to execute on the next turn of the event loop via setImmediate()
- there are periodically-expiring timers set with setTimeout() and setInterval()
- via the ReqWrap class on the C++ side, Node manages lists of I/O requests handled by libuv
The problem is that turns of the event loop aren't homogeneous, and the event loop queues on their own don't really tell you the most important thing, which is whether your application is getting bogged down (for whatever definition of "bogged down" works well for you). There are modules that try to monitor the latency of the event loop and tell you if it's taking too long (which can actually be helpful if you're doing a lot with process.nextTick(), or are trying to do something CPU-intensive in single turns of the event loop), but by and large, this kind of information is hard to gather from inside Node and turns out to be of limited use.
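The basic trick those latency-monitoring modules use can be sketched in a few lines: schedule a repeating timer and measure how far behind schedule it actually fires. (This is a hand-rolled illustration, not any particular module's API; the 100ms interval and 50ms threshold are arbitrary.)

```javascript
// Sample event-loop lag: a timer that fires late means the loop was busy
// with something else (CPU-bound work, a long nextTick queue, etc.).
const INTERVAL_MS = 100;
const THRESHOLD_MS = 50;
let expected = Date.now() + INTERVAL_MS;

const sampler = setInterval(() => {
  const lag = Date.now() - expected; // how late this tick fired
  if (lag > THRESHOLD_MS) {
    console.warn(`event loop lag: ${lag}ms`);
  }
  expected = Date.now() + INTERVAL_MS;
}, INTERVAL_MS);

sampler.unref(); // don't keep the process alive just to sample lag
```

Note what this measures: it's a proxy for how long single turns of the loop are taking, not a count of queued work – which is exactly the distinction this section is making.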
There's also DTrace, which can give you much finer-grained (or coarser-grained!) statistics about what's going on with the event loop.
Most of the time, the only time you'll encounter performance problems tied to event loop processing is when you're trying to do too much computation during a single turn of the event loop. I'd generally look pretty much everywhere else first when trying to do performance tuning.