At the simplest level, composable event loops combine the cheap synchronization of event loops with the composability of preemptive threads.
In a traditional OS, asynchronous tasks are generally accomplished by multiple threads or processes. Each thread has its own stack and is preempted periodically to switch between active threads. Low-level synchronization primitives or synchronized queues are often used to communicate between running threads.
Example of a multithreaded teapot:
Benefits:
- Complicated logic can be executed linearly without worrying about delays
- Multiple threads can be composed with minimal impact on the timing of existing threads
- Supports multiple cores
- Parallelism is transparent to the user
In the classic event loop model, a single process executes short-lived events sequentially. Only a single stack is used and no synchronization is needed between events. The user is responsible for ensuring that events return in a timely manner.
Example of an event loop based teapot:
Benefits:
- Synchronization is elided
- Reduced memory consumption with only a single stack
- No cost for context switching and synchronization
- Parallelism is explicit and controlled
A composable event loop system is a model for structuring multitasking programs such that each module is contained in a single event loop. Multiple modules are separated into distinct parallel threads and can communicate through message passing in the form of registering events across modules.
Example of a teapot using composable event loops:
At minimum, composable event loops need three primitives:
- Modular event loops - dispatching of a module's queued events
- Multithreading - isolation between multiple modules' threads of execution
- Synchronized event registration - message passing between modules in the form of enqueuing events
An asynchronous teapot based on the simple events library:
// An asynchronous tea pot built on composable event loops
struct teapot {
// Internal event loop
struct equeue *equeue;
// Other teapot internals
struct heater heater;
struct analog_sensor sensor;
void (*cb)(void *);
void *data;
};
// Publicly-accessible boil function
void teapot_boil(struct teapot *teapot, void (*cb)(void *), void *data);
// The publicly-accessible boil function can be called from other threads.
// Registering with the internal event loop synchronizes the tea pot.
void teapot_boil(struct teapot *teapot, void (*cb)(void *), void *data) {
teapot->cb = cb;
teapot->data = data;
event_call(teapot->equeue, teapot_begin_boil, teapot);
}
// Event registered from the boil method in case the heating element is not thread-safe.
static void teapot_begin_boil(struct teapot *teapot) {
heater_on(&teapot->heater);
sensor_read(&teapot->sensor, teapot_read_callback, teapot);
}
// Event for handling read callbacks. This callback is issued from the analog sensor,
// but runs on the teapot's event loop. The sensor may or may not use an event loop.
static void teapot_read_callback(struct teapot *teapot) {
event_call(teapot->equeue, teapot_read, teapot);
}
// Synchronized read function
static void teapot_read(struct teapot *teapot) {
// Fetch the sensor's last reading (sensor_value is an assumed accessor)
int data = sensor_value(&teapot->sensor);
// Hopefully this log call will be pretty quick. However, if the log call
// takes a while, we only have to worry about it blocking our teapot.
log("got: %d\n", data);
if (data > 110) {
heater_off(&teapot->heater);
teapot->cb(teapot->data);
} else {
sensor_read(&teapot->sensor, teapot_read_callback, teapot);
}
}
The core benefit of composable event loops is that synchronization concerns align with the separation of concerns between individual modules. When creating a module, a developer only needs to worry about the module's own internal timing; synchronization with other modules is handled at the event-registration boundary.
Benefits:
- Modules are internally synchronized, removing the need to synchronize individual components or worry about the interactions between internal threads
- Multiple modules can be composed with minimal impact to the timing of existing modules
- The quantity of stacks is statically bounded by the number of modules
- Supports multiple cores, with parallelism bounded by the number of modules
- Easily integrated with interrupts by treating them as another module boundary
- Easily integrated with traditional multithreaded environments
- Parallelism is transparent across module boundaries but controlled inside modules