Note on Async IO Programming



Design Pattern

  • Proactor
  • Proactor vs Reactor
  • The Reactor pattern involves synchronous I/O, whereas the Proactor pattern involves asynchronous I/O. In Reactor, the event demultiplexor waits for events that indicate when a file descriptor or socket is ready for a read or write operation. The demultiplexor passes this event to the appropriate handler, which is responsible for performing the actual read or write.
  • In the Proactor pattern, by contrast, the handler—or the event demultiplexor on behalf of the handler—initiates asynchronous read and write operations. The I/O operation itself is performed by the operating system (OS). The parameters passed to the OS include the addresses of user-defined data buffers from which the OS gets data to write, or to which the OS puts data read. The event demultiplexor waits for events that indicate the completion of the I/O operation, and forwards those events to the appropriate handlers. For example, on Windows a handler could initiate async I/O (overlapped in Microsoft terminology) operations, and the event demultiplexor could wait for IOCompletion events [1]. The implementation of this classic asynchronous pattern is based on an asynchronous OS-level API, and we will call this implementation the "system-level" or "true" async, because the application fully relies on the OS to execute actual I/O.
  • An example will help you understand the difference between Reactor and Proactor. We will focus on the read operation here, as the write implementation is similar. Here's a read in Reactor (a minimal code sketch follows the steps):
    • An event handler declares interest in I/O events that indicate readiness for read on a particular socket
    • The event demultiplexor waits for events
    • An event comes in and wakes up the demultiplexor, and the demultiplexor calls the appropriate handler
    • The event handler performs the actual read operation, handles the data read, declares renewed interest in I/O events, and returns control to the dispatcher
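
A minimal sketch of the Reactor steps above, in Python, using the standard selectors module (a readiness-based demultiplexor built on epoll/kqueue/select). The HTTP request and handler are illustrative, not from the original note:

    import selectors
    import socket

    sel = selectors.DefaultSelector()          # the event demultiplexor

    def handle_read(sock):
        # Step 4: the handler performs the actual read and processes the
        # data; interest in read events stays registered, so control simply
        # returns to the dispatcher loop below.
        data = sock.recv(4096)
        if data:
            print("received", len(data), "bytes")
        else:
            sel.unregister(sock)
            sock.close()

    # Step 1: declare interest in "ready for read" events on a socket.
    sock = socket.create_connection(("example.com", 80))
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    sock.setblocking(False)
    sel.register(sock, selectors.EVENT_READ, handle_read)

    # Steps 2-3: wait for events, wake up, dispatch to the handler.
    while sel.get_map():
        for key, _ in sel.select():
            key.data(key.fileobj)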
  • By comparison, here is a read operation in Proactor (true async); a sketch follows this list as well:
    • A handler initiates an asynchronous read operation (note: the OS must support asynchronous I/O). In this case, the handler does not care about I/O readiness events, but instead registers interest in receiving completion events.
    • The event demultiplexor waits until the operation is completed
    • While the event demultiplexor waits, the OS executes the read operation in a parallel kernel thread, puts data into a user-defined buffer, and notifies the event demultiplexor that the read is complete
    • The event demultiplexor calls the appropriate handler
    • The event handler handles the data from the user-defined buffer, starts a new asynchronous operation, and returns control to the event demultiplexor
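
The same read in completion-oriented style, sketched with Python's asyncio; since Python 3.8 the default asyncio event loop on Windows is a Proactor built on IOCP. The host, port, and request are placeholders:

    import asyncio

    async def fetch():
        # Step 1: initiate operations and register interest in completion,
        # not readiness.
        reader, writer = await asyncio.open_connection("example.com", 80)
        writer.write(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        await writer.drain()

        # Steps 2-5: the event loop waits; the OS reads into a buffer and
        # the coroutine resumes only once the read has completed.
        data = await reader.read(4096)
        print("received", len(data), "bytes")

        writer.close()
        await writer.wait_closed()

    asyncio.run(fetch())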

PyParallel

I think this is one of the best summaries of how to implement a high-performance async I/O mechanism on both Windows (IOCP) and Unix/Linux/OS X (epoll/kqueue). He is a Unix guy, yet he surprisingly implemented PyParallel on Windows using IOCP. The slides are very instructive for understanding the differences between the platforms, and many pages are devoted to explaining IOCP. -CSLIM

https://speakerdeck.com/trent/pyparallel-how-we-removed-the-gil-and-exploited-all-cores

* The slide deck is pretty big; pages 47 to 71 were the most interesting part.
* Pages 23-45: Some historical background and the fundamental differences between Unix/Linux/OS X and Windows
* Pages 47-71: Mostly about Windows IOCP
* Pages 72-: PyParallel details

Async IO model

  • Windows: Completion oriented
  • Linux/Unix/POSIX: Readiness oriented
    • I/O Multiplexing Over the Years
      • select()
      • poll()
      • /dev/poll
      • epoll
      • kqueue
    • “A Scalable and Explicit Event Delivery Mechanism for UNIX”
      • FreeBSD: kqueue
      • Linux: epoll
      • Solaris: /dev/poll
      • Separate declaration of interest from inquiry about readiness (see the sketch after this list)
      • Kernel work when checking readiness is now O(1)
      • epoll and kqueue quickly became the preferred methods for I/O multiplexing
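
The separation of interest declaration from readiness inquiry is visible directly in the epoll API. A Linux-only sketch using Python's select module (the connection details are illustrative):

    import select
    import socket

    sock = socket.create_connection(("example.com", 80))
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    sock.setblocking(False)

    # Declaration of interest: done once; the kernel maintains the
    # interest list (kqueue plays the same role on the BSDs).
    ep = select.epoll()
    ep.register(sock.fileno(), select.EPOLLIN)

    # Inquiry about readiness: each call returns only the descriptors
    # that are ready, so the per-check kernel work no longer scales with
    # the number of registered fds the way select()/poll() do.
    for fd, mask in ep.poll(timeout=5.0):
        if mask & select.EPOLLIN:
            print(sock.recv(4096))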
  • Protocols: completion oriented (a data_received() callback fires after the data has already been read)
  • The Event Loop
    • Twisted, Tornado, Tulip, libevent, libuv, ZeroMQ, node.js
    • All single-threaded, all use non-blocking sockets
    • Event loop ties everything together
    • It’s literally an endless loop that runs until program termination
    • Calls an I/O multiplexing method upon each “run” of the loop
    • Enumerate results and determine what needs to be done (a skeleton of such a loop follows this list)
      • Data ready for reading without blocking? Great! read() it, then invoke the relevant protocol.data_received()
      • Data can be written without blocking? Great! Write it!
      • Nothing to do? Fine, skip to the next file descriptor.
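
Pulled together, the loop these frameworks share looks roughly like this skeleton; protocol.data_received() follows the naming above, while the protocol object itself and its write_ready() method are hypothetical:

    import selectors

    sel = selectors.DefaultSelector()

    def run_forever():
        # Literally an endless loop that runs until program termination.
        while True:
            # One I/O multiplexing call per "run" of the loop.
            for key, mask in sel.select():
                protocol, sock = key.data, key.fileobj
                if mask & selectors.EVENT_READ:
                    # Readable without blocking? read() it, then hand the
                    # bytes to the protocol.
                    data = sock.recv(65536)
                    if data:
                        protocol.data_received(data)
                    else:
                        sel.unregister(sock)
                        sock.close()
                if mask & selectors.EVENT_WRITE:
                    # Writable without blocking? Let the protocol flush its
                    # pending output (hypothetical method).
                    protocol.write_ready(sock)
            # Nothing ready? sel.select() simply blocks until something is.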
  • Intro
    • The Windows equivalent of kernel event notification mechanisms like kqueue or (e)poll is IOCP. libuv enforces an asynchronous, event-driven style of programming.
  • Docs
  • Examples