Why is Netty necessary?
Deficiencies of raw NIO:
- Complicated APIs.
- Too low-level to be productive for application development.
Whiteboard example of buffer index invariants:
java.nio.Buffer: 0 <= mark <= position <= limit <= capacity
org.jboss.netty.buffer.ChannelBuffer: 0 <= readerIndex <= writerIndex <= capacity
New hotness: io.netty.buffer
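The single-cursor design of java.nio.Buffer is why reads and writes share one position and a flip() is needed between them, while Netty's buffer keeps separate reader and writer indices. A quick JDK-only illustration of the java.nio invariant (plain ByteBuffer, no Netty required):

```java
import java.nio.ByteBuffer;

public class BufferIndices {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);    // capacity = 16
        buf.put((byte) 1).put((byte) 2).put((byte) 3);
        // After writing: position = 3, limit = capacity = 16
        System.out.println(buf.position() + " " + buf.limit() + " " + buf.capacity());
        // To read back what was written, the single cursor must be flipped:
        buf.flip();                                   // position = 0, limit = 3
        System.out.println(buf.position() + " " + buf.limit() + " " + buf.capacity());
    }
}
```

With Netty's two-index scheme, reading via readerIndex never disturbs writerIndex, so no flip() is needed.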
What does Netty provide?
"Netty is a NIO client-server framework" which "greatly simplifies network programming". It is aimed at highly concurrent applications: predominantly asynchronous APIs built around an event loop.
Netty is oriented around the concept of a Channel. Channels provide an asynchronous programming interface around network sockets. (They're really more general than that: a Channel can represent any I/O component.)
Common currencies of Netty: ChannelBuffer, ChannelFuture
Describe NioClientSocketChannelFactory as a means of pointing out how general Netty really is. It creates client-side, NIO-based SocketChannels; each part of that name is specific to a technology:
- NIO, meaning Java async I/O
- SocketChannel, meaning TCP/IP sockets
This class also happens to be where the two types of threads in Netty reside.
Two thread pools:
- Boss threads open sockets and then pass them off to worker threads. Servers have one boss thread per listening socket. Clients have one boss thread, period.
- Worker threads perform all I/O. They are not general-purpose threads that can run blocking application code. This is the origin story of the "don't block a Finagle thread" rule.
Boss thread uses a round-robin strategy for picking worker threads.
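The boss/worker handoff can be sketched with plain java.util.concurrent (a hypothetical illustration, not Netty's actual classes): a boss picks the next worker in round-robin order, the way Netty's boss thread assigns accepted channels to worker threads.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: each worker is a single-threaded executor that owns its channels,
// so all I/O for a given channel happens on one thread.
public class RoundRobinWorkers {
    private final ExecutorService[] workers;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinWorkers(int n) {
        workers = new ExecutorService[n];
        for (int i = 0; i < n; i++) workers[i] = Executors.newSingleThreadExecutor();
    }

    // Round-robin choice of the worker that will own a newly opened channel.
    int nextIndex() {
        return Math.floorMod(next.getAndIncrement(), workers.length);
    }

    ExecutorService nextWorker() {
        return workers[nextIndex()];
    }

    void shutdown() {
        for (ExecutorService w : workers) w.shutdown();
    }

    public static void main(String[] args) {
        RoundRobinWorkers pool = new RoundRobinWorkers(3);
        for (int i = 0; i < 4; i++) System.out.print(pool.nextIndex() + " ");
        pool.shutdown();
    }
}
```

Pinning a channel to one worker is also what makes "don't block a worker thread" so important: a blocked worker stalls every channel it owns.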
I/O in worker threads is achieved through NIO Selectors. Selectors multiplex
over channels, allowing a single thread to handle multiple channels. Selectors
maintain a set of the channels that have been registered with them. Calling
select() blocks until at least one registered channel has been detected as
ready for I/O, then returns; the ready channels can be retrieved as a set
via selectedKeys().
This effectively provides a mechanism by which program flow can be blocked until I/O operations are ready to proceed.
"Select" refers to the POSIX select() syscall, which examines the status of file descriptors for open I/O channels.
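The select() mechanism can be demonstrated in a few lines of JDK-only code using an in-process Pipe, so no network setup is needed (a minimal sketch; a real server would register many socket channels with one Selector):

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();

        // Channels must be non-blocking to be registered with a Selector.
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        // Simulate I/O arriving on the channel.
        pipe.sink().write(ByteBuffer.wrap(new byte[] {42}));

        // select() blocks until at least one registered channel is ready.
        int ready = selector.select();
        System.out.println("ready channels: " + ready);

        selector.close();
        pipe.sink().close();
        pipe.source().close();
    }
}
```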
Netty replaces java.nio.channels.Selector with its own handling because of a JDK epoll bug that can cause select() to return immediately with zero ready channels, spinning the CPU.
Threads are acquired from the Executor instance(s) passed to the ChannelFactory constructor. They are acquired lazily and then released "when there's nothing left to process." Shutting a service down gracefully is a bit subtle: you need to explicitly close all channels created by a given factory and then call releaseExternalResources() on the factory.
Bridging the gap between Netty and Finagle
In the same way that Netty raises the level of abstraction above NIO, Finagle does so for Netty. Finagle layers the concept of "server as a function" atop Netty.
Currently, Finagle's thread pool == Netty's worker pool. A fixed-size thread pool lets you exploit thread locality.
When Finagle dispatches a request, it creates a Promise with Locals carrying request-specific data. When an I/O task completes (via a Selector on a Netty worker thread), the Promise is retrieved and satisfied with that same thread-local context in place.
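The idea behind Finagle's Local can be sketched in plain Java (a hypothetical illustration using ThreadLocal and CompletableFuture, not Finagle's actual implementation): capture the request-scoped context when the promise's completion is created, and restore it on whichever thread eventually satisfies the promise.

```java
import java.util.concurrent.CompletableFuture;

public class LocalContext {
    // Stand-in for a Finagle Local: request-scoped, thread-local data.
    static final ThreadLocal<String> requestId = new ThreadLocal<>();

    // Wrap a task so it runs with the context captured at creation time.
    static Runnable withContext(Runnable task) {
        String captured = requestId.get();
        return () -> {
            String saved = requestId.get();
            requestId.set(captured);          // restore the request's context
            try {
                task.run();
            } finally {
                requestId.set(saved);         // put back the worker's own state
            }
        };
    }

    public static void main(String[] args) throws Exception {
        requestId.set("req-7");               // set during request dispatch
        CompletableFuture<String> promise = new CompletableFuture<>();
        Runnable fulfill = withContext(() -> promise.complete(requestId.get()));
        requestId.remove();

        Thread worker = new Thread(fulfill);  // a "worker thread" completes it
        worker.start();
        worker.join();
        System.out.println(promise.get());
    }
}
```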
Finagle uses Netty for framing the byte stream. Older protocols (e.g. finagle-thrift, finagle-http) implemented framing and protocol encoding/decoding within the Netty pipeline. More recent projects (finagle-mysql, finagle-mux) keep framing at the Netty layer but encode and decode in Finagle's dispatch layer.
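"Framing" here means splitting the raw byte stream into whole messages before decoding. A minimal JDK-only sketch of a length-prefixed framer (similar in spirit to Netty's LengthFieldBasedFrameDecoder; the 4-byte big-endian length prefix is an assumption for illustration):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class LengthPrefixFramer {
    // Extract as many complete frames as the buffer currently holds;
    // any partial frame stays in the buffer for the next read.
    static List<byte[]> decode(ByteBuffer buf) {
        List<byte[]> frames = new ArrayList<>();
        buf.flip();
        while (buf.remaining() >= 4) {
            buf.mark();
            int len = buf.getInt();
            if (buf.remaining() < len) {
                buf.reset();                  // not all bytes arrived yet
                break;
            }
            byte[] frame = new byte[len];
            buf.get(frame);
            frames.add(frame);
        }
        buf.compact();
        return frames;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.putInt(3).put(new byte[] {1, 2, 3});  // one complete frame
        buf.putInt(5).put(new byte[] {9});        // second frame only partial
        List<byte[]> frames = decode(buf);
        System.out.println(frames.size());        // only the complete frame decodes
    }
}
```

Keeping only this step in the Netty pipeline, and doing protocol decode/encode in the dispatch layer, is the split the newer Finagle protocols use.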