
Last active July 20, 2024 04:40

Thread Pools

Thread pools on the JVM should usually be divided into the following three categories:

  1. CPU-bound
  2. Blocking IO
  3. Non-blocking IO polling

Each of these categories has a different optimal configuration and usage pattern.

For CPU-bound tasks, you want a bounded thread pool which is pre-allocated and fixed to exactly the number of CPUs. The only work you will be doing on this pool will be CPU-bound computation, and so there is no sense in exceeding the number of CPUs unless you happen to have a really particular workflow that is amenable to hyperthreading (in which case you could go with double the number of CPUs). Note that the old wisdom of "number of CPUs + 1" comes from mixed-mode thread pools where CPU-bound and IO-bound tasks were merged. We won't be doing that.
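As a minimal sketch using the plain Java Executors API (the class name and the summation task are illustrative), a CPU-bound pool sized to exactly the number of available processors looks like this:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CpuPool {
    public static void main(String[] args) throws InterruptedException {
        // Size the compute pool to exactly the number of available CPUs
        int cpus = Runtime.getRuntime().availableProcessors();
        ExecutorService cpuPool = Executors.newFixedThreadPool(cpus);

        // CPU-bound work only: no blocking calls belong on this pool
        cpuPool.submit(() -> {
            long sum = 0;
            for (long i = 0; i < 1_000_000; i++) sum += i;
            System.out.println(sum);
        });

        cpuPool.shutdown();
        cpuPool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Doubling `cpus` would be the hyperthreading-friendly variant mentioned above; the default should remain exactly the processor count.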

The problem with a fixed thread pool is that any blocking IO operation (well, any blocking operation at all) will eat a thread, which is an extremely finite resource. Thus, we want to avoid blocking at all costs on the CPU-bound pool. Unfortunately, this isn't always possible (e.g. when being forced to use a blocking IO library). When this is the case, you should always push your blocking operations (IO or otherwise) over to a separate thread pool. This separate thread pool should be caching and unbounded with no pre-allocated size. To be clear, this is a very dangerous type of thread pool. It isn't going to prevent you from just allocating more and more threads as the others block, which is a very dangerous state of affairs. You need to make sure that any data flow which results in running actions on this pool is externally bounded, meaning that you have semantically higher-level checks in place to ensure that only a fixed number of blocking actions may be outstanding at any point in time (this is often done with a non-blocking bounded queue).
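One way to sketch the "externally bounded" discipline described above is to pair an unbounded cached pool with a semaphore that caps outstanding blocking operations (the limit of 10 and all names here are arbitrary illustrations, not a prescribed design):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

public class BlockingPool {
    // Unbounded, caching pool for blocking calls; dangerous without external bounds
    private static final ExecutorService blockingPool = Executors.newCachedThreadPool();

    // External bound: at most 10 blocking operations outstanding at any time
    private static final Semaphore permits = new Semaphore(10);

    static Future<?> submitBlocking(Runnable task) throws InterruptedException {
        permits.acquire();  // apply the higher-level bound before entering the pool
        return blockingPool.submit(() -> {
            try {
                task.run();
            } finally {
                permits.release();
            }
        });
    }

    public static void main(String[] args) throws Exception {
        Future<?> f = submitBlocking(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
            System.out.println("blocking task done");
        });
        f.get();
        blockingPool.shutdown();
    }
}
```

A non-blocking bounded queue in front of the pool, as the text suggests, serves the same purpose; the semaphore is just the smallest possible illustration of the idea.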

The final category of useful threads (assuming you're not a Swing/SWT application) is asynchronous IO polls. These threads basically just sit there asking the kernel whether or not there is a new outstanding async IO notification, and forward that notification on to the rest of the application. You want to handle this with a very small number of fixed, pre-allocated threads. Many applications handle this task with just a single thread! These threads should be given the maximum priority, since the application latency will be bounded around their scheduling. You need to be careful though to never do any work whatsoever on this thread pool! Never ever ever. The moment you receive an async notification, you should be immediately shifting back to the CPU pool. Every nanosecond you spend on the async IO thread(s) is added latency on your application. For this reason, some applications may find slightly better performance by making their async IO pool 2 or 4 threads in size, rather than the conventional 1.
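The polling pool can be sketched as a single pre-allocated, maximum-priority thread whose only job is to hand notifications straight back to the CPU pool (the event payload here is simulated; a real poller would be calling into an epoll/kqueue-style selector):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;

public class PollingPool {
    public static void main(String[] args) throws Exception {
        // One max-priority thread dedicated to polling for async IO events
        ThreadFactory factory = r -> {
            Thread t = new Thread(r, "async-io-poll");
            t.setPriority(Thread.MAX_PRIORITY);  // application latency is bounded by its scheduling
            t.setDaemon(true);
            return t;
        };
        ExecutorService pollPool = Executors.newFixedThreadPool(1, factory);
        ExecutorService cpuPool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        Future<?> polled = pollPool.submit(() -> {
            // Pretend the kernel just reported a completed IO event...
            String event = "bytes-ready";
            // ...and immediately shift the real work back to the CPU pool
            cpuPool.submit(() -> System.out.println("handled " + event));
        });
        polled.get();

        cpuPool.shutdown();
        cpuPool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```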

Global Thread Pools

I've seen a lot of advice floating around about not using global thread pools. This advice is rooted in the fact that global thread pools can be accessed by arbitrary code (often library code) and you cannot (easily) ensure that this code is using the thread pool appropriately. How much of a concern this is for you depends a lot on your classpath. Global thread pools are pretty darn convenient, but by the same token, it also isn't all that hard to maintain your own application-internal global pools. So… it doesn't hurt.

On that note, view with extreme suspicion any framework or library which either a) makes it difficult to configure the thread pool, or b) just straight-up defaults to a pool that you cannot control.

Either way, you're almost always going to have some sort of singleton object somewhere in your application which just has these three pools, pre-configured for use. If you subscribe to the "implicit ExecutionContext pattern", then you should make the CPU pool the implicit one, while the others must be explicitly selected.
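Such a singleton might look like the following sketch (the `Pools` name and field layout are illustrative; in Scala this would typically be an `object` exposing `ExecutionContext`s instead):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// One application-internal singleton exposing the three pre-configured pools
public final class Pools {
    private Pools() {}

    public static final ExecutorService CPU =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    public static final ExecutorService BLOCKING =
        Executors.newCachedThreadPool();  // remember: bound its usage externally!

    public static final ExecutorService POLLING =
        Executors.newFixedThreadPool(1, r -> {
            Thread t = new Thread(r, "io-poll");
            t.setPriority(Thread.MAX_PRIORITY);
            return t;
        });

    public static void main(String[] args) throws Exception {
        Future<String> f = CPU.submit(() -> "computed on the CPU pool");
        System.out.println(f.get());
        CPU.shutdown();
        BLOCKING.shutdown();
        POLLING.shutdownNow();
    }
}
```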


jumarko commented Jul 22, 2021

This is such a gem! Thanks a ton, @djspiewak for putting the information together and responding to the comments patiently and with great care.

I had the exact same question in mind about "unbounded thread pool with bounded queue" vs "bounded thread pool" as was already asked.
My knowledge of the Java Executors API is a bit rusty so I was looking at this:
Could we perhaps achieve the "propagation of hitting the limits" via RejectedExecutionHandler?
Or the problem is that it would still be too low-level and handled in an inappropriate place?
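For what it's worth, the low-level mechanism the question refers to would look roughly like this: a bounded pool over a bounded queue, where the RejectedExecutionHandler (CallerRunsPolicy here) pushes overflow back onto the submitting thread. All sizes are arbitrary illustrations, and this is exactly the kind of low-level backpressure the question asks about, not a recommendation over higher-level semantic bounds:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded pool (2 threads) with a bounded queue (capacity 2).
        // CallerRunsPolicy makes overflow run on the submitting thread,
        // which slows the producer down — one way "hitting the limit" propagates.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 2, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(2),
            new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 6; i++) {
            pool.execute(() -> {
                try { Thread.sleep(50); } catch (InterruptedException e) { }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("all tasks finished");
    }
}
```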

Now, in my code, I like to limit the number of threads in a thread pool handling blocking IO operations (like HTTP connections).
I agree that having top-level resource control is great, but I think it can still be useful to combine it with lower-level resource control (imposed, e.g., by a bounded thread pool handling at most 10 concurrent HTTP requests to a particular HTTP API).


sshark commented Jul 23, 2021

@djspiewak Thanks for elucidating this complex subject for us. I have questions related to this paragraph,

very aggressively blocks in native code due to the fact that it implements its own OS-specific interfaces to asynchronous IO layers (such as epoll and io_uring). Even without third-party frameworks though, examples abound where native blocking is unavoidable. new URL(" is one example, since it delegates to the native OS DNS client, which in turn is blocking on all major

As you mentioned, for the case of asynchronous file IO, we have support from the OS's epoll and io_uring. Why is it not doable for network IO in new URL("? Is it because such functions are not available in the OS? How do R2DBC drivers make reactive remote database calls?


anx21 commented Jul 24, 2021

In the case of blocking IO, is Cats Effect 3 more efficient than Loom because "work stealing" is possible with a scheduler-per-carrier-thread? Are there any other features of Cats Effect 3 that make it more efficient in blocking IO?


didibus commented Nov 4, 2021

This is exactly my point: it won't. URL is in the JVM and it will not "just work" with Loom. Ditto with InetAddress (for the same reason). FileInputStream is also in the JVM and it won't "just work" with Loom, at least not on macOS or older versions of the Linux kernel. And it's not like these are uncommon cases.

To be honest, I wouldn't have even thought of moving URL or InetAddress use into my blocking IO pool? Do you go to that extent of making sure all blocking IO runs in an unbounded IO pool? Normally I just block the thread on those, so I don't really see where Loom would be inferior to the status quo on that I guess.


Hi @djspiewak, Thanks for the excellent article about ThreadPools.

I asked a question regarding the ForkJoinPool on Stack Overflow about the blocking {} block. I still didn't get a satisfying answer.

Should you use blocking {} combined with a ForkJoinPool for blocking operations, like API requests and database calls, or instead use a separate unbounded cached pool, as you mentioned?

Using the ForkJoinPool with blocking {} has the advantage that IO results can be processed on the same thread, but almost every framework requires/builds on a separate thread pool.
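For context on the mechanism behind Scala's blocking {}: in plain Java the analogous facility is ForkJoinPool.managedBlock, which tells the pool a task is about to block so it can spawn a compensating worker to preserve parallelism. A minimal sketch (the sleep stands in for a real blocking IO call):

```java
import java.util.concurrent.ForkJoinPool;

public class ManagedBlockDemo {
    public static void main(String[] args) {
        String result = ForkJoinPool.commonPool().submit(() -> {
            // managedBlock signals the pool that this task is about to block,
            // so it may add a compensating worker to keep parallelism up
            ForkJoinPool.managedBlock(new ForkJoinPool.ManagedBlocker() {
                private boolean done = false;

                public boolean block() throws InterruptedException {
                    Thread.sleep(100);  // stand-in for a blocking IO call
                    done = true;
                    return true;
                }

                public boolean isReleasable() { return done; }
            });
            return "io result";
        }).join();
        System.out.println(result);
    }
}
```

Note this compensation is exactly the thread-growth behavior the article warns about: it keeps the compute threads busy, but without an external bound it can still allocate threads without limit.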

What is your take on this? I would appreciate it!


Hi @djspiewak I have a question regarding why the CPU-bound pool needs to be sized to exactly the number of CPUs.

My initial understanding of this reasoning is that it avoids threads being switched off the CPUs, but wouldn't that still happen anyway, given that we also have IO threads, the event loop, and even other processes?
