Making efficient use of the libdispatch (GCD)

libdispatch efficiency tips

I suspect most developers are using libdispatch inefficiently due to the way it was presented to us when it was introduced and for many years afterward, and due to its confusing documentation and API. I realized this after reading the 'concurrency' discussion on the swift-evolution mailing-list; in particular, the messages from Pierre Habouzit (the libdispatch maintainer at Apple) are quite enlightening (you can also find many tweets from him on the subject).

My takeaways are:

  • You should have very few queues that target the global pool. If all these queues are active at once, you will get as many threads running. These queues should be seen as execution contexts in the program (gui, storage, background work, ...) that benefit from executing in parallel.

  • Go serial first, and as you find performance bottlenecks, measure why, and if concurrency helps, apply it with care, always validating under system pressure. Reuse queues by default and add more only if there's some measurable benefit. In most apps, you probably need fewer than 5 of these queues.

  • Queues that target other queues are fine, these are the ones which scale.

  • Don't use dispatch_get_global_queue(). It doesn't play nicely with qos/priorities and can lead to thread explosion. Run your code on one of your execution contexts instead.

  • dispatch_async() is wasteful if the dispatched block is small (< 1ms), as it will most likely require a new thread due to libdispatch's overcommit behavior. Prefer locking to protect shared state (rather than switching the execution context).

  • If running concurrently, your work items must not contend, or your performance sinks dramatically. Contention takes many forms. Locks are obvious, but it really means any use of shared resources that can become a bottleneck: IPC/daemons, malloc (lock), shared memory, ...

  • Some classes/libraries are better designed to reuse the execution context of their callers/clients. That means using traditional locking for thread-safety. os_unfair_lock is usually the fastest lock on the system (nicer with priorities, fewer context switches).

  • You don't need to be async all the way to avoid thread explosion. Using a limited number of bottom queues and not using dispatch_get_global_queue() is a better fix.

  • The complexity (and bugs) of heavy async/callback designs also cannot be ignored. Synchronous code remains much easier to read, write and maintain.

  • Concurrent queues are not as optimized as serial queues. Use them if you measure a performance improvement, otherwise it's likely premature optimization.

  • libdispatch is efficient but not magic. Resources are not infinite. You cannot ignore the reality of the underlying operating system and hardware you're running on. Not all code is prone to parallelization.

  • Measure the real-world performance of your product to make sure you are actually making it faster and not slower. Be very careful with micro benchmarks (they hide cache effects and keep thread pools hot), you should always have a macro benchmark to validate what you're doing.

Look at all the dispatch_async() calls in your code and ask yourself whether the work you're dispatching is worth switching to a different execution context. Most of the time, locking is probably the better choice.

Once you start to have well-defined queues (execution contexts) and to reuse them, you may run into deadlocks if you dispatch_sync() to them. This usually happens when queues are used for thread-safety; again, the solution is to lock instead and to use dispatch_async() only when you need to switch to another execution context.

I've personally seen massive performance improvements by following these recommendations (on a high-throughput program). It's a new way of doing things but it's worth it.




snej commented Apr 27, 2018

dispatch_async() is wasteful if the dispatched block is small

Yes. Another reason, besides the one you gave, is that it always has to copy the block to the heap. That involves calling malloc (and incurring a future call to free), which is way, way more expensive than taking a lock or calling dispatch_sync.

The concurrency pattern you're outlining here — a few queues with well-defined purposes that are invoked through dispatch_async — is very much like the Actor model. I've had very good results in my current project by building some base (C++) classes that implement this model on top of libdispatch. Each Actor object owns a dispatch queue. The methods that do the real work are all private, but each one has a corresponding public method that simply calls dispatch_async to delegate to the private method.



MadCoder commented Apr 27, 2018

Each Actor object owns a dispatch queue. The methods that do the real work are all private, but each one has a corresponding public method that simply calls dispatch_async to delegate to the private method.

If you do that you need to either have very few such actors, or target their internal queues to a shared one; otherwise you create too much concurrency, which goes exactly against what @tclementdev explains above ;)
