Grand Central Dispatch (GCD): Summary, Syntax & Best Practices

Queue and A

Apple originally described Grand Central Dispatch (GCD) this way:

  1. Threading is hard
  2. Using GCD makes it simple and fun

Both statements are correct; here are some additional points:

  • GCD is not a threading library or a wrapper around threads
  • GCD uses threads, but like sunshine, the developer never suffers direct exposure
  • GCD is a concurrency library implemented via FIFO queues
  • GCD is just the marketing name for libdispatch: #include <dispatch/dispatch.h>

Submitting Blocks to Queues

The primary mechanism for using GCD is submitting blocks to queues or responding to events that pop out of queues. That’s it. There are different ways of submitting and many kinds of queues, some of them quite fancy. Ultimately, you are just scheduling tasks to be performed or performing tasks in response to events.

The magic part is that the concurrency aspect is handled for you. Thread management is automatic and tuned for system load. The usual concurrency dangers still apply, however: all UI work must be done on the main queue, and as always, check the documentation/googles to see whether specific NS or UI bits are thread-safe or not.

This post focuses on “submitting blocks to queues,” but the buyer should be aware that libdispatch has more under the hood:

- Dispatch Groups        // coordinate a group of tasks (see the sketch after this list)
- Semaphores             // traditional counting Semaphores
- Barriers               // synchronize tasks in a given concurrent queue
- Dispatch Sources       // event handling for low-level events
- Dispatch I/O           // file descriptor–based operations
- Dispatch Data Buffers  // memory-based data buffer
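
To give a quick flavor of the first item, here is a minimal dispatch group sketch; the queues and the work inside the blocks are purely illustrative:

// create a group and hand two independent tasks to a concurrent queue
dispatch_queue_t worker_queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();

dispatch_group_async(group, worker_queue, ^{
    // first independent chunk of work
});
dispatch_group_async(group, worker_queue, ^{
    // second independent chunk of work
});

// this block runs on the main queue only after both tasks above have finished
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // update the UI, etc.
});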

Creating or Getting Queues

It is worth repeating: the primary mechanism for using GCD is submitting tasks to queues.

The best way to conceptualize queues is to first realize that at the very low-level, there are only two types of queues: serial and concurrent.

Serial queues are monogamous, but uncommitted. If you give a bunch of tasks to a serial queue, it will run them one at a time, using only one thread at a time. The uncommitted aspect is that serial queues may switch to a different thread between tasks. Serial queues always wait for a task to finish before going to the next one. Thus tasks are completed in FIFO order. You can make as many serial queues as you need with dispatch_queue_create.
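
A minimal sketch of that (the label below is just an illustration):

// create a serial queue; blocks submitted to it run one at a time, in FIFO order
dispatch_queue_t serial_queue = dispatch_queue_create("com.yourcompany.SomeSerialQueue", NULL);

dispatch_async(serial_queue, ^{
    // runs first
});
dispatch_async(serial_queue, ^{
    // runs only after the first block has finished
});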

The main queue is a special serial queue. Unlike other serial queues, which are uncommitted, in that they are “dating” many threads but only one at a time, the main queue is “married” to the main thread and all tasks are performed on it. Jobs on the main queue need to play nicely with the run loop and stay small so that they don’t block the UI and other important bits. Like all serial queues, tasks are completed in FIFO order. You get it with dispatch_get_main_queue.

If serial queues are monogamous, then concurrent queues are promiscuous. They will run tasks on any available thread, or even make new threads depending on system load, and they may perform multiple tasks simultaneously on different threads. It is important that tasks submitted to a concurrent queue are thread-safe and minimize side effects. Tasks are dequeued for execution in FIFO order, but the order of completion is not guaranteed.

In Mac OS X 10.6 and iOS 4, there were only three built-in (global) concurrent queues, and you could not make your own; you could only fetch them with dispatch_get_global_queue. As of Mac OS X 10.7 and iOS 5, you can create your own with dispatch_queue_create("label", DISPATCH_QUEUE_CONCURRENT). You cannot set the priority of a concurrent queue you create yourself. In practice, it often makes more sense to use a global concurrent queue with the appropriate priority than to make your own.
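
A quick sketch of both options (the label is illustrative):

// fetch one of the built-in global concurrent queues at a given priority
dispatch_queue_t global_queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// or, on Mac OS X 10.7 / iOS 5 and later, create your own concurrent queue
dispatch_queue_t concurrent_queue = dispatch_queue_create("com.yourcompany.SomeConcurrentQueue", DISPATCH_QUEUE_CONCURRENT);

// either way, tasks may run simultaneously on different threads
dispatch_async(global_queue, ^{ /* some thread-safe work */ });
dispatch_async(concurrent_queue, ^{ /* some other thread-safe work */ });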

The primary functions used to create or get queues are summarized here:

dispatch_queue_create       // create a serial or concurrent queue
dispatch_get_main_queue     // get the one and only main queue
dispatch_get_global_queue   // get one of the global concurrent queues
dispatch_get_current_queue  // DEPRECATED

dispatch_queue_get_label    // get the label of a given queue

A quick note on dispatch_get_current_queue: it is deprecated, and it also didn’t work reliably in every case. If your implementation requires it, your implementation should be refactored. The most common use case was “run some block on whatever queue I am running on.” A refactored design should pass an explicit target queue along with the block as an argument or parameter, rather than trying to rely on the runtime to determine which queue to submit to.
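
As a hedged sketch of that refactoring (the method and parameter names here are hypothetical):

// the caller passes the queue the completion block should run on,
// so there is no need to ask "what queue am I on?" at runtime
- (void)fetchThingsWithCompletion:(dispatch_block_t)completion queue:(dispatch_queue_t)completionQueue
{
    dispatch_queue_t work_queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(work_queue, ^{
        // ... do the actual work off the caller's thread ...

        // submit the completion to the queue the caller asked for
        dispatch_async(completionQueue, completion);
    });
}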

Adding Tasks to the Queues

Once you have queues of your very own, you can make them useful by adding tasks to them.

The primary mechanisms for doing so are the following:

// Asynchronous functions
dispatch_async
dispatch_after
// Synchronous functions
dispatch_apply
dispatch_once
dispatch_sync

dispatch_async will submit a task to a queue and return immediately. dispatch_after returns immediately, but delays submitting the task until the specified time.

dispatch_sync will submit a task to a queue and return only when the task completes. dispatch_apply submits a task to a queue for multiple invocations and waits for all of them to complete. dispatch_once submits a task once and only once over the application lifetime, and returns when the block completes.

In practice, I find myself using dispatch_async, dispatch_after and dispatch_once the most.

Example Code:

// add ui_update_block to the main queue
dispatch_async(dispatch_get_main_queue(), ui_update_block);

// add check_for_updates_block to some_queue in 2 seconds
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 2 * NSEC_PER_SEC), some_queue, check_for_updates_block);

// run work_unit_block (which takes a size_t iteration index) on some_queue i times;
// dispatch_apply waits for all iterations to complete
dispatch_apply(i, some_queue, work_unit_block);

// perform the only_once_block once and only once. 
static dispatch_once_t onceToken = 0; // It is important this is static!  
// wait for completion
dispatch_once(&onceToken, only_once_block);

// add blocking_block to background_queue & wait for completion
dispatch_sync(background_queue, blocking_block);
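
The most common real-world use of dispatch_once is a lazily created shared instance; SomeClient below is a hypothetical class:

// a thread-safe, lazily created shared instance built on dispatch_once
+ (instancetype)sharedClient
{
    static SomeClient *sharedClient = nil;
    static dispatch_once_t onceToken; // static, as noted above
    dispatch_once(&onceToken, ^{
        sharedClient = [[SomeClient alloc] init];
    });
    return sharedClient;
}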

Queue Memory Management

GCD first became available in Mac OS X 10.6 and iOS 4. At that time, GCD objects (queues, semaphores, barriers, etc.) were treated like CF objects and required you to call dispatch_retain and dispatch_release according to the usual create rules.

As of Mac OS X 10.8 and iOS 6, GCD objects are managed by ARC and as such manual reference counting is explicitly disallowed.
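
A minimal before-and-after sketch, assuming the default build settings where GCD objects are ARC-managed on the newer SDKs:

// Mac OS X 10.6 / iOS 4 era: manual reference counting
dispatch_queue_t oldStyleQueue = dispatch_queue_create("com.yourcompany.OldStyleQueue", NULL);
// ... use the queue ...
dispatch_release(oldStyleQueue); // required under the old create rules

// Mac OS X 10.8 / iOS 6 and later, under ARC:
dispatch_queue_t newStyleQueue = dispatch_queue_create("com.yourcompany.NewStyleQueue", NULL);
// ... use the queue ...
// no dispatch_retain/dispatch_release here; the compiler will reject them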

Furthermore, under ARC the following caveats apply:

  1. If you are using a GCD object within blocks that are used by the GCD object, you may get retain cycles. Using __weak or explicitly destroying the object (via mechanisms such as dispatch_source_cancel) are good ways around this. As of Xcode 4.6, the static analyzer does NOT catch this. Example:

    // Create a GCD object:
    dispatch_queue_t someQueue = dispatch_queue_create("someQueue", NULL);
    // put a block on the queue, the queue retains the block.
    dispatch_async(someQueue, ^{
        // capture the GCD object inside the block,
        // the block retains the queue and BAM! retain cycle!
        const char *label = dispatch_queue_get_label(someQueue);
        NSLog(@"%s", label);
    });
    
    // You can use the typical __weak dance to work around this:
    __weak dispatch_queue_t weakQueue = someQueue;
    dispatch_async(someQueue, ^{
        __strong dispatch_queue_t strongQueue = weakQueue;
        const char *label = dispatch_queue_get_label(strongQueue);
        NSLog(@"%s", label);
    });
    
  2. Lastly, this little nugget was buried in man dispatch_data_create_map. The GCD functions dispatch_data_create_map and dispatch_data_apply create internal objects, and extra care must be taken when using them. If the parent GCD object is released, then the internal objects get blown away and bad things happen. Using __strong variables or the objc_precise_lifetime attribute on the parent dispatch_data_t can help keep the parent object alive.

    // dispatch_data_create_map returns a new GCD data object.
    // However, since we are not using it, the object is immediately
    // destroyed by ARC and our buffer is now a dangling pointer!
    dispatch_data_create_map(data, &danglingBuffer, &bufferLen);
    
    // By stashing the results in a __strong var, our buffer
    // is no longer dangerous.
    __strong dispatch_data_t newData = dispatch_data_create_map(data, &okBuffer, &bufferLen);
    

Queues In Practice

Queues, like most powerful tools, can cause bodily harm if used inappropriately. Real world usage requires some discipline. Here are some general guidelines:

  • Use of the main queue should be restricted to tasks that require the main thread and must be short to prevent locking up the UI.
  • Each created serial queue should have a purpose.
  • Each created serial queue should be named/labeled appropriately for that purpose.
  • Tasks performed on the concurrent queues must be thread-safe.

The second bullet above deserves further exploration. Because queues are lightweight, you can make lots and lots of them. It is better to have many specialized serial queues than to stuff many disconnected tasks into one or two “mega” serial/concurrent queues.

Typical “purposeful” queues look like this:

//used for importing into Core Data so we don't block the UI
dispatch_queue_create("com.yourcompany.CoreDataBackgroundQueue", NULL);

//used to prevent concurrent access to SomeFile
dispatch_queue_create("com.yourcompany.SomeFile.ReadWriteQueue", NULL);

//used to perform long calculations in the background
dispatch_queue_create("com.yourcompany.Component.BigLongCalculationQueue", NULL);

Practical queue usage typically involves nested dispatching:

dispatch_queue_t background_queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(background_queue, ^{
    // do some stuff that takes a long time here...

    // follow up with some stuff on the main queue
    dispatch_async(dispatch_get_main_queue(), ^{
        // Typically updating the UI on the main thread.
    });
});

Here we launch a long-running task on the background queue. When the task is complete, we finish up by triggering a UI update to be performed on the main queue.

Also be aware of excessively nested dispatching. It hampers readability & maintainability and should be considered a somewhat pungent code smell.
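
One way to keep the nesting in check, sketched here with hypothetical names, is to pull each stage into a method that takes a completion block:

// the nested dispatching lives in one place...
- (void)importDataWithCompletion:(dispatch_block_t)completion
{
    dispatch_async(self.importQueue, ^{
        // ... long-running import work ...

        dispatch_async(dispatch_get_main_queue(), completion);
    });
}

// ...so the call site stays flat and readable
[self importDataWithCompletion:^{
    // update the UI; we are already on the main queue here
}];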

Advanced Studies

If you have a particular interest in any of the quieter corners of GCD (dispatch groups, semaphores, barriers, etc.), let me know and I’ll write something up.

In the meantime, the usual sources of knowledge apply: the documentation available on the web and via Xcode, as well as the WWDC talks on GCD and blocks.




