Simple, Sweet Locking Recipes

Any app of reasonable complexity needs to synchronize and/or block thread execution until certain events have occurred.

A lot of existing info on the internets references old or incorrect ways of locking and synchronizing. At Rocket we have standardized almost completely on Grand Central Dispatch (GCD) for our synchronization and locking. This consistent approach keeps things simple and easy to reason about. Here are several specific recipes for problems we recently worked on.

Recipe #1

This recipe handles the situation where you need to wait for two asynchronous operations to complete before executing a block of code.

We recently had this scenario for a Watch app we were building. On a scheduled basis we wanted to update information (the applicationContext) on the watch. In order to update the application context we needed to call two different server APIs, combine their responses and then update the application context.

To accomplish this we use dispatch_groups.

let fetchStepsAndFriends = dispatch_group_create()

// tell our dispatch group that it will need to wait
dispatch_group_enter(fetchStepsAndFriends)
FriendServices.sharedInstance.getMyFriends { (success, error) -> () in
   // signal that we are done
   dispatch_group_leave(fetchStepsAndFriends)
}
            
dispatch_group_enter(fetchStepsAndFriends)
StepsServices.sharedInstance.getUserSteps { (success, error) -> () in
    // signal that we are done
    dispatch_group_leave(fetchStepsAndFriends)
}
            
// non-blocking: the block below runs only after both API calls
// have finished
dispatch_group_notify(fetchStepsAndFriends, dispatch_get_main_queue(), { () -> () in
    self.updateApplicationContext()
})

Important: dispatch_group_notify is non-blocking; the current thread continues to execute code. Only after both dispatch_group_leave calls have been made will the dispatch_group_notify block be executed. If, however, you want to block the current thread until both dispatch_group_leave calls have been made, use dispatch_group_wait instead.
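Here is a minimal sketch of the blocking dispatch_group_wait variant. The two service calls are simulated with dispatch_async, and the response values are stand-ins, not real API data:

```swift
import Foundation

let group = dispatch_group_create()
let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)

var friends: [String]?
var steps: Int?

// simulate the first API call on a background queue
dispatch_group_enter(group)
dispatch_async(queue) {
    friends = ["alice", "bob"]   // stand-in for the network response
    dispatch_group_leave(group)
}

// simulate the second API call
dispatch_group_enter(group)
dispatch_async(queue) {
    steps = 7500                 // stand-in for the network response
    dispatch_group_leave(group)
}

// blocks the current thread until both leaves have happened
dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
// both values are guaranteed to be set at this point
```

Because dispatch_group_wait blocks, only call it from a thread you can afford to stall (a background worker, not the main thread).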

Recipe #2

Sometimes you need the ability to lock access to a block of code and the locking and unlocking needs to happen across threads.

A recent example of ours was that we wanted to ensure that there is only a single running instance of an API call. In this case we want to lock access to other calls right before we kick off our network call (which probably is from the main thread) and then unlock after the call returns which will be on a background thread.

To accomplish this we use dispatch_semaphores. Here is an example of ensuring that only a single instance of an API call is running from the client at a time.

struct AudioServices {

    let audioFetchSemaphore = dispatch_semaphore_create(1)

    func fetchAudioList(completion: (podcasts: [Podcast]?, error: NSError?) -> ()) {
        // dispatch_semaphore_wait decrements our semaphore and, if the
        // resulting value is negative, blocks the current thread until a
        // matching signal brings the value back up
        dispatch_semaphore_wait(audioFetchSemaphore, DISPATCH_TIME_FOREVER)

        AudioServices.fetchList() {
            // signal increments our counter, unblocking any waiting thread
            dispatch_semaphore_signal(self.audioFetchSemaphore)
        }
    }
}

Every call to dispatch_semaphore_wait decrements the semaphore's value; if the result is negative, the current thread blocks until another thread calls dispatch_semaphore_signal, which increments it.

This is why we initialize the semaphore to 1. If we started at 0, our initial call to dispatch_semaphore_wait would take the value to -1 and our thread would be blocked forever.
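The counting behavior can be seen deterministically by using timeouts instead of DISPATCH_TIME_FOREVER. In this sketch (values illustrative), a semaphore created with a value of 2 admits two waits immediately; a third wait fails until a signal restores the count:

```swift
import Foundation

let semaphore = dispatch_semaphore_create(2)

// the first two waits succeed immediately: 2 -> 1 -> 0
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)

// a third wait with an immediate timeout returns non-zero
// because the count is exhausted
let third = dispatch_semaphore_wait(semaphore, DISPATCH_TIME_NOW)

// signal increments the count back to 1, so the next wait
// succeeds and returns 0
dispatch_semaphore_signal(semaphore)
let fourth = dispatch_semaphore_wait(semaphore, DISPATCH_TIME_NOW)
```

A zero return from dispatch_semaphore_wait means the wait succeeded; non-zero means it timed out.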

Recipe #3

Our final recipe handles the situation where you only want a single instance of a code block running at once. You could use dispatch_semaphores, which we just saw. However, a more elegant solution is to use a serial queue to dispatch to. The serial queue ensures that only a single instance of a submitted closure is running at any given time.

Whether the code block is run synchronously from the current thread or asynchronously is orthogonal. Because the queue is serial, only a single block can run at once.

// for the queue type, always specify DISPATCH_QUEUE_SERIAL;
// don't use 0, which is old and busted and very non-descriptive
let tokenQueue = dispatch_queue_create("com.rocketinsights.token", DISPATCH_QUEUE_SERIAL)

var token: String {
    set {
        dispatch_sync(tokenQueue, {
            cachedToken = newValue
        })
    }
    get {
        // dispatch_sync's closure returns Void, so read into a local
        // variable and return it after the block has run
        var value: String!
        dispatch_sync(tokenQueue, {
            value = cachedToken
        })
        return value
    }
}
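To exercise the pattern, here is a sketch that wraps the property in a small class (TokenStore is a hypothetical name) and hammers it from many threads at once with dispatch_apply; the serial queue serializes every read and write:

```swift
import Foundation

class TokenStore {
    private var cachedToken = ""
    private let tokenQueue = dispatch_queue_create("com.rocketinsights.token", DISPATCH_QUEUE_SERIAL)

    var token: String {
        set {
            dispatch_sync(tokenQueue, { self.cachedToken = newValue })
        }
        get {
            var value = ""
            dispatch_sync(tokenQueue, { value = self.cachedToken })
            return value
        }
    }
}

let store = TokenStore()

// 100 concurrent writes and reads; each access runs alone on tokenQueue
dispatch_apply(100, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) { i in
    store.token = "token-\(i)"
    _ = store.token
}
```

Note that dispatch_sync here targets tokenQueue, a different queue from the one running the concurrent blocks, so there is no risk of deadlock.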

Hopefully these recipes will keep you cooking and avoid a pie to the face!