I have always been fascinated by investigating why technologies are born and what improvements emerge through their evolution. Understanding the historical context and the problems each generation of technology sought to solve provides invaluable insight into the design decisions we encounter today. By tracing back from Swift Concurrency to NSThread, we can appreciate not only the elegance of modern solutions but also the challenges that shaped them.

Since we are tracing back, let us begin with Swift Concurrency.

Swift Concurrency

Swift Concurrency represents a modern approach to concurrent programming, built directly into the Swift language.

These goals are set out in the approachable concurrency vision document:

Swift’s built-in support for concurrency has three goals:

  • Extend memory safety guarantees to low-level data races.
  • Maintain progressive disclosure for non-concurrent code, and make basic use of concurrency simple and easy.
  • Make advanced uses of concurrency to improve performance natural to accomplish and reason about. ― approachable-concurrency.md ―

The fundamental principle is that execution remains sequential unless concurrency is explicitly introduced, for instance with Task. Rather than assuming "this might be concurrent" at every turn, Swift Concurrency aims for a language design that can prove the absence of data races precisely because "this is sequential execution" is declared explicitly.

This approach rests on several key features: isolation through actors, safe value passing across isolation boundaries through Sendable, and asynchronous code written with async/await and structured concurrency.
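As a brief illustration of these features, here is a minimal, self-contained sketch. Counter and Post are illustrative names of my own, not part of the scenario discussed later: the actor serialises access to its state without any explicit lock, and marking Post as Sendable lets the compiler check that values crossing isolation boundaries are safe.

```swift
import Foundation

// An actor serialises access to its mutable state:
// callers must await, and the increments can never race.
actor Counter {
    private var value = 0
    func increment() -> Int {
        value += 1
        return value
    }
}

// A Sendable value type may safely cross isolation boundaries.
struct Post: Sendable {
    let id: Int
}

let counter = Counter()
let posts = [Post(id: 1), Post(id: 2)]
let done = DispatchSemaphore(value: 0)

Task {
    // `posts` can be captured here because Post is Sendable.
    var last = 0
    for _ in posts.indices { last = await counter.increment() }
    print(last) // 2
    done.signal()
}
done.wait() // keep the script alive until the task finishes
```

Were value a plain var on a class touched from several tasks, the compiler would reject the code; the actor is what makes the sequential region explicit.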

Task Concurrency Manifesto

When discussing Swift Concurrency, we must also touch upon the Task Concurrency Manifesto.

This gist, now titled "Swift Concurrency Manifesto", was written in 2017 by Chris Lattner, the creator of the Swift language. Whilst assuming coexistence with GCD, it explored abstracting the challenges of asynchronous programming and building support for them directly into the language.

Even at this early stage, he was already discussing actors, aiming for a design that avoids shared mutable state altogether rather than merely improving locks or atomic operations.

Three years later, heated discussions about await took place in the Swift Concurrency Roadmap thread posted on the Swift Forums. From that topic, we can glimpse Lattner's language design philosophy.

Multiple I/O Processing with Swift Concurrency

To better appreciate the merits of Swift Concurrency, I have prepared a simple sample code.

Let us consider a scenario where we fetch three feeds from a social networking service over the network: posts from one's own account, favourited posts, and reposted posts. We then collect the results into a single array.

try await withThrowingTaskGroup(of: [Post].self) { group in
  group.addTask { try await fetch(myURL) }
  group.addTask { try await fetch(favoriteURL) }
  group.addTask { try await fetch(repostURL) }
  for try await posts in group { collect(posts) }
}

With Swift Concurrency, we can express multiple independent I/O operations using TaskGroup. If an error occurs along the way, it is propagated via throw. There is no need for explicit locking to handle completion notifications for the three I/O operations.

The code is remarkably straightforward.

Now, how would this scenario be expressed using technologies that existed before Swift Concurrency? Let us compare them.

GCD: Grand Central Dispatch

Let us trace back to GCD, Grand Central Dispatch.

GCD was first introduced in Mac OS X 10.6 Snow Leopard (2009) and iOS 4 (2010).

From 2005 onwards, multi-core CPUs such as the Intel Core Duo and dual-core PowerPC G5 began to proliferate. Performance hit a ceiling with "one thread per application", and software itself had to take on the work of using CPU resources efficiently:

  • Calculating the optimal number of threads
  • Eliminating core usage imbalances

Thus, GCD was born as a software technology where developers simply “throw tasks into a queue”, and the OS optimises thread and CPU core allocation.

For details, please refer to WWDC 2009: Programming with Blocks and Grand Central Dispatch, and the open-source libdispatch.

Let us now rewrite the previous multiple I/O processing scenario using GCD.

let queue = DispatchQueue(label: "feed.gcd", attributes: .concurrent)
let group = DispatchGroup()
let lock  = NSLock()
var firstError: Error?

// GCD's async closures cannot propagate throw, so errors must be stored in a variable.
func fetch(_ url: URL) {
  group.enter()
  queue.async {
    defer { group.leave() }
    do {
      let posts = try requestSync(url)
      // GCD requires mutual exclusion for concurrent access → guard with lock
      lock.lock(); collect(posts); lock.unlock()
    } catch {
      // With GCD, one must hold errors oneself and check them later
      lock.lock(); if firstError == nil { firstError = error }; lock.unlock()
    }
  }
}

fetch(myURL)
fetch(favoriteURL)
fetch(repostURL)

// GCD waits for completion,
// then checks results and errors collectively.
// Check firstError and manually throw.
group.wait()

if let error = firstError {
  throw error
}

With GCD, we create a queue using DispatchQueue and prepare a DispatchGroup as a task group to monitor multiple I/O processing tasks. Unlike Swift Concurrency, throw cannot be propagated, so a variable to store errors is required. Additionally, mutual exclusion for concurrent access is necessary, so explicit locking is also required.

Completion notifications after calling the three tasks are received via DispatchGroup. Afterwards, errors are handled as necessary.

Whilst it is said that one simply throws tasks into a queue, when expressed in code, one must be mindful of locking and error handling.
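As a side note, DispatchGroup can also deliver its completion notification asynchronously via notify, instead of blocking with wait(). A minimal, self-contained sketch; the queue label and values here are illustrative:

```swift
import Foundation

// notify(queue:) runs the closure once every enter()
// has been balanced by a leave(), without blocking.
let queue = DispatchQueue(label: "feed.notify", attributes: .concurrent)
let group = DispatchGroup()
let lock = NSLock()
var total = 0

for value in [1, 2, 3] {
    group.enter()
    queue.async {
        defer { group.leave() }
        lock.lock(); total += value; lock.unlock()
    }
}

let done = DispatchSemaphore(value: 0)
group.notify(queue: queue) {
    // Runs only after all three work items have left the group.
    lock.lock(); let result = total; lock.unlock()
    print(result) // 6
    done.signal()
}
done.wait() // keep the script alive until the notification fires
```

This avoids parking a thread in group.wait(), though the error checking still has to be done by hand inside the closure.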

Operation (NSOperation)

Next, let us compare with Operation.

It first appeared in Mac OS X 10.5 Leopard (2007) and iOS 2.0 (2008). It encapsulated tasks as Operation objects and provided higher-level concurrent processing APIs compared to Thread, which we shall discuss later.

When expressed in code, it looks like this:

let queue = OperationQueue()
// Set concurrency level
queue.maxConcurrentOperationCount = 3
let lock = NSLock()
var firstError: Error?

func fetch(_ url: URL) -> BlockOperation {
  BlockOperation {
    do {
      let posts = try requestSync(url)
      // Like GCD, explicit mutual exclusion locking is required.
      lock.lock(); collect(posts); lock.unlock()
    } catch {
      lock.lock(); if firstError == nil { firstError = error }; lock.unlock()
    }
  }
}

let operations = [fetch(myURL), fetch(favoriteURL), fetch(repostURL)]

// waitUntilFinished: true blocks here until all operations complete;
// pass false to return immediately instead.
queue.addOperations(operations, waitUntilFinished: true)

// Like GCD, manually check and handle errors after waiting.
if let error = firstError {
  throw error
}

Similar to GCD, we create a queue and set the concurrency level, with the lock and error variable as we saw earlier.

We use BlockOperation, which stores block objects (the Objective-C counterpart of Swift closures) as tasks. Once the operations are created, we simply add them to the queue and handle any errors afterwards.

Thread (NSThread)

Let us trace back further.

Before GCD and Operation became widespread, Thread was the means for applications to create threads directly. As the Foundation framework documentation states, if you want to run an Objective-C method in its own thread of execution, use Thread.

Use this class when you want to have an Objective-C method run in its own thread of execution. ― https://developer.apple.com/documentation/foundation/thread ―

It is a Cocoa interface to POSIX threads, commonly known as Pthreads, and is designed with high affinity for Cocoa application conventions such as Run Loop and autoreleasepool.
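Before rewriting the scenario, here is a minimal sketch of direct Thread usage. The thread name and stack size are illustrative values of my own; the point is the low-level knobs Thread exposes and its autoreleasepool convention:

```swift
import Foundation

let done = DispatchSemaphore(value: 0)

// Each Thread wraps one system thread.
let worker = Thread {
    autoreleasepool {
        // Work that may create autoreleased Objective-C objects;
        // the pool is drained when this block exits.
        print("worker finished")
    }
    done.signal()
}
worker.name = "worker"     // visible in the debugger
worker.stackSize = 1 << 20 // low-level knob: a 1 MiB stack
worker.start()
done.wait()
```

Nothing joins the thread for us, which is exactly why the scenario below needs a hand-rolled waiting mechanism.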

When expressed in code, it looks like this:

// A class equivalent to Swift Concurrency's withThrowingTaskGroup or
// GCD's DispatchGroup.
// Responsible for launching multiple threads and
// waiting until all processing is complete.
final class ThreadJoiner {
  // Condition variable used for inter-thread synchronisation
  private let cond = NSCondition()
  // Remaining thread count counter
  private var remaining: Int

  // Initialise with the number of threads to launch
  init(count: Int) { self.remaining = count }

  // Notify completion from the thread side
  func done() {
    cond.lock()
    remaining -= 1
    cond.signal() // Wake up one waiting thread
    cond.unlock()
  }

  // Block and wait until all threads finish
  func waitAll() {
    cond.lock()
    // Sleep until another thread signals
    while remaining > 0 { cond.wait() }
    cond.unlock()
  }
}

// Number of threads to wait for
let joiner = ThreadJoiner(count: 3)
let lock = NSLock()
var firstError: Error?

func fetch(_ url: URL) {
  let t = Thread {
    defer { joiner.done() }
    do {
      let posts = try requestSync(url)
      // Explicit locking
      lock.lock(); collect(posts); lock.unlock()
    } catch {
      lock.lock(); if firstError == nil { firstError = error }; lock.unlock()
    }
  }
  t.start() // Launch thread
}

fetch(myURL)
fetch(favoriteURL)
fetch(repostURL)

// Synchronous blocking
joiner.waitAll()

if let error = firstError {
  throw error
}

What changes significantly is that we must now manage the responsibilities held by TaskGroup ourselves. On top of setting the number of concurrent tasks, as we did with Operation, we must also build the waiting mechanism that coordinates the threads executing those tasks. The rest is similar to GCD and Operation: we add locking and error handling.

One can feel how data race management is becoming increasingly complex.

Bonus: Pthreads

Having come this far, let us trace back to Pthreads as well.

Pthreads refers to the POSIX threads API standardised by IEEE in 1995 (POSIX 1003.1c), defined as a suite of C language APIs. It is published as libpthread on Apple's open-source page.

When expressed in code, it looks like this:

import Darwin

// Required to safely pass multiple values
// from Swift via C pointers.
final class Box {
  let url: URL
  let lock: NSLock
  init(url: URL, lock: NSLock) {
    self.url = url; self.lock = lock
  }
}

private var firstError: Error?

// Add @_cdecl to make the Swift function
// directly callable from pthread_create
@_cdecl("pthread_worker_entry")
func pthread_worker_entry(
  _ arg: UnsafeMutableRawPointer?
) -> UnsafeMutableRawPointer? {
  // Decrement the reference counter (release) of the Box passed with passRetained
  // using takeRetainedValue
  let box = Unmanaged<Box>.fromOpaque(arg!).takeRetainedValue()
  do {
    let posts = try requestSync(box.url)
    // Explicit locking
    box.lock.lock(); collect(posts); box.lock.unlock()
  } catch {
    box.lock.lock(); if firstError == nil { firstError = error }; box.lock.unlock()
  }
  return nil
}

var threads = [pthread_t?](repeating: nil, count: 3)
// pthread_attr_t: Specify stack size, scheduling policy,
// detach state, etc.
var attr = pthread_attr_t()
let lock = NSLock()
let urls = [myURL, favoriteURL, repostURL]

// Initialise thread attributes
pthread_attr_init(&attr)

for (i, url) in urls.enumerated() {
  // Retain Box and pass ownership to the thread
  // Release with takeRetainedValue on the thread side
  let box = Box(url: url, lock: lock)
  let unmanaged = Unmanaged.passRetained(box)
  let result = pthread_create(
    &threads[i],          // Store the created thread ID
    &attr,                // Thread attributes
    pthread_worker_entry, // C-compatible function pointer
    unmanaged.toOpaque()  // Arguments to pass to the thread: convert Box to pointer via Unmanaged
  )

  if result != 0 {
    // If thread creation fails, reclaim ownership and release
    unmanaged.release()
    lock.lock()
    if firstError == nil {
      // Need to create an error
      firstError = NSError(
        domain: "PThreadError",
        code: Int(result),
        userInfo: [
          NSLocalizedDescriptionKey: "pthread_create failed for \(url) (\(result))"
        ]
      )
    }
    lock.unlock()
  }
}

// Wait for all threads to complete
for t in threads {
  if let th = t { pthread_join(th, nil) }
}

// Destroy thread attributes
pthread_attr_destroy(&attr)

if let error = firstError {
  throw error
}

As I mentioned earlier, since Pthreads is a C language API, we need to pass values between C and Swift. We must use the Unmanaged structure and manage the reference count's retains and releases ourselves, as in the era of manual reference counting (MRC) rather than ARC.

One false step could lead to memory leaks or crashes.

And we also need to set thread attributes. In this case, we use default values, but one can specify stack size, scheduling policy, thread detach state, and so forth.

We create a thread with pthread_create for each feed URL. If pthread_create fails, we must construct and record the error ourselves, again under the lock.

Once all threads have finished executing, we must destroy the thread attributes in addition to error handling.

It is complex, is it not? Indeed.

Let us return once more to the Swift Concurrency code.

try await withThrowingTaskGroup(of: [Post].self) { group in
  group.addTask { try await fetch(myURL) }
  group.addTask { try await fetch(favoriteURL) }
  group.addTask { try await fetch(repostURL) }
  for try await posts in group { collect(posts) }
}

How does it appear now? It looks simple, does it not?

Conclusion

Swift Concurrency still sees many updates, and keeping up is challenging. However, I would be pleased if, through this post, understanding its history has helped you appreciate the necessity of Swift Concurrency today.

Let us continue to share our knowledge and work with Swift Concurrency together.

I have written other examples, from a simple counter to a file downloader, in the repository here. I hope you enjoy them :)