Swift TaskGroup and AsyncStream: The Complete Guide to Structured Concurrency Patterns

Master Swift TaskGroup for parallel execution and AsyncStream for asynchronous data flows. Covers every task group variant, the sliding window throttle pattern, delegate bridging with streams, and real-world SwiftUI integration — with working code for Swift 6.2.

Why Structured Concurrency Patterns Matter

If you've already adopted async/await and actors in your Swift code, you're off to a solid start with concurrency. But there are two powerful primitives that still trip up a lot of developers: TaskGroup for running a dynamic number of parallel tasks, and AsyncStream for turning callback-based or delegate-driven APIs into clean asynchronous sequences. Together, they unlock patterns that async let simply can't handle.

I've seen teams adopt Swift concurrency only to hit a wall the moment they need to download 200 images in parallel or bridge a stubborn delegate API. This guide walks through every variant of task groups and async streams, covers real-world SwiftUI integration, and shows you how to throttle concurrency, handle errors, and dodge the most common pitfalls — all updated for Swift 6.2 and iOS 26.

Understanding Structured Concurrency

Swift's structured concurrency model ties the lifetime of every child task to its parent scope. When that parent scope exits, every child task is automatically cancelled and awaited. This guarantee prevents resource leaks and makes concurrent code dramatically easier to reason about compared to unstructured alternatives like Task.detached.

Here are the key tools in the structured concurrency toolbox:

  • async let — run a fixed, known number of tasks in parallel.
  • TaskGroup — run a dynamic number of tasks in parallel and collect their results.
  • DiscardingTaskGroup — run long-lived fire-and-forget tasks with automatic memory cleanup.
  • AsyncStream — bridge callback or delegate APIs into an AsyncSequence you can for await over.

They all share the same core principle: child work cannot outlive its parent, so cleanup is always automatic. Honestly, once you internalize this rule, everything else clicks into place.

TaskGroup Fundamentals

Use withTaskGroup when you need to spawn a variable number of concurrent tasks and collect their results. Unlike async let, where the number of parallel operations is fixed at compile time, a task group lets you add tasks inside loops, conditionals, or pretty much any runtime logic you can think of.

Basic Usage

func fetchAllUsers(ids: [Int]) async -> [User] {
    await withTaskGroup(of: User?.self) { group in
        for id in ids {
            group.addTask {
                try? await APIClient.fetchUser(id: id)
            }
        }

        var users: [User] = []
        for await user in group {
            if let user {
                users.append(user)
            }
        }
        return users
    }
}

A few things worth noting here:

  • The of: parameter tells the compiler what each child task returns. In Swift 6.1 and later, the compiler can often infer this, so you can omit it.
  • TaskGroup conforms to AsyncSequence, meaning you iterate over results with for await. This iteration happens sequentially, so it's safe to mutate local state like the users array.
  • The group only returns once every child task has completed. No child can escape the closure.

Preserving Order

Task group results arrive in completion order, not submission order. If you need results in the original order, pair each task with an index:

func fetchImagesInOrder(urls: [URL]) async -> [UIImage?] {
    await withTaskGroup(of: (Int, UIImage?).self) { group in
        for (index, url) in urls.enumerated() {
            group.addTask {
                let image = try? await ImageLoader.load(from: url)
                return (index, image)
            }
        }

        var results = [UIImage?](repeating: nil, count: urls.count)
        for await (index, image) in group {
            results[index] = image
        }
        return results
    }
}

This pattern costs almost nothing and prevents those subtle bugs that creep in when display order matters.

ThrowingTaskGroup: Handling Errors in Parallel Tasks

When your tasks can fail, switch to withThrowingTaskGroup. The API is nearly identical, but both individual tasks and the group itself can throw.

func fetchAllProducts(ids: [String]) async throws -> [Product] {
    try await withThrowingTaskGroup(of: Product.self) { group in
        for id in ids {
            group.addTask {
                try await APIClient.fetchProduct(id: id)
            }
        }

        var products: [Product] = []
        for try await product in group {
            products.append(product)
        }
        return products
    }
}

Error Propagation Rules

Understanding how errors propagate in a throwing task group saves you from some really confusing bugs. Here's how it works:

  1. If a child task throws, the error is stored until you call next() or iterate with for try await.
  2. When the error surfaces, the group automatically cancels all remaining child tasks.
  3. The group still awaits every child to finish before the error is re-thrown from withThrowingTaskGroup.

That third point catches people off guard — cancellation doesn't mean instant teardown.
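The three rules can be observed in a tiny, self-contained sketch. The Boom error and the sleep duration here are made up for illustration:

```swift
struct Boom: Error {}

// Child 1 throws immediately; child 2 sleeps. When the error surfaces in
// `for try await`, the group cancels child 2, its sleep throws
// CancellationError early, and only then does the group rethrow Boom.
func demo() async -> String {
    do {
        try await withThrowingTaskGroup(of: Void.self) { group in
            group.addTask { throw Boom() }
            group.addTask {
                _ = try? await Task.sleep(nanoseconds: 10_000_000_000) // cancelled early
            }
            for try await _ in group {}
        }
        return "no error"
    } catch {
        return "caught \(type(of: error))"
    }
}
```

Calling await demo() returns "caught Boom" almost immediately rather than after ten seconds — the sleeping sibling was cancelled and awaited before the error was re-thrown.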

If you want to tolerate individual failures instead of aborting everything, catch errors inside each child task:

try await withThrowingTaskGroup(of: Product?.self) { group in
    for id in ids {
        group.addTask {
            try? await APIClient.fetchProduct(id: id)
        }
    }

    var products: [Product] = []
    for try await product in group {
        if let product { products.append(product) }
    }
    return products
}

By using try? inside addTask, failures return nil instead of killing the entire group. Simple, but effective.

DiscardingTaskGroup: Fire-and-Forget Concurrency

Introduced in Swift 5.9, withDiscardingTaskGroup and withThrowingDiscardingTaskGroup solve a specific (and sneaky) memory problem with standard task groups. In a normal group, completed task results accumulate in memory until you consume them with next(). For long-running processes like servers or persistent listeners, this means unbounded memory growth.

Discarding task groups automatically destroy each child task as soon as it completes. They don't conform to AsyncSequence and have no next() method — you simply can't read results back.

func startServer() async throws {
    let listener = try await ServerListener(port: 8080)

    try await withThrowingDiscardingTaskGroup { group in
        for await connection in listener.connections {
            group.addTask {
                try await self.handleConnection(connection)
            }
        }
    }
}

Use discarding groups whenever your child tasks produce side effects (logging, writing to a database, sending a response) rather than return values. It's one of those APIs that you don't think about until your server's memory graph starts climbing in Instruments.

async let vs TaskGroup: When to Use Which

This might be the most frequently asked question about Swift concurrency, and honestly, the answer is pretty straightforward once you see it laid out:

| Criteria | async let | TaskGroup |
| --- | --- | --- |
| Number of tasks known at compile time | Yes | No — dynamic |
| Tasks can return different types | Yes | No — all same type |
| Need to loop over tasks | No | Yes |
| Control over concurrency level | No | Yes (with throttle) |
| Syntax complexity | Minimal | Moderate |

Rule of thumb: if you can name every parallel task at the call site (like fetching a user profile and their avatar at the same time), use async let. If tasks come from an array, collection, or runtime condition, reach for TaskGroup.

// async let — fixed tasks, different return types
async let profile = fetchProfile(userId: id)
async let avatar = fetchAvatar(userId: id)
let (userProfile, userAvatar) = await (profile, avatar)

// TaskGroup — dynamic tasks, same return type
let thumbnails = await withTaskGroup(of: UIImage?.self) { group in
    for url in imageURLs {
        group.addTask { try? await loadThumbnail(from: url) }
    }
    return await group.reduce(into: []) { result, image in
        if let image { result.append(image) }
    }
}

Limiting Concurrency with the Sliding Window Pattern

Here's something that surprises a lot of developers: Swift's task group does not natively limit how many child tasks run at once. The cooperative thread pool keeps actual thread counts low, but each task still consumes memory for its stack and captured state. When you've got thousands of tasks (downloading thousands of images, for instance), you really should throttle.

The idiomatic approach is the sliding window: seed the group with N tasks, then add one new task each time an existing task completes.

func downloadImages(urls: [URL], maxConcurrent: Int = 6) async -> [UIImage] {
    await withTaskGroup(of: (Int, UIImage?).self) { group in
        var nextIndex = 0
        var results = [UIImage?](repeating: nil, count: urls.count)

        // Seed the initial batch
        for _ in 0..<min(maxConcurrent, urls.count) {
            let index = nextIndex
            group.addTask {
                let image = try? await ImageLoader.download(urls[index])
                return (index, image)
            }
            nextIndex += 1
        }

        // As each finishes, start the next
        for await (index, image) in group {
            results[index] = image

            if nextIndex < urls.count {
                let index = nextIndex
                group.addTask {
                    let image = try? await ImageLoader.download(urls[index])
                    return (index, image)
                }
                nextIndex += 1
            }
        }

        return results.compactMap { $0 }
    }
}

This keeps memory usage stable regardless of how large the input array is, while still maximizing throughput up to your concurrency cap. I use this pattern constantly in production apps.

AsyncStream Fundamentals

While TaskGroup handles parallel work that starts and finishes, AsyncStream handles values that arrive over time from an external source. It bridges callback-based APIs, delegates, timers, and notification observers into the async/await world.

If you've ever wished you could just for await over a delegate, this is how you do it.

Creating an AsyncStream

The core mechanism is a continuation. You yield values into the stream, and consumers read them with for await:

func countdownStream(from start: Int) -> AsyncStream<Int> {
    AsyncStream { continuation in
        for i in stride(from: start, through: 0, by: -1) {
            continuation.yield(i)
        }
        continuation.finish()
    }
}

// Usage
for await count in countdownStream(from: 5) {
    print(count) // 5, 4, 3, 2, 1, 0
}

Key rules to remember:

  • Call yield(_:) to emit a value.
  • Call finish() when the stream is done. Without this, for await loops will wait forever.
  • Use onTermination on the continuation to clean up resources when the consumer cancels.

Wrapping Delegates with AsyncStream

The most common real-world use of AsyncStream is converting delegate callbacks. Here's how to stream location updates from CLLocationManager:

class LocationStreamer: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private var continuation: AsyncStream<CLLocation>.Continuation?

    var locations: AsyncStream<CLLocation> {
        AsyncStream { [weak self] continuation in
            self?.continuation = continuation
            self?.manager.delegate = self
            self?.manager.startUpdatingLocation()

            continuation.onTermination = { _ in
                self?.manager.stopUpdatingLocation()
            }
        }
    }

    func locationManager(
        _ manager: CLLocationManager,
        didUpdateLocations locations: [CLLocation]
    ) {
        for location in locations {
            continuation?.yield(location)
        }
    }
}

Consumers simply write:

let streamer = LocationStreamer()
for await location in streamer.locations {
    print("Lat: \(location.coordinate.latitude)")
}

The onTermination handler ensures stopUpdatingLocation() is called when the consumer cancels or the view disappears — no manual cleanup required. That alone makes it worth the migration from raw delegates.

AsyncThrowingStream: Streams That Can Fail

When the data source can produce errors, use AsyncThrowingStream. The API is identical except the consumer must use for try await:

func downloadProgress(url: URL) -> AsyncThrowingStream<Double, Error> {
    AsyncThrowingStream { continuation in
        let task = URLSession.shared.downloadTask(with: url) { tempURL, response, error in
            if let error {
                continuation.finish(throwing: error)
                return
            }
            continuation.finish()
        }

        let observation = task.progress.observe(\.fractionCompleted) { progress, _ in
            continuation.yield(progress.fractionCompleted)
        }

        continuation.onTermination = { _ in
            observation.invalidate()
            task.cancel()
        }

        task.resume()
    }
}

This pattern cleanly handles three concerns at once: emitting progress values, propagating errors, and cleaning up observation and network resources on cancellation.

The Modern API: AsyncStream.makeStream()

Starting with Swift 5.9 (SE-0388), both AsyncStream and AsyncThrowingStream offer a makeStream() factory method that returns the stream and its continuation as a tuple. This is especially useful when you need to store the continuation as a property or pass it to a different method:

@Observable
class NotificationMonitor {
    // makeStream() lets the continuation be stored as a non-optional let
    private let continuation: AsyncStream<UNNotification>.Continuation

    let notifications: AsyncStream<UNNotification>

    init() {
        let (stream, continuation) = AsyncStream<UNNotification>.makeStream()
        self.notifications = stream
        self.continuation = continuation
    }

    func received(_ notification: UNNotification) {
        continuation.yield(notification)
    }

    deinit {
        continuation.finish()
    }
}

Before makeStream(), storing the continuation required awkward workarounds like implicitly unwrapped optionals or an extra closure. So yeah, this is a welcome improvement.

Buffering Policies

When a producer yields values faster than the consumer can process them, the buffering policy determines what happens to the excess:

  • .unbounded — buffers every value (the default). Safe when the total number of values is bounded.
  • .bufferingOldest(N) — keeps the oldest N values and drops new ones if the buffer is full.
  • .bufferingNewest(N) — keeps the newest N values and drops old ones if the buffer is full.

// Keep only the 10 most recent sensor readings
let readings = AsyncStream<SensorReading>(bufferingPolicy: .bufferingNewest(10)) { continuation in
    sensorManager.onReading = { reading in
        continuation.yield(reading)
    }
}

For high-frequency data like sensor readings or frame updates, .bufferingNewest is typically the right choice — you want the latest state, not a queue of stale values building up behind the scenes.

Real-World SwiftUI Integration

Both TaskGroup and AsyncStream integrate really well with SwiftUI through the .task modifier, which automatically cancels work when the view disappears.

Parallel Image Loading with TaskGroup

struct PhotoGridView: View {
    let photoURLs: [URL]
    @State private var images: [URL: UIImage] = [:]

    var body: some View {
        ScrollView {
            LazyVGrid(columns: [GridItem(.adaptive(minimum: 100))]) {
                ForEach(photoURLs, id: \.self) { url in
                    Group {
                        if let image = images[url] {
                            Image(uiImage: image)
                                .resizable()
                                .aspectRatio(contentMode: .fill)
                        } else {
                            ProgressView()
                        }
                    }
                    .frame(width: 100, height: 100)
                    .clipped()
                }
            }
        }
        .task {
            await loadImages()
        }
    }

    private func loadImages() async {
        await withTaskGroup(of: (URL, UIImage?).self) { group in
            var nextIndex = 0
            let maxConcurrent = 4

            for _ in 0..<min(maxConcurrent, photoURLs.count) {
                let url = photoURLs[nextIndex]
                group.addTask { (url, try? await ImageLoader.load(from: url)) }
                nextIndex += 1
            }

            for await (url, image) in group {
                if let image {
                    images[url] = image
                }
                if nextIndex < photoURLs.count {
                    let url = photoURLs[nextIndex]
                    group.addTask { (url, try? await ImageLoader.load(from: url)) }
                    nextIndex += 1
                }
            }
        }
    }
}

The sliding window keeps at most 4 downloads active. As each image arrives, SwiftUI immediately re-renders the grid to show it — no loading spinner left hanging, no state management boilerplate needed.

Live Data with AsyncStream

struct StepCounterView: View {
    @State private var steps: Int = 0
    private let pedometer = CMPedometer()

    var body: some View {
        VStack {
            Text("\(steps)")
                .font(.system(size: 72, weight: .bold, design: .rounded))
            Text("Steps Today")
                .foregroundStyle(.secondary)
        }
        .task {
            for await count in pedometerStream() {
                steps = count
            }
        }
    }

    private func pedometerStream() -> AsyncStream<Int> {
        AsyncStream { continuation in
            pedometer.startUpdates(from: Calendar.current.startOfDay(for: .now)) { data, error in
                if let steps = data?.numberOfSteps.intValue {
                    continuation.yield(steps)
                }
            }
            continuation.onTermination = { _ in
                pedometer.stopUpdates()
            }
        }
    }
}

The .task modifier cancels the stream when the view disappears, which triggers onTermination and stops pedometer updates automatically. No onDisappear cleanup code needed — it just works.

Combining TaskGroup and AsyncStream

For more complex scenarios, you can use both together. Consider a batch processing system that processes items in parallel and reports progress as a stream:

func processBatch<T: Sendable>(
    items: [T],
    maxConcurrent: Int = 4,
    process: @escaping @Sendable (T) async throws -> Void
) -> AsyncThrowingStream<Double, Error> {
    AsyncThrowingStream { continuation in
        let task = Task {
            var completed = 0
            let total = items.count

            do {
                try await withThrowingTaskGroup(of: Void.self) { group in
                    var nextIndex = 0

                    for _ in 0..<min(maxConcurrent, items.count) {
                        let item = items[nextIndex]
                        group.addTask { try await process(item) }
                        nextIndex += 1
                    }

                    for try await _ in group {
                        completed += 1
                        continuation.yield(Double(completed) / Double(total))

                        if nextIndex < items.count {
                            let item = items[nextIndex]
                            group.addTask { try await process(item) }
                            nextIndex += 1
                        }
                    }
                }
                continuation.finish()
            } catch {
                // Without this, a failing task would leave the stream suspended forever
                continuation.finish(throwing: error)
            }
        }

        continuation.onTermination = { _ in
            task.cancel()
        }
    }
}

The consumer gets a clean progress stream while the implementation handles throttled parallelism internally. This separation of concerns makes testing and reuse pretty straightforward — your UI just reads a Double from 0 to 1.

Common Pitfalls and How to Avoid Them

1. Forgetting to Call finish()

This one bites everyone at least once. If you never call continuation.finish(), the for await loop will suspend forever. Always make sure every code path — including error paths — calls finish().
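One way to make finish() unavoidable is a defer in the build closure. Here's a minimal sketch with a hypothetical line-splitting helper:

```swift
func lineStream(of text: String) -> AsyncStream<String> {
    AsyncStream { continuation in
        // defer runs on every exit path, so the stream always terminates
        defer { continuation.finish() }
        for line in text.split(separator: "\n") {
            continuation.yield(String(line))
        }
    }
}
```

A for await over lineStream(of: "a\nb") now ends after two values instead of suspending forever.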

2. Memory Leaks in ThrowingTaskGroup

If you add tasks in a loop inside withThrowingTaskGroup without ever calling next() or iterating with for try await, completed task results just pile up in memory. For long-running processes, use withThrowingDiscardingTaskGroup instead.

3. Capturing Self in AsyncStream

The AsyncStream closure can create retain cycles if it captures self strongly and the stream is stored as a property. Always use [weak self] in the build closure when the stream's lifetime is tied to an object. (This is the same retain cycle dance you're used to from closures — nothing new, but easy to forget in async contexts.)

4. Not Handling Cancellation

Task groups propagate cancellation, but child tasks need to actually check for it. Long-running tasks should periodically call try Task.checkCancellation() or check Task.isCancelled. For streams, always set onTermination to clean up underlying resources.
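As a self-contained sketch, here's a worker loop that exits promptly only because it checks for cancellation (the durations are arbitrary):

```swift
let worker = Task { () async -> String in
    do {
        while true {
            try Task.checkCancellation() // throws CancellationError once cancelled
            try await Task.sleep(nanoseconds: 10_000_000) // simulate a unit of work
        }
    } catch {
        return "stopped cleanly"
    }
}

_ = try? await Task.sleep(nanoseconds: 50_000_000)
worker.cancel()
let outcome = await worker.value // "stopped cleanly", long before any natural end
```

Without the checkCancellation() call (and a non-throwing way of sleeping), worker.cancel() would set a flag that nothing ever reads, and the loop would spin forever.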

5. Assuming Execution Order

Task group results arrive in completion order, not submission order. If your code depends on ordering, use the index-pairing pattern shown earlier. I've seen this cause real bugs in production — images showing up in the wrong grid cells, for example.

Performance Tips

  • Use the sliding window pattern when spawning more than around 20 tasks to prevent excessive memory allocation from captured state in closures.
  • Prefer withDiscardingTaskGroup for tasks that produce side effects rather than return values — it has lower memory overhead.
  • Choose the right buffering policy for streams. The default .unbounded is fine for bounded data, but use .bufferingNewest for high-frequency sensor or UI data.
  • Avoid blocking the cooperative thread pool. If a child task does synchronous I/O or CPU-heavy work, move it off the pool entirely — for example onto a DispatchQueue bridged back with withCheckedContinuation, or a custom executor. Note that Task.detached still runs on the cooperative pool, so it doesn't solve this problem.
  • Profile with Instruments. The Swift Concurrency instrument in Xcode shows task lifetimes, thread hops, and continuation suspensions — invaluable for tracking down bottlenecks in parallel code.
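For the thread-pool tip above, one common bridge sends the blocking work to a DispatchQueue and resumes a continuation with the result. The heavyHash function here is a made-up stand-in for any synchronous, CPU-heavy routine:

```swift
import Dispatch

// Stand-in for any synchronous, long-running function
func heavyHash(_ input: String) -> Int {
    var hash = 0
    for byte in input.utf8 { hash = hash &* 31 &+ Int(byte) }
    return hash
}

// Runs the blocking work on a global queue so the cooperative pool stays free
func hashOffPool(_ input: String) async -> Int {
    await withCheckedContinuation { continuation in
        DispatchQueue.global(qos: .userInitiated).async {
            continuation.resume(returning: heavyHash(input))
        }
    }
}
```

Child tasks in a group can then await hashOffPool(...) without ever occupying a cooperative-pool thread for the duration of the computation.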

Frequently Asked Questions

What is the difference between async let and TaskGroup in Swift?

async let runs a fixed number of tasks in parallel where each task can return a different type. TaskGroup runs a dynamic number of tasks determined at runtime, but all tasks must return the same type. Use async let when you know exactly how many parallel operations you need at compile time, and TaskGroup when the count depends on runtime data like arrays or user input.

How do you limit the number of concurrent tasks in a TaskGroup?

Swift doesn't provide a built-in concurrency limit for task groups. The standard approach is the sliding window pattern: add N initial tasks, then inside the for await loop, add one new task each time an existing task completes. This keeps exactly N tasks in flight at all times.

When should you use AsyncStream instead of Combine?

AsyncStream is the natural choice when you're already using async/await throughout your codebase. It integrates directly with for await loops and SwiftUI's .task modifier. Combine still has its place when you need complex operator chains like debounce, combineLatest, or throttle, but for straightforward value-over-time scenarios, AsyncStream is simpler and doesn't require importing an additional framework.

How do you handle errors in AsyncStream?

Use AsyncThrowingStream instead of AsyncStream. Emit values with continuation.yield() and signal failure with continuation.finish(throwing: error). Consumers iterate with for try await and handle errors with standard do/catch blocks.

Can you use TaskGroup inside a SwiftUI view?

Yes, and it works great. Use the .task view modifier to launch a withTaskGroup call. SwiftUI will automatically cancel the task — and all its child tasks — when the view disappears. Update @State properties inside the for await loop to trigger UI re-renders as results arrive.

About the Author

Editorial Team — our team of expert writers and editors.