Microsoft Agent Framework: Workflow Builder & Execution
A Workflow ties executors and edges together into a directed graph and manages execution. It coordinates executor invocation, message routing, and event streaming. In this lesson we focus on how to build workflows with the WorkflowBuilder fluent API, how to execute them in streaming and non-streaming modes, and how the superstep execution model works.
Building Workflows
Workflows are constructed using the WorkflowBuilder class. The fluent API follows a simple pattern:
- Create a WorkflowBuilder with a start executor
- Add edges between executors with .AddEdge()
- Mark which executors produce final output with .WithOutputFrom()
- Call .Build() or .Build<T>()
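Put together, the four steps look like this. This is a minimal sketch: the executor instances (grader, reviewer) and the Microsoft.Agents.AI.Workflows namespace are assumptions for illustration.

```csharp
using Microsoft.Agents.AI.Workflows;

// Hypothetical executors standing in for real ones.
var grader = new GraderExecutor();
var reviewer = new ReviewerExecutor();

var workflow = new WorkflowBuilder(grader)   // 1. start executor
    .AddEdge(grader, reviewer)               // 2. edge: grader → reviewer
    .WithOutputFrom(reviewer)                // 3. reviewer produces final output
    .Build();                                // 4. validate and build
```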
Typed Workflows — Build<T>()
When you call .Build<T>() instead of .Build(), you get a Workflow<T> that declares the expected input message type. This provides compile-time safety: you cannot accidentally send input of the wrong type.
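A sketch of the difference, reusing hypothetical grader/reviewer executors and assuming an overload of RunAsync that accepts the typed workflow:

```csharp
// Build<string> returns Workflow<string>: the workflow only accepts string input.
Workflow<string> typed = new WorkflowBuilder(grader)
    .AddEdge(grader, reviewer)
    .WithOutputFrom(reviewer)
    .Build<string>();

await InProcessExecution.RunAsync(typed, "exam answers");  // OK: string input

// await InProcessExecution.RunAsync(typed, 42);           // compile-time error: int is not string
```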
Workflow Validation
The framework performs comprehensive validation when building workflows:
| Check | What it validates |
|---|---|
| Type Compatibility | Message types are compatible between connected executors |
| Graph Connectivity | All executors are reachable from the start executor |
| Executor Binding | All executors are properly bound and instantiated |
| Edge Validation | No duplicate edges or invalid connections |
If any validation fails, the framework throws an exception at build time — not at runtime — so you catch configuration errors early.
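Because validation happens inside .Build(), a misconfigured graph can be caught with an ordinary try/catch around the builder chain. A sketch — the exact exception type thrown by the framework is an assumption here:

```csharp
try
{
    // 'orphan' has an edge but is unreachable from the start executor,
    // so graph-connectivity validation should reject the build.
    var broken = new WorkflowBuilder(grader)
        .AddEdge(orphan, reviewer)
        .Build();
}
catch (InvalidOperationException ex)  // assumed exception type
{
    Console.WriteLine($"Rejected at build time: {ex.Message}");
}
```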
Workflow Execution
Workflows support two execution modes:
| Mode | API | Behaviour |
|---|---|---|
| Non-Streaming | InProcessExecution.RunAsync(workflow, input) | Waits for the workflow to complete, then returns a Run object with all events in run.OutgoingEvents |
| Streaming | InProcessExecution.StreamAsync(workflow, input) | Returns a StreamingRun immediately; events arrive in real-time via run.WatchStreamAsync() |
Non-Streaming Example
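A minimal non-streaming run might look like this; the WorkflowOutputEvent type and its Data property are assumptions based on the event model described above:

```csharp
// Blocks until the workflow completes, then exposes all events at once.
Run run = await InProcessExecution.RunAsync(workflow, "student submission");

foreach (WorkflowEvent evt in run.OutgoingEvents)
{
    if (evt is WorkflowOutputEvent output)
    {
        Console.WriteLine($"Final output: {output.Data}");
    }
}
```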
Streaming Example
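A streaming run observes events while executors are still running; the specific event types matched below (ExecutorCompletedEvent, WorkflowOutputEvent) are assumptions for illustration:

```csharp
// Returns a StreamingRun immediately; events arrive as execution progresses.
await using StreamingRun run = await InProcessExecution.StreamAsync(workflow, "student submission");

await foreach (WorkflowEvent evt in run.WatchStreamAsync())
{
    switch (evt)
    {
        case ExecutorCompletedEvent completed:   // assumed event type
            Console.WriteLine($"{completed.ExecutorId} finished");
            break;
        case WorkflowOutputEvent output:         // assumed event type
            Console.WriteLine($"Output: {output.Data}");
            break;
    }
}
```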
Execution Model: Supersteps
The framework uses a modified Pregel / Bulk Synchronous Parallel (BSP) execution model. Execution is organised into discrete supersteps. Each superstep:
- Collects all pending messages from the previous superstep
- Routes messages to target executors based on edge definitions
- Runs all target executors concurrently within the superstep
- Waits for all executors to complete (synchronisation barrier)
- Queues any new messages emitted by executors for the next superstep
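The loop below is a toy illustration of those five steps, not framework code; it shows where the barrier falls between supersteps:

```csharp
// Toy superstep loop: edges route messages; Task.WhenAll is the barrier.
var edges = new Dictionary<string, string[]>
{
    ["grader"] = new[] { "reviewer", "archivist" },  // fan-out
    ["reviewer"] = new[] { "certificate" },
};

var pending = new List<string> { "grader" };  // executors triggered this superstep

while (pending.Count > 0)
{
    // Steps 1-3: collect pending targets and run them concurrently.
    Task<string>[] running = pending
        .Select(async executor => { await Task.Yield(); return executor; })
        .ToArray();

    // Step 4: synchronisation barrier; nothing advances until all complete.
    string[] finished = await Task.WhenAll(running);

    // Step 5: queue work emitted for the next superstep.
    pending = finished
        .SelectMany(e => edges.GetValueOrDefault(e, Array.Empty<string>()))
        .ToList();
}
```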
Synchronisation Barrier
The most important characteristic is the synchronisation barrier between supersteps. All triggered executors within one superstep run in parallel, but the workflow does not advance to the next superstep until every executor completes. This means a fast path cannot "get ahead" of a slow path in the same superstep.
Why Supersteps?
| Benefit | Explanation |
|---|---|
| Deterministic execution | Same input always produces the same execution order |
| Reliable checkpointing | State can be saved at superstep boundaries for fault tolerance |
| Simpler reasoning | No race conditions between supersteps; each sees a consistent view of messages |
Working with the Superstep Model
Because of the synchronisation barrier, a path made of several chained executors forces every parallel path to wait at each superstep boundary. If you need parallel paths that don't hold each other up, consolidate sequential steps into a single executor: instead of chaining step1 → step2 → step3 as three executors, combine that logic into one executor so each parallel path completes within a single superstep.
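As a sketch, the chained version can collapse into one executor. The ReflectingExecutor/IMessageHandler shape follows the framework's executor pattern, and the Step*Async helpers are hypothetical:

```csharp
internal sealed class CombinedStepsExecutor() :
    ReflectingExecutor<CombinedStepsExecutor>("CombinedSteps"),
    IMessageHandler<string, string>
{
    public async ValueTask<string> HandleAsync(string message, IWorkflowContext context)
    {
        // What was step1 → step2 → step3 across three supersteps
        // now completes inside a single superstep.
        var a = await Step1Async(message);
        var b = await Step2Async(a);
        return await Step3Async(b);
    }

    private static Task<string> Step1Async(string s) => Task.FromResult(s);
    private static Task<string> Step2Async(string s) => Task.FromResult(s);
    private static Task<string> Step3Async(string s) => Task.FromResult(s);
}
```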
Demo Scenario: Student Exam Grading
Our demo simulates a student exam grading pipeline.
The demo includes three sub-demos:
- Demo 1 — Linear Workflow: Build and run a simple three-executor pipeline
- Demo 2 — Non-Streaming Events: Show all event types after workflow completes
- Demo 3 — Superstep Fan-Out: Grader fans out to Reviewer + Archivist in the same superstep, then Certificate runs in the next superstep
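The wiring for Demo 3 can be sketched as two edges out of the grader (executor names hypothetical). Both targets of the fan-out trigger in the same superstep; the certificate executor only runs in the following one:

```csharp
var workflow = new WorkflowBuilder(grader)
    .AddEdge(grader, reviewer)        // superstep N
    .AddEdge(grader, archivist)       // superstep N (same superstep as reviewer)
    .AddEdge(reviewer, certificate)   // superstep N + 1
    .WithOutputFrom(certificate)
    .Build();
```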