Streaming
Provider-level streaming
Every provider implements ILanguageModel.StreamAsync(LlmRequest) and emits LlmStreamChunk items.
Use this when you want raw provider streaming without the higher-level agent loop.
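A minimal sketch of consuming the provider stream directly. The `ILanguageModel.StreamAsync(LlmRequest)` and `LlmStreamChunk` names come from this page; how a model instance is obtained, how `LlmRequest` is constructed, and the chunk property name (`Text` here) are assumptions and may differ in the actual API:

```csharp
// Hypothetical sketch — request construction and chunk shape are assumptions.
ILanguageModel model = GetModel(); // e.g. resolved from your DI container

var request = new LlmRequest("Say hello in 10 words");

// StreamAsync emits LlmStreamChunk items as the provider produces tokens.
await foreach (LlmStreamChunk chunk in model.StreamAsync(request))
{
    Console.Write(chunk.Text); // assumed property for the chunk's text delta
}
```

Because this bypasses the agent loop, you are responsible for any tool handling or multi-turn logic yourself.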
Agent-level streaming
Set UseStreaming = true on Agent when you want streamed runtime events:
```csharp
await agent.SendAsync("Say hello in 10 words");

await foreach (var evt in agent.ReceiveAsync())
{
    if (evt is TextEvent text && text.Partial)
    {
        Console.Write(text.Content);
    }
}
```
Event types
Streaming runs through the same event model as non-streaming execution. Depending on the provider and task, you may receive:
- partial TextEvent
- ThinkingEvent
- ToolUseEvent
- ToolResultEvent
- ImageEvent
- ResultEvent
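The event types above can be dispatched with a pattern-matching `switch`. The event type names are from this page; the properties on `ToolUseEvent` and `ResultEvent` (`ToolName`, `Content`) are illustrative assumptions, not confirmed API:

```csharp
// Hypothetical sketch — member names on tool/result events are assumptions.
await foreach (var evt in agent.ReceiveAsync())
{
    switch (evt)
    {
        case TextEvent text when text.Partial:
            Console.Write(text.Content);                  // incremental model output
            break;
        case ThinkingEvent:
            // optionally surface reasoning progress in the UI
            break;
        case ToolUseEvent toolUse:
            Console.WriteLine($"[tool: {toolUse.ToolName}]"); // assumed property
            break;
        case ResultEvent result:
            Console.WriteLine();
            Console.WriteLine(result.Content);            // final, non-partial result
            break;
    }
}
```

Handling `ResultEvent` separately from partial `TextEvent`s lets a UI distinguish in-flight deltas from the finished answer.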
When to use it
Use streaming for:
- interactive chat
- long-running analysis
- UI latency reduction
- portal or SignalR-backed execution views