Microsoft Agent Framework · Microsoft.Extensions.AI · Created: 27 Feb 2026 · Updated: 27 Feb 2026

IChatClient: ChatOptions

Every call to GetResponseAsync or GetStreamingResponseAsync accepts an optional ChatOptions parameter. It lets you control model behaviour per request, or apply defaults globally using the ConfigureOptions builder step.
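The same options object works for streaming. A minimal sketch, assuming `client` is an already-constructed IChatClient (as in the examples below); the prompt and option values are illustrative:

```csharp
// Pass ChatOptions to the streaming call exactly as you would to GetResponseAsync.
// Each ChatResponseUpdate carries a fragment of the reply as it arrives.
await foreach (var update in client.GetStreamingResponseAsync(
    "Summarise the benefits of unit testing.",
    new ChatOptions { Temperature = 0.3f, MaxOutputTokens = 100 }))
{
    Console.Write(update.Text);
}
```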

Key Concepts

1. Temperature

Controls randomness. Lower values (near 0) produce focused, deterministic output. Higher values (near 2) produce more creative and varied output:

// Predictable
var response = await client.GetResponseAsync(prompt, new ChatOptions { Temperature = 0.1f });

// Creative
var response = await client.GetResponseAsync(prompt, new ChatOptions { Temperature = 1.5f });

2. MaxOutputTokens

Limits the length of the model's reply:

var response = await client.GetResponseAsync(
    "Describe the history of computing.",
    new ChatOptions { MaxOutputTokens = 15 });
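A reply cut off by the token cap stops mid-thought. The response's FinishReason property lets you detect this, assuming the provider reports it; a hedged sketch:

```csharp
var truncated = await client.GetResponseAsync(
    "Describe the history of computing.",
    new ChatOptions { MaxOutputTokens = 15 });

// ChatFinishReason.Length means the model hit the token limit
// rather than finishing naturally (ChatFinishReason.Stop).
if (truncated.FinishReason == ChatFinishReason.Length)
{
    Console.WriteLine("Reply was truncated by MaxOutputTokens.");
}
```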

3. ConfigureOptions (Global Defaults)

Use ChatClientBuilder.ConfigureOptions to set default option values for every request made through the client. The delegate only fills in values that are not already set (using ??=), so per-request options still win:

IChatClient client = new ChatClientBuilder(
        new OpenAIClient(apiKey).GetChatClient("gpt-4o-mini").AsIChatClient())
    .ConfigureOptions(o => o.Temperature ??= 0.2f)
    .Build();

// Uses Temperature = 0.2 (from default)
await client.GetResponseAsync("Name the capital of France.");

// Uses Temperature = 1.0 (per-request overrides default)
await client.GetResponseAsync("Write a poem.", new ChatOptions { Temperature = 1.0f });
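Because the delegate only fills gaps, defaults and per-request values can be mixed freely within one options object. A sketch (values and model name are illustrative):

```csharp
IChatClient mixed = new ChatClientBuilder(
        new OpenAIClient(apiKey).GetChatClient("gpt-4o-mini").AsIChatClient())
    .ConfigureOptions(o =>
    {
        o.Temperature ??= 0.2f;    // default creativity, only if unset
        o.MaxOutputTokens ??= 200; // default length cap, only if unset
    })
    .Build();

// Temperature comes from the request (0.9);
// MaxOutputTokens falls back to the default (200).
var response = await mixed.GetResponseAsync(
    "Write a tagline.",
    new ChatOptions { Temperature = 0.9f });
```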

Full Example

using Microsoft.Extensions.AI;
using OpenAI;

namespace MicrosoftAgentFrameworkLesson.ConsoleApp.ChatClient;

/// <summary>
/// Demonstrates ChatOptions: Temperature, MaxOutputTokens, and ConfigureOptions.
/// Scenario: Marketing slogan generator showing how options affect output.
/// </summary>
public static class ChatOptionsDemo
{
    public static async Task RunAsync()
    {
        var apiKey = Environment.GetEnvironmentVariable("OPEN_AI_KEY")
            ?? throw new InvalidOperationException("Set OPEN_AI_KEY environment variable.");

        IChatClient client = new OpenAIClient(apiKey)
            .GetChatClient("gpt-4o-mini")
            .AsIChatClient();

        Console.WriteLine("====== IChatClient — ChatOptions ======\n");

        const string prompt = "Write a one-sentence slogan for a fitness app.";

        // Demo 1: Low temperature — focused, predictable output
        Console.WriteLine("--- Demo 1: Temperature 0.1 (predictable) ---");
        var r1 = await client.GetResponseAsync(prompt, new ChatOptions { Temperature = 0.1f });
        Console.WriteLine(r1.Text);
        Console.WriteLine();

        // Demo 2: High temperature — creative, varied output
        Console.WriteLine("--- Demo 2: Temperature 1.5 (creative) ---");
        var r2 = await client.GetResponseAsync(prompt, new ChatOptions { Temperature = 1.5f });
        Console.WriteLine(r2.Text);
        Console.WriteLine();

        // Demo 3: MaxOutputTokens — force a very short answer
        Console.WriteLine("--- Demo 3: MaxOutputTokens = 15 (truncated) ---");
        var r3 = await client.GetResponseAsync(
            "Describe the entire history of computing.",
            new ChatOptions { MaxOutputTokens = 15 });
        Console.WriteLine(r3.Text);
        Console.WriteLine();

        // Demo 4: ConfigureOptions via builder — default Temperature applied globally
        Console.WriteLine("--- Demo 4: ConfigureOptions via Builder (default Temperature = 0.2) ---");
        IChatClient configured = new ChatClientBuilder(
                new OpenAIClient(apiKey).GetChatClient("gpt-4o-mini").AsIChatClient())
            .ConfigureOptions(o => o.Temperature ??= 0.2f)
            .Build();

        var r4 = await configured.GetResponseAsync("Name the capital of France.");
        Console.WriteLine(r4.Text);
        Console.WriteLine();
    }
}