A Deep Dive into PromptTemplateConfig
In modern AI development, hardcoding prompts directly into your application logic is a recipe for maintenance nightmares. To build scalable, professional-grade applications with Semantic Kernel, developers use the PromptTemplateConfig class.
Think of PromptTemplateConfig as a blueprint. It doesn't just hold the text of your prompt; it encapsulates the metadata, the variables, and the specific model configurations (like temperature and token limits) required to make that prompt successful across different environments.
Why Use a Template Configuration?
Using a structured configuration object offers several distinct advantages:
- Decoupling: Separate your prompt logic from your C# or Python code.
- Portability: Easily move prompts between different models (e.g., moving from OpenAI to Azure OpenAI).
- Validation: Define required variables and default values to prevent runtime errors.
- Flexibility: Assign different "Execution Settings" for different models within the same template.
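The decoupling point becomes concrete when the prompt lives outside your code entirely, for example in a YAML prompt file loaded at startup. The sketch below is illustrative: the file name `recipe.yaml` is an assumption, and loading YAML prompts requires the Microsoft.SemanticKernel.Yaml package for the `CreateFunctionFromPromptYaml` extension method.

```csharp
// recipe.yaml — the prompt as data, not code (sketch; field names follow
// the Semantic Kernel YAML prompt schema):
//
//   name: PersonalizedRecipeGenerator
//   template: |
//     System: You are a world-class chef specialized in {{ $cuisine }} cooking.
//     User: I have the following ingredients: {{ $ingredients }}.
//   template_format: semantic-kernel
//   input_variables:
//     - name: ingredients
//       is_required: true

// Loading it keeps the C# source free of prompt text:
var yamlText = File.ReadAllText("recipe.yaml");
var recipeFunction = kernel.CreateFunctionFromPromptYaml(yamlText);
```

Editing the prompt now means editing a data file, not recompiling the application.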
Practical Implementation: The Smart Recipe Assistant
Instead of a simple command, let’s look at a more complex scenario: a Smart Recipe Assistant. This assistant takes a list of ingredients and dietary restrictions to generate a personalized recipe.
Listing 4.6: Comprehensive PromptTemplateConfig Example
var recipePromptConfig = new PromptTemplateConfig
{
    Name = "PersonalizedRecipeGenerator",
    Description = "Generates a gourmet recipe based on available ingredients and dietary needs.",
    // The Template uses {{ $variable }} syntax for dynamic content
    Template = """
        System: You are a world-class chef specialized in {{ $cuisine }} cooking.
        Current Date: {{ time.now }}
        User: I have the following ingredients: {{ $ingredients }}.
        Please suggest a recipe.
        Important Constraints: {{ $restrictions }}.
        Format the response with a 'Title', 'Prep Time', and 'Instructions' list.
        """,
    TemplateFormat = "semantic-kernel",
    // Defining Input Variables with metadata for better validation
    InputVariables =
    [
        new()
        {
            Name = "ingredients",
            Description = "List of food items available in the kitchen.",
            IsRequired = true
        },
        new()
        {
            Name = "cuisine",
            Description = "The style of cooking (e.g., Italian, Thai, Mexican).",
            Default = "Mediterranean",
            IsRequired = false
        },
        new()
        {
            Name = "restrictions",
            Description = "Allergies or dietary preferences (e.g., Vegan, Gluten-free).",
            Default = "None",
            IsRequired = false
        }
    ],
    // Multi-model Execution Settings
    ExecutionSettings = new()
    {
        // Default settings if no specific model is targeted
        ["default"] = new OpenAIPromptExecutionSettings
        {
            MaxTokens = 500,
            Temperature = 0.7
        },
        // Optimized settings for high-reasoning models (GPT-4o)
        ["openai-gpt4o"] = new OpenAIPromptExecutionSettings
        {
            MaxTokens = 2000,
            Temperature = 0.5,
            PresencePenalty = 0.1,
            FrequencyPenalty = 0.1
        },
        // Specialized settings for cost-efficient Azure deployments
        ["azure-gpt-35-turbo"] = new AzureOpenAIPromptExecutionSettings
        {
            MaxTokens = 800,
            Temperature = 0.3
        }
    }
};
Breaking Down the Configuration Properties
1. Template & TemplateFormat
The Template is the heart of the config. It supports the Semantic Kernel template language, allowing you to inject variables using {{ $variableName }} and even call registered kernel functions directly (like {{ time.now }}). The TemplateFormat identifies which parser should handle this text.
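You can watch the parser work without ever calling a model by rendering the template locally. This is a minimal sketch assuming the `kernel` and `recipePromptConfig` from the listing above; `KernelPromptTemplateFactory` is the factory behind the "semantic-kernel" format.

```csharp
// Render the template to its final string: {{ $variable }} placeholders are
// substituted from the arguments (or their defaults), and kernel functions
// like {{ time.now }} are invoked during rendering.
var template = new KernelPromptTemplateFactory().Create(recipePromptConfig);

var rendered = await template.RenderAsync(kernel, new KernelArguments
{
    ["ingredients"] = "eggs, spinach, feta"
    // "cuisine" and "restrictions" fall back to their Default values
});

Console.WriteLine(rendered); // the fully expanded prompt text
```

Rendering locally like this is a handy debugging step: you see exactly what the model will receive before spending any tokens.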
2. InputVariables
This list acts as the "API Contract" for your prompt.
- Name: The identifier used in the template.
- IsRequired: If true, the kernel will throw an error if this value is missing during execution.
- Default: Provides a fallback value, making your prompt more resilient.
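The contract is enforced at invocation time. The sketch below shows the failure path when the required `ingredients` variable is omitted; the exact exception type can vary by Semantic Kernel version, so it catches the base `KernelException` as a conservative assumption.

```csharp
try
{
    // "ingredients" is marked IsRequired = true, but we don't supply it.
    // Supplying only optional variables is not enough.
    var result = await kernel.InvokeAsync(recipeFunction, new KernelArguments
    {
        ["cuisine"] = "Thai"
    });
}
catch (KernelException ex)
{
    // The kernel refuses to execute the prompt without the required input,
    // surfacing the problem at invocation rather than as a malformed prompt.
    Console.WriteLine($"Prompt rejected: {ex.Message}");
}
```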
3. ExecutionSettings
This dictionary, keyed by service or model identifier, allows for model polymorphism: you can define how the prompt should behave depending on which AI service is being called.
| Setting Key | Purpose | Typical Use Case |
| --- | --- | --- |
| "default" | Fallback configuration. | Used when a specific model key isn't found. |
| "openai-..." | Specific to OpenAI models. | High-creativity tasks (Temperature > 0.8). |
| "azure-..." | Specific to Azure OpenAI. | Enterprise tasks requiring strict token limits. |
Note: ExecutionSettings can hold different classes, such as OpenAIPromptExecutionSettings or AzureOpenAIPromptExecutionSettings, giving you access to provider-specific features like Logprobs or User IDs.
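For a key like "openai-gpt4o" to resolve, it has to match the service ID (or model ID) of a registered chat service; otherwise the kernel falls back to "default". The sketch below shows one way to line the keys up at registration time; the service IDs, `azureEndpoint`, and `azureKey` values are illustrative placeholders, not values from this article's example.

```csharp
// Register services under IDs that match the ExecutionSettings keys, so the
// kernel can pick the right settings for whichever service handles the call.
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "gpt-4o",
        apiKey: apiKey,
        serviceId: "openai-gpt4o")          // matches ["openai-gpt4o"]
    .AddAzureOpenAIChatCompletion(
        deploymentName: "gpt-35-turbo",
        endpoint: azureEndpoint,            // placeholder
        apiKey: azureKey,                   // placeholder
        serviceId: "azure-gpt-35-turbo")    // matches ["azure-gpt-35-turbo"]
    .Build();
```

If no key matches and no "default" entry exists, the prompt runs with the service's own defaults, so it is worth always defining the "default" entry.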
Conclusion
The PromptTemplateConfig transforms a simple string into a robust, manageable asset. By defining variables and execution parameters upfront, you ensure that our AI Chef operates with the exact precision and context needed for the task.
Example
// See https://aka.ms/new-console-template for more information
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;
var apiKey = Environment.GetEnvironmentVariable("OPEN_AI_KEY");
if (string.IsNullOrEmpty(apiKey))
{
    Console.WriteLine("Please set the OPEN_AI_KEY environment variable.");
    return;
}

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        "gpt-4o",
        apiKey)
    .Build();

// Add a time plugin for {{ time.now }}
kernel.ImportPluginFromFunctions("time",
[
    KernelFunctionFactory.CreateFromMethod(
        () => DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"),
        "now",
        "Gets current date and time")
]);
Console.WriteLine("=== Smart Recipe Assistant with PromptTemplateConfig ===\n");
// Define the PromptTemplateConfig
var recipePromptConfig = new PromptTemplateConfig
{
    Name = "PersonalizedRecipeGenerator",
    Description = "Generates a gourmet recipe based on available ingredients and dietary needs.",
    // The Template uses {{ $variable }} syntax for dynamic content
    Template = """
        System: You are a world-class chef specialized in {{ $cuisine }} cooking.
        Current Date: {{ time.now }}
        User: I have the following ingredients: {{ $ingredients }}.
        Please suggest a recipe.
        Important Constraints: {{ $restrictions }}.
        Format the response with a 'Title', 'Prep Time', and 'Instructions' list.
        """,
    TemplateFormat = "semantic-kernel",
    // Defining Input Variables with metadata for better validation
    InputVariables =
    [
        new()
        {
            Name = "ingredients",
            Description = "List of food items available in the kitchen.",
            IsRequired = true
        },
        new()
        {
            Name = "cuisine",
            Description = "The style of cooking (e.g., Italian, Thai, Mexican).",
            Default = "Mediterranean",
            IsRequired = false
        },
        new()
        {
            Name = "restrictions",
            Description = "Allergies or dietary preferences (e.g., Vegan, Gluten-free).",
            Default = "None",
            IsRequired = false
        }
    ],
    // Multi-model Execution Settings
    ExecutionSettings = new Dictionary<string, PromptExecutionSettings>
    {
        // Default settings if no specific model is targeted
        ["default"] = new OpenAIPromptExecutionSettings
        {
            MaxTokens = 500,
            Temperature = 0.7
        },
        // Optimized settings for high-reasoning models (GPT-4o)
        ["openai-gpt4o"] = new OpenAIPromptExecutionSettings
        {
            MaxTokens = 2000,
            Temperature = 0.5,
            PresencePenalty = 0.1,
            FrequencyPenalty = 0.1
        }
    }
};
// Create a KernelFunction from the PromptTemplateConfig
var recipeFunction = kernel.CreateFunctionFromPrompt(recipePromptConfig);
// Example 1: Using required variable only (with defaults for optional ones)
Console.WriteLine("--- Example 1: Basic Recipe (with defaults) ---\n");
var result1 = await kernel.InvokeAsync(
    recipeFunction,
    new KernelArguments
    {
        ["ingredients"] = "chicken breast, tomatoes, garlic, olive oil"
    }
);
Console.WriteLine(result1);
Console.WriteLine("\n" + new string('-', 80) + "\n");
// Example 2: Customizing cuisine and restrictions
Console.WriteLine("--- Example 2: Vegan Italian Recipe ---\n");
var result2 = await kernel.InvokeAsync(
    recipeFunction,
    new KernelArguments
    {
        ["ingredients"] = "pasta, basil, tomatoes, pine nuts, garlic",
        ["cuisine"] = "Italian",
        ["restrictions"] = "Vegan, No dairy"
    }
);
Console.WriteLine(result2);
Console.WriteLine("\n" + new string('-', 80) + "\n");
// Example 3: Using specific execution settings (high-quality mode)
Console.WriteLine("--- Example 3: Thai Recipe with High-Quality Settings ---\n");
var result3 = await kernel.InvokeAsync(
    recipeFunction,
    new KernelArguments(recipePromptConfig.ExecutionSettings["openai-gpt4o"])
    {
        ["ingredients"] = "shrimp, coconut milk, lemongrass, red curry paste, bell peppers",
        ["cuisine"] = "Thai",
        ["restrictions"] = "Gluten-free"
    }
);
Console.WriteLine(result3);
Console.WriteLine("\n" + new string('-', 80) + "\n");
// Example 4: Mexican Recipe with Specific Ingredients
Console.WriteLine("--- Example 4: Mexican Recipe ---\n");
var result4 = await kernel.InvokeAsync(
    recipeFunction,
    new KernelArguments
    {
        ["ingredients"] = "beef, onions, black beans, tortillas, avocado, lime, cilantro",
        ["cuisine"] = "Mexican",
        ["restrictions"] = "Low-carb"
    }
);
Console.WriteLine(result4);
Console.WriteLine("\n" + new string('-', 80) + "\n");
Console.WriteLine("\n=== Demo Complete ===");