# Lesson 9 – Hosted MCP Tools with Microsoft Agent Framework
This lesson shows how to connect an AI agent to a hosted MCP (Model Context Protocol) server and use the tools it exposes. Unlike local function tools that you implement yourself, hosted MCP tools are managed and executed by a remote server — you simply connect to the server's HTTP endpoint and the tools become available to your agent automatically.
Reference: MCP and Foundry Agents – Microsoft Learn
## What is the Model Context Protocol (MCP)?
MCP is an open standard that lets AI agents discover and invoke tools hosted on external servers. Instead of bundling every tool inside your application, you point the agent at a server URL and it negotiates the available tools automatically. This pattern:
- Eliminates infrastructure management — the server owner handles hosting and updates.
- Enables secure, controlled access to external resources (documentation, APIs, code repositories).
- Makes tool libraries reusable across multiple agents and applications.
## Key Types
| Type | Package | Purpose |
|---|---|---|
| `McpClient` | ModelContextProtocol | Connects to an MCP server and provides access to its tools and resources. |
| `HttpClientTransport` | ModelContextProtocol | HTTP transport layer that supports both SSE and Streamable-HTTP modes. |
| `HttpClientTransportOptions` | ModelContextProtocol | Configures the endpoint URL, authentication headers, and transport mode. |
| `McpClientTool` | ModelContextProtocol | Extends `AIFunction` — each tool from the MCP server is a ready-to-use `AITool`. |
| `AIAgent` | Microsoft.Agents.AI | The agent that receives MCP tools and invokes them when the model requests it. |
| `AgentSession` | Microsoft.Agents.AI | Preserves conversation history so multi-turn interactions build on each other. |
## How It Works
1. Create an `McpClient` by pointing `HttpClientTransport` at the hosted server's URL. For servers that require authentication, add an `Authorization` header via `HttpClientTransportOptions.AdditionalHeaders`.
2. List tools with `mcpClient.ListToolsAsync()`. Each returned `McpClientTool` already implements `AITool`, so it can be passed directly to the agent.
3. Build the agent using `AsIChatClient().AsAIAgent(tools: ...)`. The framework inserts a `FunctionInvokingChatClient` automatically, so tool calls are executed and fed back to the model without any extra code.
4. Run queries via `agent.RunAsync()`. Pass an `AgentSession` to keep the conversation history alive across multiple turns.
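The four steps above can be sketched end to end as follows. This is a minimal sketch, not a definitive implementation: the type and method names (`McpClient.CreateAsync`, `AsAIAgent`, `AgentSession`) follow this lesson's description of the preview packages and may shift between preview releases, and the model name `gpt-4o-mini` is an arbitrary example, not prescribed by the lesson.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;
using ModelContextProtocol.Client;
using OpenAI;

// 1. Point the HTTP transport at the hosted MCP server (Microsoft Learn here).
var transport = new HttpClientTransport(new HttpClientTransportOptions
{
    Endpoint = new Uri("https://learn.microsoft.com/api/mcp"),
});

// 2. Connect and discover the server's tools.
//    (Earlier SDK previews expose this as a factory method instead.)
McpClient mcpClient = await McpClient.CreateAsync(transport);
IList<McpClientTool> tools = await mcpClient.ListToolsAsync();

// 3. Build the agent. Each McpClientTool is already an AITool, so the list
//    passes straight through; the framework inserts the
//    FunctionInvokingChatClient that executes tool calls for us.
AIAgent agent = new OpenAIClient(Environment.GetEnvironmentVariable("OPEN_AI_KEY")!)
    .GetChatClient("gpt-4o-mini") // example model name
    .AsIChatClient()
    .AsAIAgent(
        instructions: "Answer questions using the available MCP tools.",
        tools: [.. tools]);

// 4. Run a query. Passing an AgentSession (see the demos below) would keep
//    history alive across turns; a single turn needs none.
var response = await agent.RunAsync("How do I publish an Azure Function?");
Console.WriteLine(response);
```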
## Demo 1 – Microsoft Learn Documentation Search
The first demo connects to the Microsoft Learn MCP server (`https://learn.microsoft.com/api/mcp`). This server is publicly accessible — no authentication is required. It exposes a `microsoft_docs_search` tool that searches the official Microsoft documentation and returns relevant articles with source URLs.
The agent is configured to use only this specific tool to keep its scope focused on documentation queries.
Key design choices:
- Filtering tools — The demo calls `ListToolsAsync()` and then filters to a single tool by name. This mirrors the `AllowedTools` concept in the Azure AI Foundry example from the Microsoft documentation, but without requiring Azure infrastructure.
- No Azure dependency — Authentication uses a plain OpenAI API key (the `OPEN_AI_KEY` environment variable). The MCP server itself is public.
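The filtering step can be sketched like this. The `mcpClient` and `chatClient` variables are presumed to have been created as in the steps under "How It Works"; only the tool name `microsoft_docs_search` comes from the server itself, and the instructions text is illustrative.

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.Agents.AI;
using ModelContextProtocol.Client;

// Discover everything the server offers, then narrow to one tool by name.
IList<McpClientTool> allTools = await mcpClient.ListToolsAsync();

// Single() throws if the server ever stops exposing the tool, which
// surfaces a configuration problem early instead of at first tool call.
McpClientTool docsSearch = allTools.Single(t => t.Name == "microsoft_docs_search");

// The agent receives only this one tool, keeping its scope to docs queries.
AIAgent agent = chatClient.AsAIAgent(
    instructions: "Answer only from Microsoft Learn documentation and cite source URLs.",
    tools: [docsSearch]);
```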
## Demo 2 – GitHub Multi-Turn Research Session
The second demo connects to the GitHub Copilot MCP server (`https://api.githubcopilot.com/mcp/`). This server requires authentication via a GitHub Personal Access Token (PAT), which is passed as an `Authorization` header.
The demo uses `AgentSession` to run three consecutive turns. Because the session keeps the full conversation history, each turn can refer to what was discovered in the previous one — no need to repeat context.
Key design choices:
- Authentication via `AdditionalHeaders` — The `HttpClientTransportOptions.AdditionalHeaders` dictionary is the clean way to attach any HTTP header (Bearer token, API key, custom header) to every request the transport makes to the MCP server.
- All tools vs. filtered tools — Unlike Demo 1, here all tools from the server are passed to the agent. This gives the model full flexibility to choose the most appropriate GitHub operation for each user query.
- Session continuity — The same `AgentSession` is passed to every `RunAsync` call. The agent sees the full conversation history, so the third turn can synthesise findings from the first two without the user repeating them.
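A sketch of Demo 2's setup, under the same caveats about preview API names — in particular, `GetNewSession()` is an *assumed* way to obtain an `AgentSession`, and the three prompts are illustrative rather than the lesson's actual queries. `chatClient` is presumed built from the OpenAI key as in Demo 1.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Agents.AI;
using ModelContextProtocol.Client;

// Attach the GitHub PAT as a Bearer token on every request the transport makes.
var transport = new HttpClientTransport(new HttpClientTransportOptions
{
    Endpoint = new Uri("https://api.githubcopilot.com/mcp/"),
    AdditionalHeaders = new Dictionary<string, string>
    {
        ["Authorization"] = $"Bearer {Environment.GetEnvironmentVariable("GITHUB_PAT")}",
    },
});
McpClient mcpClient = await McpClient.CreateAsync(transport);

// Pass every tool the server exposes — no filtering this time.
AIAgent agent = chatClient.AsAIAgent(tools: [.. await mcpClient.ListToolsAsync()]);

// One session, three turns: history accumulates, so the final turn can
// synthesise what the first two discovered. (Session creation API assumed.)
AgentSession session = agent.GetNewSession();
string[] turns =
[
    "Find popular repositories related to the Model Context Protocol.",
    "For the top result, list its open issues labelled 'bug'.",
    "Summarise what we learned across both searches.",
];
foreach (string question in turns)
{
    Console.WriteLine(await agent.RunAsync(question, session));
}
```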
## Required NuGet Packages
| Package | Version | Purpose |
|---|---|---|
| Microsoft.Agents.AI | 1.0.0-preview.* | AIAgent, AgentSession, ChatClientAgent |
| Microsoft.Extensions.AI.OpenAI | 10.x | AsIChatClient() extension for OpenAI clients |
| ModelContextProtocol | 0.9.0-preview.1 | McpClient, HttpClientTransport, McpClientTool |
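Assuming an existing .NET project, the packages can be added with the `dotnet` CLI; `--prerelease` is needed because all three ship as preview builds:

```shell
# Run from the project directory; versions resolve to the latest previews.
dotnet add package Microsoft.Agents.AI --prerelease
dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease
dotnet add package ModelContextProtocol --prerelease
```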
## Environment Variables
| Variable | Required by | How to obtain |
|---|---|---|
| `OPEN_AI_KEY` | Both demos | OpenAI platform dashboard |
| `GITHUB_PAT` | Demo 2 only | https://github.com/settings/tokens — minimum scope: `public_repo` |
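For a quick local run, both variables can be exported in the shell before launching the demos (placeholder values shown — substitute your real credentials):

```shell
# Placeholders only; real values come from the sources in the table above.
export OPEN_AI_KEY="sk-..."   # OpenAI platform dashboard
export GITHUB_PAT="ghp_..."   # PAT with at least the public_repo scope
```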
## Difference from the Azure AI Foundry Approach
The Microsoft documentation example uses `PersistentAgentsClient` and `MCPToolDefinition` from the Azure AI Foundry SDK. That approach creates the agent server-side in your Azure project and requires Azure credentials (`DefaultAzureCredential`).
This lesson achieves the same result — an agent connected to a hosted MCP server — using only:
- A standard OpenAI API key
- The open-source `ModelContextProtocol` NuGet package
- The `Microsoft.Agents.AI` framework
The trade-off is that this approach requires the MCP server to be accessible over HTTP from your machine, whereas the Azure approach can proxy private servers through the Foundry service.