What Are AI Agents? Reasoning, Tools, and Context
An AI agent is a software system that uses a large language model to accomplish an objective. Where a plain chatbot only produces text, an agent decides what to do next, calls external systems to actually do it, and remembers what has happened so far. These three abilities are what separate an agent from a simple prompt–response wrapper around a model.
This article walks through the three pillars that every agent combines — reasoning & decision-making, tool usage, and context awareness — and shows a .NET console demo that illustrates each pillar with a small, dependency-free simulation.
An Agent Is a System That Accomplishes Objectives
The simplest definition: an agent takes a goal in natural language, figures out the steps, executes those steps, and returns a result. The goal can be anything from “summarize this PDF” to “reconcile last month’s invoices.” What makes the system an agent rather than a script is that it does not follow a fixed procedure — it plans the procedure using an LLM.
Internally, an agent runs a loop: look at the current state, decide on the next action, perform it, observe the result, and decide again. The loop ends when the objective is satisfied. Microsoft Agent Framework calls this the default agent runtime execution model.
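That loop can be sketched in a few lines of plain C#. This is a minimal simulation, not framework code: `DecideNextAction`, `ExecuteTool`, and the `AgentAction` record are illustrative stand-ins for the model call and tool dispatch.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal agent loop: observe state, decide, act, observe, repeat.
var state = new List<string> { "Objective: summarize the report" };

while (true)
{
    var action = DecideNextAction(state);          // reasoning step
    if (action.IsFinal)
    {
        Console.WriteLine($"Result: {action.Payload}");
        break;                                     // objective satisfied
    }
    var observation = ExecuteTool(action.Payload); // tool step
    state.Add(observation);                        // context step
}

// Stand-in for the LLM: finish once a tool result is in the state.
AgentAction DecideNextAction(List<string> s) =>
    s.Any(m => m.StartsWith("Tool result"))
        ? new AgentAction(true, "Summary based on tool output")
        : new AgentAction(false, "fetch_report");

string ExecuteTool(string toolName) => $"Tool result from {toolName}";

record AgentAction(bool IsFinal, string Payload);
```

The loop itself is trivial; all the intelligence lives in the decide step, which the next section covers.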
Pillar 1 — Reasoning & Decision-Making
The “brain” of the agent is the LLM. Given the objective, the conversation so far, and the list of available tools, the model emits a plan: which tool to call next, with which arguments, or whether the objective is already complete. This is the step where the agent chooses what to do.
Typical inputs to reasoning:
- System instructions that define scope and style
- The user’s objective in natural language
- Tool descriptions — name, purpose, parameter schema
- Current context — past messages, retrieved knowledge, prior tool results
The model’s output is either a final answer or a structured tool call. When the reasoning produces a tool call, control moves to pillar 2.
Pillar 2 — Tool Usage
A tool is anything the agent can invoke to affect the outside world or fetch fresh information: a C# method, an HTTP API, an MCP server, a database query, a code-execution sandbox. Tools are what let the agent move beyond the model’s training data and take real action.
Common categories of tools:
- MCP servers (Model Context Protocol) — a standard way to expose tools to any agent
- Code execution — the agent runs code it wrote to answer a question
- External APIs — REST, GraphQL, or SDK calls to business systems
- Local functions — any method you register with the agent at build time
Tool descriptions live alongside the system prompt, so the model sees every available tool every turn and can pick among them. Good descriptions matter: when too many tools share similar purposes, the model picks poorly. In .NET, a tool is just a function — the framework generates its JSON schema from the method signature.
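In the dependency-free style of this article's demo, a tool can be modeled as a description plus a delegate. The description is exactly what the reasoning model reads when choosing among tools; a real framework would additionally generate and validate a parameter schema. Tool names and return values below are invented for illustration.

```csharp
using System;
using System.Collections.Generic;

// A tool = name + description + callable. The model picks by description.
var tools = new Dictionary<string, (string Description, Func<string, string> Invoke)>
{
    ["lookup_price"] = ("Returns the market price of an antique item.",
        item => item.Contains("watch") ? "£1,200" : "unknown"),
    ["estimate_restoration"] = ("Estimates hours and cost to restore an item.",
        item => "6 hours, £300"),
};

// The reasoning step selects a tool by name; here we invoke it directly.
var (desc, invoke) = tools["lookup_price"];
Console.WriteLine($"{desc} -> {invoke("Victorian pocket watch")}");
```

Note how the two descriptions are clearly distinct; if both said "handles antiques", the model would have nothing to choose on.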
Pillar 3 — Context Awareness
Reasoning plus tools would already beat a chatbot, but without memory every turn starts from zero. Context awareness is what lets an agent answer follow-up questions, reuse earlier results, and draw on knowledge that does not fit in a single prompt.
The three main sources of context:
- Chat history & threads — prior user and assistant messages, managed by the agent session
- Vector stores & enterprise data — documents retrieved by semantic similarity (the core of RAG)
- Knowledge graphs — structured relationships the agent can traverse for precise lookups
In Microsoft Agent Framework, context providers inject this information automatically before each model call, so the agent always sees the latest relevant slice. Tool results from earlier turns are also folded into the next reasoning step, which is how the agent “learns” within a session.
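A minimal sketch of that within-session memory, in the dictionary-based style the demo uses: tool results are cached by tool name and arguments, so a repeated question reuses prior work instead of re-invoking the tool. The helper name is an assumption for illustration.

```csharp
using System;
using System.Collections.Generic;

// Session memory: cache tool results keyed by (tool, arguments).
var cache = new Dictionary<string, string>();

string CallWithCache(string tool, string arg, Func<string, string> invoke)
{
    var key = $"{tool}:{arg}";
    if (cache.TryGetValue(key, out var cached))
        return $"{cached} (from cache)";   // context hit: no tool call
    var result = invoke(arg);
    cache[key] = result;                   // remember for later turns
    return result;
}

Console.WriteLine(CallWithCache("lookup_price", "pocket watch", _ => "£1,200"));
Console.WriteLine(CallWithCache("lookup_price", "pocket watch", _ => "£1,200"));
```

The second call never invokes the delegate; the result is folded back in from the cache, which is the mechanism behind the demo's third objective below.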
Why the Three Pillars Work Together
Each pillar on its own is limited. Reasoning without tools can only describe what it would do. Tools without reasoning must be wired together by hand. Context without either is just a search box. Combine all three and the agent becomes something meaningfully new: a system that operates autonomously (chooses its own steps), adaptively (reacts to new information mid-task), and intelligently (grounds decisions in prior state and retrieved knowledge).
That is also why agents introduce nondeterminism that traditional software does not have. The same objective can produce different tool sequences on different runs, which means testing, observability, and guardrails matter more, not less, than for deterministic code.
A .NET Demo of the Three Pillars
The demo below is a small antique-restoration workshop agent. It does not call a real LLM — instead, each pillar is implemented with plain C# so you can trace the behavior step by step. The domain is deliberately unfamiliar so the shape of an agent stands out more than the specifics of any one API.
The demo runs three sequential objectives:
- Appraise and restore a Victorian pocket watch
- Estimate restoration for an Art Deco jewelry box
- Revalue the pocket watch after polishing — this one hits the context cache
Watch how the third objective reuses the prior appraisal instead of calling the price tool fresh.
Full Example
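Below is a compact, dependency-free sketch of the workshop agent described above, assuming the structure outlined in this article: a simulated `Reason` method (pillar 1), plain C# tool methods (pillar 2), and a dictionary cache (pillar 3). All class, method, and value names are illustrative.

```csharp
using System;
using System.Collections.Generic;

var agent = new WorkshopAgent();
agent.Run("Appraise and restore a Victorian pocket watch");
agent.Run("Estimate restoration for an Art Deco jewelry box");
agent.Run("Revalue the pocket watch after polishing"); // context cache hit

class WorkshopAgent
{
    // Pillar 3: context - earlier tool results, keyed by tool + item.
    private readonly Dictionary<string, string> _memory = new();

    public void Run(string objective)
    {
        Console.WriteLine($"\nObjective: {objective}");
        // Pillar 1: reasoning - plan which tools the objective needs.
        foreach (var (tool, item) in Reason(objective))
        {
            var key = $"{tool}:{item}";
            if (_memory.TryGetValue(key, out var cached))
            {
                Console.WriteLine($"  [{tool}] {cached} (reused from memory)");
                continue;
            }
            // Pillar 2: tool usage - actually invoke the tool.
            var result = tool switch
            {
                "appraise" => Appraise(item),
                "restore"  => EstimateRestoration(item),
                _          => "unknown tool"
            };
            _memory[key] = result;
            Console.WriteLine($"  [{tool}] {result}");
        }
    }

    // Simulated reasoning: a real agent would ask an LLM for this plan.
    private static IEnumerable<(string Tool, string Item)> Reason(string objective)
    {
        var item = objective.Contains("watch") ? "pocket watch" : "jewelry box";
        if (objective.Contains("Appraise") || objective.Contains("Revalue"))
            yield return ("appraise", item);
        if (objective.Contains("restore") || objective.Contains("restoration"))
            yield return ("restore", item);
    }

    private static string Appraise(string item) =>
        item == "pocket watch" ? "appraised at £1,200" : "appraised at £450";

    private static string EstimateRestoration(string item) =>
        item == "pocket watch" ? "restoration: 6 hours, £300"
                               : "restoration: 4 hours, £180";
}
```

On the third run, `Reason` plans another appraisal of the pocket watch, but the memory lookup short-circuits the tool call and reports the cached value instead.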
From Simulation to Real Agent
To turn this skeleton into a Microsoft Agent Framework agent, three things change:
- The `Reason` method is replaced by a call to an LLM (for example, `client.AsAIAgent(...)` with Microsoft Foundry, Azure OpenAI, or OpenAI)
- The tool methods stay almost unchanged — you register them with the agent and the framework handles schema generation and invocation
- The dictionary-based memory is replaced by an agent session (for chat history) and optionally a context provider backed by a vector store or knowledge graph
The overall shape — plan, call tools, remember — is the same. That is the mental model the rest of this tutorial series builds on.
Reference
Microsoft Agent Framework overview — Microsoft Learn
What is Microsoft Foundry Agent Service? — Microsoft Learn
AI agent adoption — Microsoft Learn