Implementing a Basic Strands Agent with MCP Servers
Written by Om-Shree-0709.
- 1. Set up your environment
- 2. Create an MCP server
- 3. Build a Strands agent that uses the MCP tool
- 4. Run the full setup
- 5. What’s happening behind the scenes
- 6. Why this matters
In this hands-on guide, we'll walk through building a simple AI agent using the Strands Agents SDK, integrated with an MCP (Model Context Protocol) tool. This example uses a local MCP server to demonstrate how Strands seamlessly connects with external tool endpoints.
1. Set up your environment
Begin by installing the SDK and related packages:
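A typical installation (assuming the `strands-agents` and `mcp` package names on PyPI; adjust for your environment) looks like:

```bash
pip install strands-agents mcp
```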
Make sure your Python version is 3.10 or higher, as required by both the Strands SDK and the MCP Python SDK.
2. Create an MCP server
The server exposes simple tools through MCP over HTTP. Below is a minimalist sketch using FastMCP (here, the `FastMCP` class bundled with the official MCP Python SDK; the standalone `fastmcp` package offers the same interface):
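```python
# mcp_server.py (the file name is our choice; any name works)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("greeting-server")

@mcp.tool()
def get_greeting(name: str) -> str:
    """Return a friendly greeting for the given name."""
    return f"Hello, {name}! Welcome to MCP."

if __name__ == "__main__":
    # Streamable HTTP serves the MCP endpoint, by default at http://localhost:8000/mcp
    mcp.run(transport="streamable-http")
```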
Run this script (saved here as `mcp_server.py`) with:
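```bash
python mcp_server.py
```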
This registers a single tool, `get_greeting`, accessible over MCP's HTTP transport.
3. Build a Strands agent that uses the MCP tool
Create a Python script `agent_with_mcp.py`. The sketch below assumes the `strands-agents` package and a configured default model provider (for example, AWS credentials for Amazon Bedrock); check the Strands documentation for the exact import paths in your installed version:
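```python
# agent_with_mcp.py (a sketch; import paths follow the Strands quickstart)
from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient

# Wrap the MCP streamable-HTTP transport in a Strands MCP client.
# The URL matches the server started in the previous step.
mcp_client = MCPClient(lambda: streamablehttp_client("http://localhost:8000/mcp"))

with mcp_client:
    # Discover the tools the server advertises (get_greeting in our case)
    tools = mcp_client.list_tools_sync()

    # Hand the discovered tools to the agent; the model decides when to call them
    agent = Agent(tools=tools)

    # The agent routes this prompt through the LLM, which invokes the MCP tool
    agent("Please greet Alice using the greeting tool.")
```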
This agent:

- Connects to the MCP server for tool metadata,
- Sends the user prompt to the LLM,
- Lets the LLM select and invoke `get_greeting`, and
- Returns the result.
This pattern demonstrates how Strands can dynamically discover and use MCP tools for reasoning and task execution.
4. Run the full setup
First, start the MCP server in one terminal:
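```bash
python mcp_server.py
```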
Then, in a second terminal, run the agent script:
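```bash
python agent_with_mcp.py
```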
Expected output (illustrative; the exact wording varies by model):
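```text
Hello, Alice! Welcome to MCP.
```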
This confirms the agent successfully called the remote tool and integrated the result into its response.
5. What’s happening behind the scenes
When the agent runs, it follows a defined internal workflow powered by MCP and Strands:
- Tool Discovery: At startup, the Strands agent queries the MCP server for available tools, fetching their metadata: names, parameter schemas, and usage descriptions.
- User Input Parsing: The user's request is sent to the LLM, which interprets the intent and chooses the right tool (e.g., `get_greeting`) based on its metadata.
- MCP Tool Invocation: The agent uses the MCP client to send a structured `tools/call` request to the server. This is a JSON-RPC call carrying the function name and arguments (see the sketch after this list).
- Tool Execution & Response: The MCP server runs the function (here, the Python `get_greeting` tool) and returns a typed, structured result.
- Agentic Reflection: The agent injects the tool's output into the LLM's context. The model then incorporates this result into its next reasoning step and generates the final response.
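For concreteness, the `tools/call` request on the wire has roughly this JSON-RPC shape (a hand-written illustration, not captured traffic):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_greeting",
    "arguments": { "name": "Alice" }
  }
}
```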
This flow illustrates how Strands combines runtime tool discovery, standardized MCP communication, and model-first reasoning to execute user requests without hardcoded logic—making the system flexible and maintainable.
6. Why this matters
- Decoupled architecture: You can update the tool logic independently by modifying the MCP server.
- Tool discovery: Agents automatically consume tools advertised by MCP—no manual registration needed.
- Model-driven flow: The agentic loop takes care of interpreting user intent, invoking tools, and generating responses.