Implementing a Basic Strands Agent with MCP Servers


AWS
Strands Agents SDK
Agentic AI
MCP

1. Set up your environment
2. Create an MCP server
3. Build a Strands agent that uses the MCP tool
4. Run the full setup
5. What's happening behind the scenes
6. Why this matters
References

In this hands-on guide, we'll walk through building a simple AI agent using the Strands Agents SDK [1], integrated with an MCP (Model Context Protocol) tool. This example uses a local MCP server to demonstrate how Strands seamlessly connects with external tool endpoints.

                1. Set up your environment

                Begin by installing the SDK and related packages:

pip install strands-agents strands-agents-tools strands-agents-builder
pip install mcp

                Make sure your Python version is 3.9 or higher.
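You can verify the interpreter meets that requirement before installing anything. This is a minimal standalone check, not part of the SDK:

```python
import sys

# The Strands Agents SDK requires Python 3.9 or newer; fail fast otherwise.
if sys.version_info < (3, 9):
    raise RuntimeError(
        f"Python 3.9+ required, found {sys.version_info.major}.{sys.version_info.minor}"
    )
print("Python version OK:", sys.version.split()[0])
```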

                2. Create an MCP server

                The server exposes simple tools through MCP over HTTP. Below is a minimalist example using FastMCP:

# mcp_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("simple-server", stateless_http=True, host="0.0.0.0", port=8002)

@mcp.tool()
def get_greeting(name: str) -> str:
    return f"Hello, {name}!"

if __name__ == "__main__":
    mcp.run(transport="streamable-http")

                Run this script with:

                python mcp_server.py

                This registers a single tool—get_greeting—accessible via HTTP MCP interaction.
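Before wiring the tool into MCP, it can be useful to sanity-check the tool's logic as a plain function. The snippet below exercises the same get_greeting body directly, with no server or transport involved:

```python
# The same logic the MCP server exposes, invoked as an ordinary function.
def get_greeting(name: str) -> str:
    return f"Hello, {name}!"

# Direct call, no HTTP or MCP machinery needed.
assert get_greeting("Alice") == "Hello, Alice!"
print(get_greeting("Bob"))  # prints "Hello, Bob!"
```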

3. Build a Strands agent that uses the MCP tool [2]

                Create a Python script agent_with_mcp.py:

from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient

MCP_URL = "http://localhost:8002/mcp/"

# Wrap the transport factory in an MCPClient; the client manages the session.
mcp_client = MCPClient(lambda: streamablehttp_client(MCP_URL))

with mcp_client:
    # Discover the tools the server advertises, then hand them to the agent.
    # Agent uses its default Amazon Bedrock model; pass model=... to override.
    tools = mcp_client.list_tools_sync()
    agent = Agent(tools=tools)
    response = agent("Please greet Alice using the greeting tool.")
    print(response)

This agent [3]:

• connects to the MCP server for tool metadata,
• sends the user prompt to the LLM,
• lets the LLM select and invoke get_greeting, and
• returns the result.

                This pattern demonstrates how Strands can dynamically discover and use MCP tools for reasoning and task execution.

                4. Run the full setup

                1. Start the MCP server:

                  python mcp_server.py
                2. Run the agent script:

                  python agent_with_mcp.py

                Expected output:

                Hello, Alice!

                This confirms the agent successfully called the remote tool and integrated the result into its response.

                5. What’s happening behind the scenes

                When the agent runs, it follows a defined internal workflow powered by MCP and Strands:

1. Tool Discovery: At startup, the Strands agent queries the MCP server for available tools, fetching their metadata: names, parameter schemas, and usage descriptions [1].
                2. User Input Parsing: The user’s request is sent to the LLM, which interprets the intent and chooses the right tool (e.g., get_greeting) based on its metadata.
3. MCP Tool Invocation: The agent uses the MCP client to send a structured tools/call request to the server. This is a JSON-RPC call carrying the function name and arguments [2].
                4. Tool Execution & Response: The MCP server runs the function (e.g., the Python get_greeting tool), and returns a typed, structured result.
                5. Agentic Reflection: The agent injects the tool's output into the LLM's context. The model then incorporates this result into its next reasoning step and generates the final response.
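The tools/call exchange in steps 3 and 4 can be sketched as JSON-RPC 2.0 payloads. The method and params follow the MCP tools/call shape; the id and the exact result below are illustrative, not captured from a live server:

```python
import json

# Sketch of the JSON-RPC request the MCP client sends (step 3).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_greeting",
        "arguments": {"name": "Alice"},
    },
}

# Sketch of the server's structured result (step 4); MCP tool results
# are returned as a list of typed content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Hello, Alice!"}]},
}

print(json.dumps(request, indent=2))
print(response["result"]["content"][0]["text"])  # prints "Hello, Alice!"
```

The agent never sees this wire format directly; the MCP client builds the request and unpacks the content blocks before handing the text back to the LLM's context.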

                This flow illustrates how Strands combines runtime tool discovery, standardized MCP communication, and model-first reasoning to execute user requests without hardcoded logic—making the system flexible and maintainable.

                6. Why this matters

                • Decoupled architecture: You can update the tool logic independently by modifying the MCP server.
                • Tool discovery: Agents automatically consume tools advertised by MCP—no manual registration needed.
                • Model-driven flow: The agentic loop takes care of interpreting user intent, invoking tools, and generating responses.

                References

                Footnotes

1. AWS blog: "Open Protocols for Agent Interoperability Part 3: Strands Agents & MCP"

2. Strands Agents documentation – MCP examples

                3. DEV tutorial: "Agent with Local, Remote MCP Tools using AWS Strands Agents"

                Written by Om-Shree-0709 (@Om-Shree-0709)