# Comprehensive Guide to the Python MCP SDK
## Table of Contents
1. [Introduction to MCP](#introduction-to-mcp)
2. [Installation and Setup](#installation-and-setup)
3. [Server Architecture](#server-architecture)
4. [FastMCP API](#fastmcp-api)
5. [Resources](#resources)
6. [Tools](#tools)
7. [Prompts](#prompts)
8. [Context and Lifecycle](#context-and-lifecycle)
9. [Transport Protocols](#transport-protocols)
10. [Error Handling](#error-handling)
11. [Advanced Topics](#advanced-topics)
12. [Low-Level Server API](#low-level-server-api)
13. [Practical Examples](#practical-examples)
## Introduction to MCP
The Model Context Protocol (MCP) is a standardized way for applications to provide context for Large Language Models (LLMs). It separates the concerns of providing context from the actual LLM interaction, allowing developers to build specialized servers that expose data and functionality to LLM applications.
### Core Concepts
MCP is built around three key primitives:
1. **Resources**: Read-only data sources that provide context to LLMs (similar to GET endpoints)
2. **Tools**: Functions that perform actions when called by LLMs (similar to POST endpoints)
3. **Prompts**: Templates for structuring interactions with LLMs
### Protocol Structure
MCP uses a JSON-RPC based protocol (an example request is sketched after this list) that supports:
- Request/response patterns
- Notifications
- Capability negotiation
- Progress reporting
- Error handling
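For instance, a tool invocation travels as a single JSON-RPC request. A sketch of the shape (the values are illustrative; the [MCP specification](https://spec.modelcontextprotocol.io) is authoritative):
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "add",
    "arguments": { "a": 1, "b": 2 }
  }
}
```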
## Installation and Setup
### Basic Installation
```bash
# Using pip
pip install mcp
# With CLI tools
pip install "mcp[cli]"
# Using uv (recommended)
uv add "mcp[cli]"
```
### Development Environment
For development, the `cli` extra is usually the one you want; it provides the `mcp` command used for local testing. Available extras vary by release, so check the SDK's `pyproject.toml` for the current list:
```bash
# CLI tooling for local development
pip install "mcp[cli]"
```
### Dependencies
MCP has minimal core dependencies, but certain features require additional packages:
- `anyio`: Required for async I/O
- `pydantic`: Required for validation (v2.x)
- `starlette` and `uvicorn`: Required for SSE transport
- `typer`: Required for CLI tools
## Server Architecture
MCP servers in Python can be developed using two main approaches:
1. **FastMCP API**: High-level, decorator-based API (recommended)
2. **Low-Level Server API**: More control, but requires more boilerplate
### Server Types
- **Tool servers**: Provide functions that LLMs can call
- **Resource servers**: Provide data sources LLMs can read
- **Prompt servers**: Provide templates for LLM interactions
- **Hybrid servers**: Combine multiple capabilities (see the sketch below)
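A hybrid server simply registers more than one primitive on the same instance. A minimal sketch (the names are illustrative):
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("HybridServer")

@mcp.resource("info://about")
def about() -> str:
    """A resource: read-only context"""
    return "A hybrid MCP server"

@mcp.tool()
def shout(text: str) -> str:
    """A tool: an action the LLM can invoke"""
    return text.upper()

@mcp.prompt()
def summarize(text: str) -> str:
    """A prompt: a reusable interaction template"""
    return f"Please summarize the following text:\n\n{text}"

if __name__ == "__main__":
    mcp.run()
```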
## FastMCP API
The FastMCP API is the recommended way to build MCP servers. It provides a clean, decorator-based interface for defining resources, tools, and prompts.
### Basic Server
```python
from mcp.server.fastmcp import FastMCP
# Create a server with a name
mcp = FastMCP("MyServer")
# Optionally specify dependencies for deployment
mcp = FastMCP("MyServer", dependencies=["pandas", "numpy"])
# Run the server (defaults to stdio transport)
if __name__ == "__main__":
    mcp.run()
```
### Server Configuration
FastMCP supports extensive configuration through settings:
```python
from mcp.server.fastmcp import FastMCP
# The same settings can also come from FASTMCP_* environment variables
mcp = FastMCP(
    "MyServer",
    # Server settings
    debug=True,
    log_level="DEBUG",
    # HTTP settings (for SSE transport)
    host="0.0.0.0",
    port=8000,
    # Feature settings
    warn_on_duplicate_resources=True,
    warn_on_duplicate_tools=True,
    warn_on_duplicate_prompts=True,
    # Dependencies (for installation)
    dependencies=["pandas", "numpy"],
)
```
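Because settings are read with the `FASTMCP_` environment prefix, the same configuration can be supplied without touching code. A sketch (`server.py` stands in for your entry point):
```bash
# Equivalent to passing debug/log_level/port to the constructor
FASTMCP_DEBUG=true FASTMCP_LOG_LEVEL=DEBUG FASTMCP_PORT=8000 python server.py
```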
### Running Servers
```python
# Run with stdio transport (default)
mcp.run()
# Run with SSE transport
mcp.run(transport="sse")
# Or run programmatically with asyncio (pick one transport per process)
import asyncio
asyncio.run(mcp.run_stdio_async())
# asyncio.run(mcp.run_sse_async())
```
## Resources
Resources are read-only data sources that provide context to LLMs. They're identified by URIs and can be static or dynamic.
### Static Resources
```python
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("ResourceServer")
@mcp.resource("static://greeting")
def get_greeting() -> str:
"""A simple static greeting"""
return "Hello, world!"
@mcp.resource("static://data.json", mime_type="application/json")
def get_data() -> str:
"""Return JSON data"""
return '{"key": "value"}'
```
### Dynamic Resource Templates
```python
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("ResourceServer")
@mcp.resource("user://{user_id}/profile")
def get_user_profile(user_id: str) -> str:
"""Get a user profile by ID"""
# Parameters in URI template are passed as function arguments
return f"Profile for user {user_id}"
@mcp.resource("data://{type}/{id}", mime_type="application/json")
async def get_dynamic_data(type: str, id: str) -> str:
"""Get data by type and ID (async support)"""
# Resources can be async functions
data = await fetch_data(type, id)
return data
```
### Resource Types
MCP supports different resource implementations:
```python
from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.resources import TextResource, FileResource, HttpResource
mcp = FastMCP("ResourceServer")
# Direct decoration (recommended)
@mcp.resource("greeting://hello")
def get_greeting() -> str:
return "Hello, world!"
# Manual registration
text_resource = TextResource(
uri="text://message",
text="This is a text resource",
name="Message",
description="A simple text message",
mime_type="text/plain"
)
mcp.add_resource(text_resource)
# File resources
file_resource = FileResource(
uri="file:///path/to/file.txt",
path="/path/to/file.txt",
is_binary=False,
mime_type="text/plain"
)
mcp.add_resource(file_resource)
# HTTP resources
http_resource = HttpResource(
uri="http://example.com/data",
url="https://example.com/data",
mime_type="application/json"
)
mcp.add_resource(http_resource)
```
### Resource Return Types
Resources can return (a sketch follows the list):
- `str`: Text content (default)
- `bytes`: Binary content
- Python objects: Automatically converted to JSON
- `Resource` objects: For more control
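A sketch of the non-string return types (the URIs, file name, and data are illustrative, and exact conversion behavior may vary by SDK version):
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ResourceServer")

@mcp.resource("image://logo", mime_type="image/png")
def get_logo() -> bytes:
    """bytes are served as binary content"""
    with open("logo.png", "rb") as f:  # assumed local file
        return f.read()

@mcp.resource("config://settings", mime_type="application/json")
def get_settings() -> dict:
    """Plain Python objects are serialized to JSON"""
    return {"theme": "dark", "retries": 3}
```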
## Tools
Tools are functions that LLMs can call to perform actions. They can be synchronous or asynchronous.
### Basic Tools
```python
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("ToolServer")
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool(name="subtract", description="Subtract b from a")
def my_subtract(a: int, b: int) -> int:
    """This docstring is overridden by the description parameter"""
    return a - b
```
### Async Tools
```python
from mcp.server.fastmcp import FastMCP
import httpx
mcp = FastMCP("AsyncToolServer")
@mcp.tool()
async def fetch_weather(city: str) -> str:
    """Fetch weather data for a city"""
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.weather.com/{city}")
        return response.text
```
### Tool Parameters
Tools support Pydantic validation for parameters:
```python
from pydantic import BaseModel, Field
from typing import List, Optional
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("ToolServer")
class QueryParams(BaseModel):
    query: str
    limit: int = Field(default=10, gt=0, le=100)
    filters: Optional[List[str]] = None

@mcp.tool()
def search(params: QueryParams) -> str:
    """Search with complex parameters"""
    return f"Searching for '{params.query}' with limit {params.limit}"

# Or with direct parameter annotations
@mcp.tool()
def advanced_search(
    query: str,
    limit: int = Field(default=10, gt=0, le=100),
    filters: Optional[List[str]] = None,
) -> str:
    """Search with annotated parameters"""
    return f"Searching for '{query}' with limit {limit}"
```
### Tool Return Types
Tools can return:
- `str`: Text content (default)
- `bytes`: Binary content
- Python objects: Automatically converted to JSON
- `Image`: For returning image data
- Lists of the above: Multiple content items (see the second example below)
```python
from mcp.server.fastmcp import FastMCP, Image
from PIL import Image as PILImage
import io
mcp = FastMCP("ToolServer")
@mcp.tool()
def create_image() -> Image:
    """Create and return an image"""
    # Create a simple image using PIL
    img = PILImage.new("RGB", (100, 100), color="red")
    buffer = io.BytesIO()
    img.save(buffer, format="PNG")
    buffer.seek(0)
    # Return as an MCP Image
    return Image(data=buffer.getvalue(), format="png")
```
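Returning a list produces multiple content items in one result. A sketch along the same lines as the image example (the tool name and caption are illustrative):
```python
import io
from PIL import Image as PILImage
from mcp.server.fastmcp import FastMCP, Image

mcp = FastMCP("ToolServer")

@mcp.tool()
def chart_with_caption() -> list:
    """Return a text caption and an image in one result"""
    img = PILImage.new("RGB", (100, 100), color="blue")
    buffer = io.BytesIO()
    img.save(buffer, format="PNG")
    # Each list element becomes its own content item
    return [
        "Here is the generated image:",
        Image(data=buffer.getvalue(), format="png"),
    ]
```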
## Prompts
Prompts are templates for structuring interactions with LLMs. They can include static text, dynamic content, and embedded resources.
### Basic Prompts
```python
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("PromptServer")
@mcp.prompt()
def simple_prompt(topic: str) -> str:
    """A simple prompt about a topic"""
    return f"Please write about {topic} in detail."
```
### Structured Prompts
```python
from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.prompts.base import UserMessage, AssistantMessage
mcp = FastMCP("PromptServer")
@mcp.prompt()
def structured_prompt(code: str, language: str = "python") -> list:
    """A structured prompt with multiple messages"""
    return [
        UserMessage(f"Please review this {language} code:"),
        UserMessage(code),
        AssistantMessage("I'll analyze this code thoroughly."),
    ]
```
### Prompt with Embedded Resources
```python
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("PromptServer")

@mcp.prompt()
def resource_prompt(file_path: str) -> list:
    """A prompt that embeds a resource"""
    # Plain dicts matching the MCP message schema are also accepted
    return [
        {
            "role": "user",
            "content": {
                "type": "text",
                "text": f"Please analyze this file: {file_path}",
            },
        },
        {
            "role": "user",
            "content": {
                "type": "resource",
                "resource": {
                    "uri": f"file://{file_path}",
                    "text": "File content here",
                    "mimeType": "text/plain",
                },
            },
        },
    ]
```
## Context and Lifecycle
MCP servers can maintain context throughout their lifecycle and across requests.
### Context Object
The `Context` object provides access to request metadata and MCP capabilities:
```python
from mcp.server.fastmcp import FastMCP, Context
mcp = FastMCP("ContextServer")
@mcp.tool()
async def tool_with_context(query: str, ctx: Context) -> str:
    """A tool that uses the context object"""
    # Log messages to the client
    await ctx.debug("Debug message")
    await ctx.info(f"Processing query: {query}")
    await ctx.warning("This is a warning")
    await ctx.error("This is an error")
    # Report progress for long-running operations
    await ctx.report_progress(50, 100)
    # Access request information
    request_id = ctx.request_id
    client_id = ctx.client_id
    # Access the FastMCP server
    server = ctx.fastmcp
    # Read resources
    resource = await ctx.read_resource("file:///path/to/file.txt")
    return f"Processed {query}"
```
### Lifespan Management
For more advanced servers, you can use the lifespan API to manage server lifecycle:
```python
from contextlib import asynccontextmanager
from typing import AsyncIterator
from dataclasses import dataclass
from mcp.server.fastmcp import FastMCP, Context
# Define a strongly-typed context class
@dataclass
class AppContext:
    db: object  # Replace with your actual DB type
    config: dict

@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
    """Manage application lifecycle with type-safe context"""
    print("Server starting up")
    # Initialize resources on startup; create_database_connection and
    # load_config are placeholders for your own initialization code
    db = await create_database_connection()
    config = load_config()
    try:
        # Yield context to the server
        yield AppContext(db=db, config=config)
    finally:
        # Clean up on shutdown
        await db.close()
        print("Server shutting down")

# Create server with lifespan
mcp = FastMCP("LifespanServer", lifespan=app_lifespan)

@mcp.tool()
async def db_query(query: str, ctx: Context) -> str:
    """Query the database using lifespan context"""
    # Access the typed lifespan context
    app_ctx = ctx.request_context.lifespan_context
    # Use the database connection from lifespan
    result = await app_ctx.db.execute(query)
    return str(result)
```
## Transport Protocols
MCP supports multiple transport protocols for client-server communication.
### stdio Transport
stdio is the default transport, ideal for CLI applications and local development:
```python
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("StdioServer")
# Run with stdio transport (default)
if __name__ == "__main__":
    mcp.run()  # Equivalent to mcp.run(transport="stdio")
```
### SSE Transport
Server-Sent Events (SSE) transport is ideal for web applications:
```python
from mcp.server.fastmcp import FastMCP
mcp = FastMCP(
    "SseServer",
    host="0.0.0.0",  # Listen on all interfaces
    port=8000,       # Port for the SSE server
)

# Run with SSE transport
if __name__ == "__main__":
    mcp.run(transport="sse")
```
### Implementing Custom SSE Server
For more control over the SSE implementation:
```python
import asyncio
import uvicorn
from starlette.applications import Starlette
from starlette.routing import Mount, Route
from mcp.server.sse import SseServerTransport
from mcp.server.fastmcp import FastMCP
# Create FastMCP server
mcp = FastMCP("CustomSseServer")

# Create a single SSE transport shared by the SSE endpoint and the
# message-posting endpoint (both must use the same instance)
sse = SseServerTransport("/messages/")

# Define handler for SSE connections
async def handle_sse(request):
    # Connect SSE streams
    async with sse.connect_sse(
        request.scope, request.receive, request._send
    ) as streams:
        # Run the MCP server over the SSE streams
        await mcp._mcp_server.run(
            streams[0],
            streams[1],
            mcp._mcp_server.create_initialization_options(),
        )

# Create Starlette app with routes
app = Starlette(
    debug=True,
    routes=[
        Route("/sse", endpoint=handle_sse),
        Mount("/messages/", app=sse.handle_post_message),
    ],
)

# Run the custom SSE server
async def main():
    config = uvicorn.Config(
        app,
        host="0.0.0.0",
        port=8000,
        log_level="info",
    )
    server = uvicorn.Server(config)
    await server.serve()

if __name__ == "__main__":
    asyncio.run(main())
```
### Using the SSE Transport Implementation
Here's a detailed look at the SSE transport implementation for advanced use cases:
```python
from mcp.server.sse import SseServerTransport
from starlette.applications import Starlette
from starlette.routing import Mount, Route
import uvicorn
from mcp.server.lowlevel import Server
# Create a low-level server
server = Server("SSE-Example")

# Set up your server handlers
@server.list_tools()
async def list_tools():
    # Your implementation: return a list of types.Tool
    return []

@server.call_tool()
async def call_tool(name, arguments):
    # Your implementation: return a list of content items
    raise ValueError(f"Unknown tool: {name}")

# Create SSE transport
sse = SseServerTransport("/messages/")

# Define SSE connection handler
async def handle_sse(request):
    async with sse.connect_sse(
        request.scope, request.receive, request._send
    ) as streams:
        read_stream, write_stream = streams
        # Run the MCP server over the SSE streams
        await server.run(
            read_stream,
            write_stream,
            server.create_initialization_options(),
        )

# Create Starlette app
app = Starlette(
    debug=True,
    routes=[
        Route("/sse", endpoint=handle_sse),
        Mount("/messages/", app=sse.handle_post_message),
    ],
)

# Run the server
if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
## Error Handling
Proper error handling is essential for robust MCP servers.
### Tool Error Handling
```python
from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.exceptions import ToolError
mcp = FastMCP("ErrorHandlingServer")
@mcp.tool()
def divide(a: float, b: float) -> float:
    """Divide a by b"""
    try:
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return a / b
    except Exception as e:
        # Option 1: raise ToolError for a structured error response
        raise ToolError(f"Error in divide tool: {str(e)}")
        # Option 2: return the error as a string (less ideal)
        # return f"Error: {str(e)}"
```
### Resource Error Handling
```python
from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.exceptions import ResourceError
mcp = FastMCP("ErrorHandlingServer")
@mcp.resource("file://{path}")
async def read_file(path: str) -> str:
"""Read a file by path"""
try:
with open(path, 'r') as f:
return f.read()
except FileNotFoundError:
raise ResourceError(f"File not found: {path}")
except PermissionError:
raise ResourceError(f"Permission denied: {path}")
except Exception as e:
raise ResourceError(f"Error reading file: {str(e)}")
```
### Global Error Handling
For low-level servers, you can implement global error handling:
```python
from mcp.server.lowlevel import Server
from mcp.server.stdio import stdio_server

server = Server("ErrorHandlingServer")

async def run_with_error_handling():
    try:
        # Obtain streams from a transport (stdio here), then run the
        # server so handler exceptions come back as error responses
        async with stdio_server() as (read_stream, write_stream):
            await server.run(
                read_stream,
                write_stream,
                server.create_initialization_options(),
                raise_exceptions=False,  # Return errors as responses
            )
    except Exception as e:
        # Handle fatal server errors
        print(f"Fatal server error: {str(e)}")
```
## Advanced Topics
### Multiple Server Coordination
You can orchestrate multiple MCP servers for complex use cases:
```python
import asyncio
from mcp.server.fastmcp import FastMCP
# Create multiple servers (give the SSE servers distinct ports so they
# don't clash)
data_server = FastMCP("DataServer")
tool_server = FastMCP("ToolServer", port=8000)
prompt_server = FastMCP("PromptServer", port=8001)

# Define handlers for each server
@data_server.resource("data://users")
def get_users():
    return "[User data]"

@tool_server.tool()
def process_user(user_id: str):
    return f"Processing user {user_id}"

@prompt_server.prompt()
def user_analysis(user_id: str):
    return f"Analyze this user: {user_id}"

# Run the servers concurrently
async def run_servers():
    await asyncio.gather(
        data_server.run_stdio_async(),
        tool_server.run_sse_async(),
        prompt_server.run_sse_async(),
    )

if __name__ == "__main__":
    asyncio.run(run_servers())
```
### Progress Reporting for Long-Running Operations
```python
from mcp.server.fastmcp import FastMCP, Context
import asyncio
mcp = FastMCP("ProgressServer")
@mcp.tool()
async def long_operation(steps: int, ctx: Context) -> str:
    """A long-running operation with progress reporting"""
    # Log the start
    await ctx.info(f"Starting operation with {steps} steps")
    # Process steps, reporting progress as we go
    for i in range(steps):
        # Report current progress
        await ctx.report_progress(i, steps)
        # Log the current step
        await ctx.info(f"Processing step {i + 1}/{steps}")
        # Simulate work
        await asyncio.sleep(1)
    # Report completion
    await ctx.report_progress(steps, steps)
    return f"Completed {steps} steps"
```
## Low-Level Server API
For more control, you can use the low-level Server API:
```python
import asyncio
from mcp.server.lowlevel import Server, NotificationOptions
from mcp.server.models import InitializationOptions
from mcp.server.stdio import stdio_server
import mcp.types as types

# Create a server
server = Server("LowLevelServer")

# Define handlers
@server.list_tools()
async def handle_list_tools():
    return [
        types.Tool(
            name="echo",
            description="Echo the input",
            inputSchema={
                "type": "object",
                "properties": {
                    "text": {
                        "type": "string",
                        "description": "Text to echo",
                    }
                },
                "required": ["text"],
            },
        )
    ]

@server.call_tool()
async def handle_call_tool(name, arguments):
    if name == "echo":
        text = arguments.get("text", "")
        return [types.TextContent(type="text", text=f"Echo: {text}")]
    raise ValueError(f"Unknown tool: {name}")

# Run the server
async def run_server():
    async with stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="LowLevelServer",
                server_version="1.0.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                ),
            ),
        )

if __name__ == "__main__":
    asyncio.run(run_server())
```
## Practical Examples
### Complete SSE Server Example
Here's a self-contained example of a server using SSE transport.
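The sketch below assumes the default SSE endpoints (`/sse` and `/messages/`); the server name, resource, and tool are illustrative:
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ExampleSseServer", host="127.0.0.1", port=8000)

@mcp.resource("status://server")
def server_status() -> str:
    """A simple status resource"""
    return "OK"

@mcp.tool()
def echo(text: str) -> str:
    """Echo the input back to the caller"""
    return f"Echo: {text}"

if __name__ == "__main__":
    mcp.run(transport="sse")
```
Clients connect to `http://127.0.0.1:8000/sse` and post messages to `/messages/`.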
### Complex SQLite Explorer
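A simplified sketch of such an explorer, assuming a local `example.db` SQLite file: the schema is exposed as a resource and queries run through a deliberately read-only tool:
```python
import sqlite3
from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.exceptions import ToolError

DB_PATH = "example.db"  # assumed database file

mcp = FastMCP("SQLiteExplorer")

@mcp.resource("schema://tables")
def get_schema() -> str:
    """Return the CREATE statements for all tables"""
    conn = sqlite3.connect(DB_PATH)
    try:
        rows = conn.execute(
            "SELECT sql FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
        return "\n".join(sql for (sql,) in rows if sql)
    finally:
        conn.close()

@mcp.tool()
def query(sql: str) -> str:
    """Run a read-only SQL query"""
    # Reject anything that is not a SELECT to keep the tool read-only
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ToolError("Only SELECT queries are allowed")
    conn = sqlite3.connect(DB_PATH)
    try:
        rows = conn.execute(sql).fetchall()
        return "\n".join(str(row) for row in rows)
    except sqlite3.Error as e:
        raise ToolError(f"SQL error: {e}")
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()
```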
### FastMCP Command-Line Tool Helper
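Assuming the `mcp` command installed via `mcp[cli]`, the usual helper workflow looks like this (verify the subcommands against your installed version with `mcp --help`):
```bash
# Run a server with the MCP Inspector for interactive testing
mcp dev server.py

# Install a server into Claude Desktop
mcp install server.py --name "My Server"

# Run a server directly
mcp run server.py
```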
### Validation with Pydantic
Pydantic models give tool parameters declarative input validation:
```python
from pydantic import BaseModel, Field
from mcp.server.fastmcp import FastMCP

class UserInput(BaseModel):
    name: str = Field(..., min_length=2)
    age: int = Field(..., ge=0, le=150)

mcp = FastMCP("ValidationServer")

@mcp.tool()
def process_user(user: UserInput) -> str:
    """Process user data"""
    return f"Processed user: {user.name}, age {user.age}"
```
## Conclusion
This documentation covers the essential aspects of building MCP servers with the Python SDK. By understanding the core concepts, server architecture, and implementation patterns, you can develop robust MCP servers that provide resources, tools, and prompts to LLM applications.
The examples provided demonstrate use cases including SSE transport, database access, image content, and error handling. Use them as starting points for your own implementations, adapting them to your specific requirements.
For further assistance, refer to:
- The [MCP specification](https://spec.modelcontextprotocol.io)
- The [GitHub repository](https://github.com/modelcontextprotocol/python-sdk)
- The [Examples directory](https://github.com/modelcontextprotocol/python-sdk/tree/main/examples)
Happy building with MCP!