Test MCP Server
A dual-transport Model Context Protocol (MCP) server that exposes your API as tools to LLM clients.
Supports two transports:
Stdio (local): For Claude Desktop, Cursor, Windsurf
HTTP/SSE (remote): For OpenAI Responses API and web clients
What is MCP?
The Model Context Protocol (MCP) is a standard that connects AI systems with external tools and data sources. MCP servers expose tools (functions), resources (data), and prompts that LLMs can use via a JSON-RPC interface over stdio.
Architecture
This is a proper MCP server that:
✅ Supports dual transports: stdio (local) and HTTP/SSE (remote)
✅ Uses the official MCP Python SDK (`mcp` package) for stdio
✅ Uses FastAPI for HTTP/SSE transport
✅ Can be launched by MCP clients (Claude Desktop, Cursor, Windsurf)
✅ Can be called remotely by OpenAI Responses API
✅ Exposes tools with strict JSON schemas for deterministic behavior
✅ Includes authentication, rate limiting, and security best practices
✅ Follows SOLID principles with clean separation of concerns
Project Structure
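A sketch of the layout, inferred from the files referenced throughout this README (your checkout may differ):

```
test-mcp/
├── main.py              # Entry point for both transports
├── requirements.txt     # Python dependencies
└── test_mcp/
    ├── server.py        # MCP protocol handling (stdio, JSON-RPC)
    ├── tools.py         # Business logic and API calls
    └── config.py        # Configuration management
```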
Installation
Install dependencies:
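For example, inside a virtual environment:

```bash
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
```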
Configure environment (optional):
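One way is to export the variables documented under Environment Variables below (a `.env` file also works if your setup loads one):

```bash
export API_BASE_URL="http://localhost:8000/api/v1"
export API_KEY="your-api-key"   # optional
export ENVIRONMENT="development"
export DEBUG="true"
```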
Available Tools
1. search_items
Search for items with pagination support.
Input Schema:
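The schema itself lives in `test_mcp/server.py`; purely as an illustration, a search tool with cursor pagination might accept:

```json
{
  "type": "object",
  "properties": {
    "query":  { "type": "string", "minLength": 1 },
    "cursor": { "type": "string", "description": "Opaque pagination cursor" },
    "limit":  { "type": "integer", "minimum": 1, "maximum": 50, "default": 10 }
  },
  "required": ["query"],
  "additionalProperties": false
}
```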
Output:
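And return a compact, stable shape along the lines of:

```json
{
  "items": [ { "id": "item_123", "title": "Example item" } ],
  "next_cursor": "opaque-cursor-or-null"
}
```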
2. get_item
Retrieve detailed information about a single item.
Input Schema:
Output:
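Illustrative only (see `test_mcp/server.py` for the real schema): the input is typically a single required `id` string, and the output a single item object, e.g.:

```json
{
  "id": "item_123",
  "title": "Example item",
  "description": "Full details for a single item"
}
```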
3. health
Check server health status.
Input Schema: `{}` (no parameters)
Output:
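Typically a small status payload, for example:

```json
{ "status": "ok", "environment": "development" }
```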
Usage
Local Usage (Stdio Transport)
Testing Manually
Run the stdio server:
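The exact invocation depends on how `main.py` selects a transport; if stdio is the default, it is simply:

```bash
python main.py
```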
Then send a JSON-RPC request via stdin:
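An MCP server expects an `initialize` handshake before `tools/list`; a minimal exchange (one JSON object per line) looks roughly like this:

```json
{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"protocolVersion": "2024-11-05", "capabilities": {}, "clientInfo": {"name": "smoke-test", "version": "0.0.1"}}}
{"jsonrpc": "2.0", "method": "notifications/initialized"}
{"jsonrpc": "2.0", "id": 2, "method": "tools/list", "params": {}}
```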
Connecting to Claude Desktop
Open your Claude Desktop config file:
macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Add this server configuration:
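A typical entry looks like this — the server name is arbitrary and the path is a placeholder for the absolute path to your checkout:

```json
{
  "mcpServers": {
    "test-mcp": {
      "command": "python",
      "args": ["/absolute/path/to/main.py"],
      "env": {
        "API_BASE_URL": "http://localhost:8000/api/v1"
      }
    }
  }
}
```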
Restart Claude Desktop
The tools will appear in Claude's tool palette
Connecting to Cursor/Windsurf
Add the server to your MCP configuration (similar process to Claude Desktop).
Remote Usage (HTTP/SSE Transport)
Quick Start
Start the HTTP server:
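The entry point depends on how the FastAPI app is exposed; assuming it is `app` in `main.py` and uvicorn is installed, one plausible invocation is:

```bash
uvicorn main:app --host 0.0.0.0 --port 8000
```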
Server runs at http://localhost:8000
Test with curl:
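For example (the endpoint path is an assumption — substitute a route this server actually exposes):

```bash
# Hypothetical health-check route
curl http://localhost:8000/health
```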
Using with OpenAI Responses API
Once deployed to a public URL:
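A sketch using the OpenAI Python SDK's remote MCP tool support; the server URL and label below are placeholders:

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4.1",
    input="Search for items matching 'widgets'",
    tools=[
        {
            "type": "mcp",
            "server_label": "test_mcp",
            "server_url": "https://your-deployment.example.com/sse",  # placeholder
            "require_approval": "never",
        }
    ],
)
print(response.output_text)
```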
See
Customizing for Your API
Option 1: Replace Mock Data with Real API Calls
Edit `test_mcp/tools.py` and uncomment the real API call examples:
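A sketch of what the un-commented version might look like, assuming a `settings` object in `config.py` and a `/items/search` endpoint (both assumptions — match them to your API):

```python
import httpx

from .config import settings  # assumed to expose API_BASE_URL and API_KEY


async def search_items(query: str, cursor: str | None = None, limit: int = 10) -> dict:
    """Call the real API instead of returning mock data."""
    headers = {"Authorization": f"Bearer {settings.API_KEY}"} if settings.API_KEY else {}
    params = {"q": query, "limit": limit}
    if cursor:
        params["cursor"] = cursor

    async with httpx.AsyncClient(base_url=settings.API_BASE_URL, timeout=10.0) as client:
        resp = await client.get("/items/search", params=params, headers=headers)  # hypothetical route
        resp.raise_for_status()
        data = resp.json()

    # Keep the output compact and stable for the LLM
    return {"items": data.get("items", []), "next_cursor": data.get("next_cursor")}
```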
Option 2: Add New Tools
Define the tool schema in `test_mcp/server.py`:
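Assuming `server.py` uses the SDK's low-level `Server` API, a new tool is declared as a `Tool` object and returned from the existing `list_tools` handler — a sketch with a made-up `echo` tool:

```python
from mcp.types import Tool

# Hypothetical example tool; add it to the list returned by the
# existing list_tools handler in test_mcp/server.py.
ECHO_TOOL = Tool(
    name="echo",
    description="Echo a message back to the caller.",
    inputSchema={
        "type": "object",
        "properties": {"message": {"type": "string"}},
        "required": ["message"],
        "additionalProperties": False,
    },
)
```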
Implement the tool in `test_mcp/tools.py`:
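Continuing the hypothetical `echo` example:

```python
async def echo(message: str) -> dict:
    """Hypothetical tool: return a small, stable payload the LLM can rely on."""
    return {"echo": message}
```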
Wire it up in the `call_tool` handler:
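With the SDK's low-level API, the `call_tool` handler dispatches on the tool name and returns MCP content objects — roughly:

```python
import json

from mcp.server import Server
from mcp.types import TextContent

from . import tools

server = Server("test-mcp")  # in the real file, reuse the existing Server instance


@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name == "echo":  # hypothetical tool from the previous steps
        result = await tools.echo(**arguments)
        return [TextContent(type="text", text=json.dumps(result))]
    raise ValueError(f"Unknown tool: {name}")
```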
Best Practices
✅ DO:
Keep tool outputs compact and stable - LLMs rely on predictable shapes
Use opaque cursors for pagination (not page numbers)
Validate inputs strictly with JSON schemas (min/max, enums, defaults)
Return clear error messages - avoid HTML or stack traces
Add timeouts and retries for external API calls
Never expose secrets in tool outputs
❌ DON'T:
Don't return huge blobs of data - summarize or paginate
Don't use page numbers - use cursors for deterministic pagination
Don't hardcode API keys - use environment variables
Don't expose internal IDs or PII unless required
Don't make tools that have side effects without idempotency keys
Key Patterns
Separation of Concerns:
`server.py`: MCP protocol handling (stdio, JSON-RPC)
`tools.py`: Business logic and API calls
`config.py`: Configuration management
Type Safety:
Pydantic models for validation
Python type hints throughout
Strict JSON schemas for tool inputs
Error Handling:
Graceful degradation
Clear error messages
Timeout handling
Determinism:
Stable output formats
Predictable pagination
Consistent error codes
Troubleshooting
Server won't start
Check Python version (3.10+)
Verify all dependencies are installed: `pip install -r requirements.txt`
Check for syntax errors: `python -m py_compile main.py`
Tools not appearing in Claude Desktop
Verify the path in `claude_desktop_config.json` is absolute
Check Claude Desktop logs for errors
Restart Claude Desktop after config changes
API calls failing
Verify `API_BASE_URL` and `API_KEY` are set in the environment
Check network connectivity
Add logging to `tools.py` to debug
Environment Variables
`API_BASE_URL`: Base URL for your API (default: `http://localhost:8000/api/v1`)
`API_KEY`: API authentication key (optional)
`ENVIRONMENT`: Environment name (default: `development`)
`DEBUG`: Enable debug logging (default: `true`)
License
MIT
Contributing
Follow SOLID principles
Add type hints to all functions
Update `ThingsIveLearned.md` with new patterns
Test with Claude Desktop before committing