mcp-test-mcp
An MCP server that helps AI assistants test other MCP servers. It provides tools to connect to target MCP servers, discover their capabilities, execute tools, read resources, and test prompts—all through proper MCP protocol communication.
Features
Connection Management: Connect to any MCP server (STDIO or HTTP transport), auto-detect protocols, track connection state
Tool Testing: List all tools with complete input schemas, call tools with arbitrary arguments, get detailed execution results
Resource Testing: List all resources with metadata, read text and binary content
Prompt Testing: List all prompts with argument schemas, get rendered prompts with custom arguments
LLM Integration: Execute prompts end-to-end with actual LLM inference, supports template variables and JSON extraction
Installation
Prerequisites: Node.js 16+ and Python 3.11+
Choose your AI coding tool:
Claude Desktop
Config file location:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%/Claude/claude_desktop_config.json
Configuration:
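A minimal sketch, assuming the server is launched via uvx (substitute your actual command and arguments):

```json
{
  "mcpServers": {
    "mcp-test-mcp": {
      "command": "uvx",
      "args": ["mcp-test-mcp"]
    }
  }
}
```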
Or use Claude Code CLI:
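```bash
# assumes the server is launched with `uvx mcp-test-mcp`; adjust as needed
claude mcp add mcp-test-mcp -- uvx mcp-test-mcp
```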
Cursor
Config file location:
Global: ~/.cursor/mcp.json
Project: .cursor/mcp.json
Or access via: File → Preferences → Cursor Settings → MCP
Configuration:
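Cursor accepts the same mcpServers shape as the Claude Desktop example above (again assuming a uvx launch command):

```json
{
  "mcpServers": {
    "mcp-test-mcp": {
      "command": "uvx",
      "args": ["mcp-test-mcp"]
    }
  }
}
```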
Windsurf
Config file location: ~/.codeium/windsurf/mcp_config.json
Or access via: Windsurf Settings → Cascade → Plugins
Configuration:
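Windsurf uses the same mcpServers layout; a sketch under the same uvx assumption:

```json
{
  "mcpServers": {
    "mcp-test-mcp": {
      "command": "uvx",
      "args": ["mcp-test-mcp"]
    }
  }
}
```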
VS Code
Requires VS Code 1.99+ with the chat.agent.enabled setting enabled.
Config file location:
Workspace: .vscode/mcp.json
Global: Run "MCP: Open User Configuration" from the Command Palette
Configuration:
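A sketch using VS Code's servers key (launch command assumed, as above):

```json
{
  "servers": {
    "mcp-test-mcp": {
      "type": "stdio",
      "command": "uvx",
      "args": ["mcp-test-mcp"]
    }
  }
}
```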
Note: VS Code uses servers instead of mcpServers and recommends camelCase naming.
Codex
Config file location: ~/.codex/config.toml
Add via CLI:
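A sketch, assuming a recent Codex CLI with the mcp add subcommand and the same uvx launch command:

```bash
codex mcp add mcp-test-mcp -- uvx mcp-test-mcp
```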
Or add manually to config.toml:
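Equivalent TOML, under the same launch-command assumption:

```toml
[mcp_servers.mcp-test-mcp]
command = "uvx"
args = ["mcp-test-mcp"]
```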
To use the execute_prompt_with_llm tool, add environment variables to your configuration:
JSON format (Claude, Cursor, Windsurf, VS Code):
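A sketch extending the earlier entry with an env block; the endpoint, model, and key values are placeholders:

```json
{
  "mcpServers": {
    "mcp-test-mcp": {
      "command": "uvx",
      "args": ["mcp-test-mcp"],
      "env": {
        "LLM_URL": "https://your-llm-endpoint/v1",
        "LLM_MODEL_NAME": "your-model-name",
        "LLM_API_KEY": "your-api-key"
      }
    }
  }
}
```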
TOML format (Codex):
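The same placeholders in TOML:

```toml
[mcp_servers.mcp-test-mcp]
command = "uvx"
args = ["mcp-test-mcp"]

[mcp_servers.mcp-test-mcp.env]
LLM_URL = "https://your-llm-endpoint/v1"
LLM_MODEL_NAME = "your-model-name"
LLM_API_KEY = "your-api-key"
```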
Quick Start
Once configured, test MCP servers through natural conversation:
Connect: "Connect to my MCP server at /path/to/server"
Discover: "What tools does it have?"
Test: "Call the echo tool with message 'Hello'"
Status: "What's the connection status?"
Disconnect: "Disconnect from the server"
Available Tools
Connection Management
connect_to_server: Connect to a target MCP server (stdio or HTTP)
disconnect: Close active connection
get_connection_status: Check connection state and statistics
Tool Testing
list_tools: Get all tools with complete schemas
call_tool: Execute a tool with arguments
Resource Testing
list_resources: Get all resources with metadata
read_resource: Read resource content by URI
Prompt Testing
list_prompts: Get all prompts with argument schemas
get_prompt: Get rendered prompt with arguments
execute_prompt_with_llm: Execute prompts with actual LLM inference
Utility
health_check: Verify server is running
ping: Test connectivity (returns "pong")
echo: Echo a message back
add: Add two numbers
Environment Variables
Core
MCP_TEST_LOG_LEVEL: Logging level (DEBUG, INFO, WARNING, ERROR). Default: INFO
MCP_TEST_CONNECT_TIMEOUT: Connection timeout in seconds. Default: 30.0
LLM Integration (for execute_prompt_with_llm)
LLM_URL: LLM API endpoint URL
LLM_MODEL_NAME: Model name
LLM_API_KEY: API key
Development
Documentation
Testing Guide - Complete guide with LLM integration examples
License
MIT License - see LICENSE for details.