# MCP Prompt Tester
A simple MCP server that allows agents to test LLM prompts with different providers.
## Features

- Test prompts with OpenAI and Anthropic models
- Configure system prompts, user prompts, and other parameters
- Get formatted responses or error messages
- Easy environment setup with `.env` file support
## Installation
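Assuming the server is distributed as a Python package (the package name below is an assumption; adjust it to match the actual distribution):

```bash
pip install prompt-tester
```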
## API Key Setup
The server requires API keys for the providers you want to use. You can set these up in two ways:
### Option 1: Environment Variables
Set the following environment variables:
- `OPENAI_API_KEY` - Your OpenAI API key
- `ANTHROPIC_API_KEY` - Your Anthropic API key
### Option 2: .env File (Recommended)
1. Create a file named `.env` in your project directory or home directory
2. Add your API keys in the format shown below
3. The server will automatically detect and load these keys
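For example, with placeholder values:

```
OPENAI_API_KEY=your-openai-key-here
ANTHROPIC_API_KEY=your-anthropic-key-here
```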
For convenience, a sample template is included as `.env.example`.
## Usage
Start the server using stdio (default) or SSE transport:
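The launch command depends on how you installed the server; assuming a `prompt-tester` console script (the flag names below are assumptions):

```bash
# stdio transport (default)
prompt-tester

# SSE transport
prompt-tester --transport sse --port 9000
```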
## Available Tools
The server exposes the following tools for MCP-empowered agents:
### 1. `list_providers`
Retrieves available LLM providers and their default models.
Parameters: None required.
Example Response:
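The exact schema may vary by version; a representative response (field names and models shown are illustrative) might look like:

```json
{
  "providers": {
    "openai": {
      "default_model": "gpt-4o-mini"
    },
    "anthropic": {
      "default_model": "claude-3-5-sonnet-20241022"
    }
  }
}
```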
### 2. `test_comparison`
Compares multiple prompts side-by-side, allowing you to test different providers, models, and parameters simultaneously.
Parameters:
- `comparisons` (array): A list of 1-4 comparison configurations, each containing:
  - `provider` (string): The LLM provider to use ("openai" or "anthropic")
  - `model` (string): The model name
  - `system_prompt` (string): The system prompt (instructions for the model)
  - `user_prompt` (string): The user's message/prompt
  - `temperature` (number, optional): Controls randomness
  - `max_tokens` (integer, optional): Maximum number of tokens to generate
  - `top_p` (number, optional): Controls diversity via nucleus sampling
Example Usage:
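A sketch of the tool arguments, testing the same prompt against both providers (model names are illustrative):

```json
{
  "comparisons": [
    {
      "provider": "openai",
      "model": "gpt-4o-mini",
      "system_prompt": "You are a concise technical writer.",
      "user_prompt": "Explain vector embeddings in two sentences.",
      "temperature": 0.7
    },
    {
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20241022",
      "system_prompt": "You are a concise technical writer.",
      "user_prompt": "Explain vector embeddings in two sentences.",
      "temperature": 0.7
    }
  ]
}
```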
### 3. `test_multiturn_conversation`
Manages multi-turn conversations with LLM providers, allowing you to create and maintain stateful conversations.
Modes:
- `start`: Begins a new conversation
- `continue`: Continues an existing conversation
- `get`: Retrieves conversation history
- `list`: Lists all active conversations
- `close`: Closes a conversation
Parameters:
- `mode` (string): Operation mode ("start", "continue", "get", "list", or "close")
- `conversation_id` (string): Unique ID for the conversation (required for continue, get, and close modes)
- `provider` (string): The LLM provider (required for start mode)
- `model` (string): The model name (required for start mode)
- `system_prompt` (string): The system prompt (required for start mode)
- `user_prompt` (string): The user message (used in start and continue modes)
- `temperature` (number, optional): Temperature parameter for the model
- `max_tokens` (integer, optional): Maximum tokens to generate
- `top_p` (number, optional): Top-p sampling parameter
Example Usage (Starting a Conversation):
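A sketch of the arguments for starting a conversation (the model name and prompts are illustrative):

```json
{
  "mode": "start",
  "provider": "openai",
  "model": "gpt-4o-mini",
  "system_prompt": "You are a helpful travel assistant.",
  "user_prompt": "What are the best times of year to visit Japan?"
}
```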
Example Usage (Continuing a Conversation):
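Continuing passes the `conversation_id` returned by the start call (the ID shown is illustrative):

```json
{
  "mode": "continue",
  "conversation_id": "conv-1234",
  "user_prompt": "Which of those periods is best for cherry blossoms?"
}
```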
## Example Usage for Agents
Using the MCP client, an agent can use the tools like this:
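A minimal sketch using the official MCP Python SDK over stdio; the server launch command is an assumption, so substitute whatever starts the server in your environment:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server over stdio (command name is an assumption)
    server_params = StdioServerParameters(command="prompt-tester")

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. Discover providers and their default models
            providers = await session.call_tool("list_providers", {})
            print(providers.content)

            # 2. Run a quick single-configuration test
            result = await session.call_tool(
                "test_comparison",
                {
                    "comparisons": [
                        {
                            "provider": "openai",
                            "model": "gpt-4o-mini",
                            "system_prompt": "You are a helpful assistant.",
                            "user_prompt": "Summarize MCP in one sentence.",
                        }
                    ]
                },
            )
            print(result.content)


asyncio.run(main())
```

The same `call_tool` pattern applies to `test_multiturn_conversation`: pass `mode` and, after the first call, the returned `conversation_id`.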
## MCP Agent Integration
For MCP-empowered agents, integration is straightforward. When your agent needs to test LLM prompts:
1. **Discovery**: The agent can use `list_providers` to discover available models and their capabilities
2. **Simple Testing**: For quick tests, use the `test_comparison` tool with a single configuration
3. **Comparison**: When the agent needs to evaluate different prompts or models, it can use `test_comparison` with multiple configurations
4. **Stateful Interactions**: For multi-turn conversations, the agent can manage a conversation using the `test_multiturn_conversation` tool
This allows agents to:
- Test prompt variants to find the most effective phrasing
- Compare different models for specific tasks
- Maintain context in multi-turn conversations
- Optimize parameters like `temperature` and `max_tokens`
- Track token usage and costs during development
## Configuration
You can set API keys and optional tracing configuration using environment variables:
### Required API Keys
- `OPENAI_API_KEY` - Your OpenAI API key
- `ANTHROPIC_API_KEY` - Your Anthropic API key
### Optional Langfuse Tracing
The server supports Langfuse for tracing and observability of LLM calls. These settings are optional:
- `LANGFUSE_SECRET_KEY` - Your Langfuse secret key
- `LANGFUSE_PUBLIC_KEY` - Your Langfuse public key
- `LANGFUSE_HOST` - URL of your Langfuse instance
If you don't want to use Langfuse tracing, simply leave these settings empty.
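Putting it together, a complete `.env` with both the required keys and optional tracing might look like this (all values are placeholders; the host shown is Langfuse's public cloud endpoint):

```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Optional Langfuse tracing
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_HOST=https://cloud.langfuse.com
```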