CC-Meta (Claude Code Metaprompter)

An MCP (Model Context Protocol) server that evaluates prompts using AI to provide detailed feedback on clarity, completeness, and effectiveness. It supports OpenAI models (such as o3-2025-04-16) as well as Anthropic models, and its analysis covers prompt quality, strengths, weaknesses, and suggested improvements.

CC-Meta lets you iterate on your Claude Code prompts without leaving the terminal. Instead of switching to the web client to test and refine prompts, you get instant AI feedback on clarity, specificity, and completeness right in your current workflow. This keeps you in context and speeds up the process of crafting effective prompts.
Before & After: Asking "Build a calculator app"
Related MCP server: Interactive Feedback MCP
Features
Multi-model support - Use any OpenAI or Anthropic model
Flexible API keys - Provide your own API key for each evaluation
Two tools available:
ping - Test if the server is connected and working
evaluate - Get AI-powered analysis of your prompts
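For reference, here is a minimal sketch of how two such tools are typically registered with the MCP TypeScript SDK. The actual implementation lives in this repo's src/ directory; the argument shape and the placeholder evaluation logic below are assumptions.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "prompt-evaluator", version: "1.0.0" });

// Health check: confirms the server is reachable from Claude Code.
server.tool("ping", "Test if the server is connected and working", {}, async () => ({
  content: [{ type: "text", text: "pong" }],
}));

// Evaluation: the real server forwards the prompt to the configured model;
// here a placeholder string stands in for that call.
server.tool(
  "evaluate",
  "Get AI-powered analysis of a prompt",
  { prompt: z.string() }, // argument name is an assumption
  async ({ prompt }) => ({
    content: [{ type: "text", text: `Evaluation of: ${prompt}` }],
  })
);

await server.connect(new StdioServerTransport());
```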
Setup
Install dependencies:

```bash
npm install   # or: yarn install
```

Build the project:

```bash
npm run build
```

Configure your model and API key: edit the .mcp.json file to set your preferred model and API key (use "o3" or "opus-4" in place of "sonnet-4" if you prefer):

```json
{
  "mcpServers": {
    "prompt-evaluator": {
      "command": "node",
      "args": ["./prompt-evaluator-mcp/start.js"],
      "env": {
        "PROMPT_EVAL_MODEL": "sonnet-4",
        "PROMPT_EVAL_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
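The two environment variables above are what the server reads at startup. Per the features list, you can also provide your own API key with an individual evaluation; a rough sketch of how that precedence could work (the fallback logic and the per-call parameter are assumptions):

```typescript
// Sketch only: the env var names come from .mcp.json above; the override logic is an assumption.
const model = process.env.PROMPT_EVAL_MODEL ?? "sonnet-4";

function resolveApiKey(perCallKey?: string): string {
  // A key supplied with the evaluate call wins; otherwise fall back to the configured one.
  const key = perCallKey ?? process.env.PROMPT_EVAL_API_KEY;
  if (!key) {
    throw new Error("No API key available: set PROMPT_EVAL_API_KEY or pass a key with the call");
  }
  return key;
}
```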
Usage
Once configured, there are two ways to evaluate prompts:
Quick Slash Command (Recommended)
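The exact command shipped with the repo isn't reproduced here, so treat the following as a hypothetical wrapper. Claude Code custom slash commands are markdown files under .claude/commands/, so a file such as .claude/commands/eval.md (name assumed) could forward its arguments to the evaluate tool:

```markdown
Use the evaluate tool from the prompt-evaluator MCP server to review the prompt below.
Report the score, strengths, areas for improvement, and a suggested rewrite if one is needed.

Prompt to evaluate: $ARGUMENTS
```

You would then invoke it as /eval Build a calculator app (command name assumed).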
Direct MCP Function Calls
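You can also ask Claude Code to call the tools by name, for example:

```
Use the ping tool from prompt-evaluator to check the connection.
Use the evaluate tool from prompt-evaluator to review this prompt: "Build a calculator app"
```

Under the hood this becomes a standard MCP tools/call request; the argument name here is an assumption:

```json
{ "method": "tools/call", "params": { "name": "evaluate", "arguments": { "prompt": "Build a calculator app" } } }
```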
Supported Models
OpenAI: o3 (o3-2025-04-16)
Anthropic: opus-4 (claude-opus-4-20250514), sonnet-4 (claude-sonnet-4-20250514)
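Internally, the short aliases resolve to the full model identifiers listed above; a sketch of that mapping (the table's shape is an assumption, the IDs are the ones documented here):

```typescript
// Alias-to-model-ID lookup; IDs are taken from the Supported Models list above.
const MODEL_ALIASES: Record<string, string> = {
  "o3": "o3-2025-04-16",
  "opus-4": "claude-opus-4-20250514",
  "sonnet-4": "claude-sonnet-4-20250514",
};
```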
The AI evaluation provides:
Score from 0-10
Specific strengths of your prompt
Areas for improvement
Suggested rewrites when needed
Analysis of:
Clarity of intent
Specificity of requirements
Context provided
Actionability
Edge cases considered
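If you want to post-process the feedback programmatically, the fields above map naturally onto a structure like this (a sketch only; the server returns free-form text, so this shape is an assumption):

```typescript
interface PromptEvaluation {
  score: number;             // 0-10 overall score
  strengths: string[];       // what the prompt already does well
  improvements: string[];    // areas to tighten up
  suggestedRewrite?: string; // present only when a rewrite is warranted
  analysis: {
    clarityOfIntent: string;
    specificityOfRequirements: string;
    contextProvided: string;
    actionability: string;
    edgeCasesConsidered: string;
  };
}
```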
Customization
The evaluation prompt is stored in src/prompt.ts and can be easily customized:
Edit the prompt template to change evaluation criteria
Modify the scoring rubric and weights
Adjust the output format
Add domain-specific evaluation rules
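The exact contents of src/prompt.ts aren't reproduced here, but a customized template might look roughly like this (the export name and wording are assumptions):

```typescript
// src/prompt.ts (sketch only; the real export name and wording may differ)
export const EVALUATION_PROMPT = `
You are a prompt-quality reviewer. Score the prompt from 0 to 10 and comment on:
- Clarity of intent
- Specificity of requirements
- Context provided
- Actionability
- Edge cases considered

Add any domain-specific rules for your team below this line.
`;
```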
After making changes, rebuild with npm run build.