# CC-Meta (Claude Code Metaprompter)
CC-Meta is an MCP (Model Context Protocol) server that lets you iterate on your Claude Code prompts without leaving the terminal. Instead of switching to the web client to test and refine prompts, you get instant AI feedback on clarity, specificity, and completeness right in your current workflow, keeping you in context and speeding up the process of crafting effective prompts.
## Before & After: Asking "Build a calculator app"
<table>
<tr>
<td align="center"><b>Without CC-Meta</b></td>
<td align="center"><b>With CC-Meta</b></td>
</tr>
<tr>
<td><img width="100%" alt="Without CC-Meta - vague prompt" src="https://github.com/user-attachments/assets/84d81242-ca29-4c74-aea3-23be135355dd" /></td>
<td><img width="100%" alt="With CC-Meta - detailed feedback" src="https://github.com/user-attachments/assets/90602ddf-d2bd-46a1-9c49-d4145c54d395" /></td>
</tr>
</table>
## Features
- **Multi-model support** - Use any OpenAI or Anthropic model
- **Flexible API keys** - Provide your own API key for each evaluation
- **Two tools available** (sketched below):
  - `ping` - Test if the server is connected and working
  - `evaluate` - Get AI-powered analysis of your prompts
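For orientation, the snippet below is a minimal sketch of how two such tools can be registered with the MCP TypeScript SDK. It is not this repo's actual source; the `runEvaluation` helper and the server metadata are illustrative assumptions.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical helper: in the real server this would call the configured model.
async function runEvaluation(prompt: string): Promise<string> {
  return `Feedback for: ${prompt}`;
}

const server = new McpServer({ name: "prompt-evaluator", version: "1.0.0" });

// `ping` simply confirms the server is reachable.
server.tool("ping", async () => ({
  content: [{ type: "text", text: "pong" }],
}));

// `evaluate` forwards the prompt to the model and returns its analysis.
server.tool("evaluate", { prompt: z.string() }, async ({ prompt }) => ({
  content: [{ type: "text", text: await runEvaluation(prompt) }],
}));

// Servers launched from .mcp.json communicate over stdio.
await server.connect(new StdioServerTransport());
```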
## Setup
1. **Install dependencies:**

   ```bash
   npm install
   # or
   yarn install
   ```

2. **Build the project:**

   ```bash
   npm run build
   ```
3. **Configure your model and API key:**

   Edit the `.mcp.json` file to set your preferred model and API key:

   ```json
   {
     "mcpServers": {
       "prompt-evaluator": {
         "command": "node",
         "args": ["./prompt-evaluator-mcp/start.js"],
         "env": {
           "PROMPT_EVAL_MODEL": "sonnet-4",
           "PROMPT_EVAL_API_KEY": "your-api-key-here"
         }
       }
     }
   }
   ```

   Set `PROMPT_EVAL_MODEL` to `sonnet-4`, `opus-4`, or `o3` (see Supported Models below).
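On startup, the server can pick these values up from its environment. A minimal sketch of that step; only the `PROMPT_EVAL_*` names come from the config above, while the fallback and error handling are assumptions:

```typescript
// Read the configuration injected through the "env" block of .mcp.json.
const modelAlias = process.env.PROMPT_EVAL_MODEL ?? "sonnet-4"; // fallback is an assumption
const apiKey = process.env.PROMPT_EVAL_API_KEY;

if (!apiKey) {
  throw new Error("PROMPT_EVAL_API_KEY is not set; add it to .mcp.json");
}
```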
## Usage
Once configured, you have two ways to evaluate prompts:
### Quick Slash Command (Recommended)
```
/meta Your prompt here without quotes
```
### Direct MCP Function Calls
```
mcp_prompt-evaluator_ping() # Test connection
mcp_prompt-evaluator_evaluate("Your prompt to evaluate")
```
### Supported Models
- **OpenAI**: `o3` (o3-2025-04-16)
- **Anthropic**: `opus-4` (claude-opus-4-20250514), `sonnet-4` (claude-sonnet-4-20250514)
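A rough sketch of how these aliases could map onto provider model IDs; the identifiers come from the list above, while the lookup itself is illustrative rather than the repo's actual code:

```typescript
// Alias-to-model lookup built from the identifiers documented above.
const MODELS: Record<string, { provider: "openai" | "anthropic"; id: string }> = {
  "o3": { provider: "openai", id: "o3-2025-04-16" },
  "opus-4": { provider: "anthropic", id: "claude-opus-4-20250514" },
  "sonnet-4": { provider: "anthropic", id: "claude-sonnet-4-20250514" },
};

function resolveModel(alias: string) {
  const entry = MODELS[alias];
  if (!entry) throw new Error(`Unsupported PROMPT_EVAL_MODEL: ${alias}`);
  return entry;
}
```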
### Evaluation Output

The AI evaluation provides (see the sketch after this list):
- Score from 0-10
- Specific strengths of your prompt
- Areas for improvement
- Suggested rewrites when needed
- Analysis of:
  - Clarity of intent
  - Specificity of requirements
  - Context provided
  - Actionability
  - Edge cases considered
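Expressed as a type, that feedback roughly has the following shape. This interface is only an illustration of the fields listed above, not a type exported by the server:

```typescript
// Illustrative shape of one evaluation, mirroring the bullet list above.
interface PromptEvaluation {
  score: number;                 // 0-10
  strengths: string[];           // what the prompt already does well
  improvements: string[];        // concrete areas to tighten up
  suggestedRewrite?: string;     // only present when a rewrite is warranted
  analysis: {
    clarityOfIntent: string;
    specificityOfRequirements: string;
    contextProvided: string;
    actionability: string;
    edgeCasesConsidered: string;
  };
}
```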
## Customization
The evaluation prompt is stored in `src/prompt.ts` and can be customized (see the sketch after this list):
- Edit the prompt template to change evaluation criteria
- Modify the scoring rubric and weights
- Adjust the output format
- Add domain-specific evaluation rules
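As an assumption about its shape (the real file may differ), `src/prompt.ts` likely exports a template along these lines, and this is where the criteria, rubric, and output format would be edited:

```typescript
// Hypothetical structure for src/prompt.ts; the export name is illustrative.
export const EVALUATION_PROMPT = (userPrompt: string) => `
You are a prompt-engineering reviewer. Evaluate the prompt below.

Score it from 0 to 10 and report specific strengths, areas for
improvement, and a suggested rewrite if one is needed. Consider
clarity of intent, specificity of requirements, context provided,
actionability, and edge cases.

Prompt to evaluate:
${userPrompt}
`;
```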
After making changes, rebuild with `npm run build`.