> This project was created as an enhancement of [Prompt Cleaner](https://github.com/Da-Colon/prompt-cleaner-mcp), which was written in TypeScript. Beyond being an enhancement, it is my "Rosetta Stone" project: a codebase I can easily follow to build a deeper understanding of Python. And yes, obviously, this was coded with the help of Cursor.
# MCP Prompt Cleaner

A Model Context Protocol (MCP) server that uses AI to enhance and clean raw prompts, making them more clear, actionable, and effective.
## Features
- **AI-Powered Enhancement**: Uses large language models to improve prompt clarity and specificity
- **Concise System Prompt**: Uses a structured, efficient prompt format for consistent results
- **Context-Aware Processing**: Accepts additional context to guide the enhancement process
- **Mode-Specific Optimization**: Supports both "general" and "code" modes for different use cases
- **Quality Assessment**: Provides quality scores and detailed feedback on enhanced prompts
- **Two-Level Retry Strategy**: HTTP-level retries for network issues, content-level retries for AI output quality
- **Exponential Backoff**: Robust error handling with jitter to prevent thundering herd (see the sketch after this list)
- **MCP Integration**: Full MCP protocol compliance with stdio transport
- **Production Ready**: Comprehensive test coverage, clean code, and robust error handling
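
To make the retry behavior concrete, here is a minimal sketch of HTTP-level retries with exponential backoff and jitter, built on `httpx` (which this project uses). The function name and parameters are illustrative; the project's actual client logic in `llm/client.py` may differ:

```python
import asyncio
import random

import httpx

async def post_with_retries(
    client: httpx.AsyncClient,
    url: str,
    payload: dict,
    max_retries: int = 3,
) -> httpx.Response:
    """POST with exponential backoff plus jitter on transient failures."""
    for attempt in range(max_retries + 1):
        try:
            response = await client.post(url, json=payload)
            response.raise_for_status()
            return response
        except (httpx.TransportError, httpx.HTTPStatusError):
            if attempt == max_retries:
                raise  # out of retries; surface the error to the caller
            # Sleep 2^attempt seconds plus random jitter so concurrent
            # clients don't all retry at the same instant (thundering herd)
            await asyncio.sleep(2 ** attempt + random.uniform(0, 1))
```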
## Installation
### Using uv (recommended)
```bash
uv sync
```
### Using pip
```bash
pip install -e .
```
Note: This project uses `pyproject.toml` for dependency management.
## Configuration
### Local LLM (LMStudio) - Default Setup
The server is configured by default to work with local LLMs like LMStudio. No API key is required:
```bash
# Default configuration (no .env file needed)
# LLM_API_ENDPOINT=http://localhost:1234/v1/chat/completions
# LLM_API_KEY=None (not required for local LLMs)
# LLM_MODEL=local-model
```
### Cloud LLM (OpenAI, Anthropic, etc.)
For cloud-based LLMs, create a `.env` file in the project root:
```bash
# LLM API Configuration
LLM_API_ENDPOINT=https://api.openai.com/v1/chat/completions
LLM_API_KEY=your-api-key-here
LLM_MODEL=gpt-4
LLM_TIMEOUT=60
LLM_MAX_TOKENS=600
# Retry Configuration
CONTENT_MAX_RETRIES=2
```
Note: `.env` file support is provided by `pydantic-settings` - no additional dependencies required.
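
For orientation, here is a minimal sketch of how `config.py` might load these settings with `pydantic-settings`. The field defaults mirror the values shown above, but the actual module may be organized differently:

```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    # Reads matching environment variables (case-insensitive) and .env
    model_config = SettingsConfigDict(env_file=".env")

    llm_api_endpoint: str = "http://localhost:1234/v1/chat/completions"
    llm_api_key: str | None = None  # not required for local LLMs
    llm_model: str = "local-model"
    llm_timeout: int = 60
    llm_max_tokens: int = 600
    content_max_retries: int = 2

settings = Settings()
```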
### LMStudio Setup
1. Download and install [LMStudio](https://lmstudio.ai/)
2. Start LMStudio and load a model
3. Start the local server (usually on `http://localhost:1234`)
4. The MCP server will automatically connect to your local LLM
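
To verify the local server is reachable before starting the MCP server, you can query LMStudio's OpenAI-compatible models endpoint (this assumes the default port shown above):

```python
import httpx

# Lists the models LMStudio is currently serving; a successful response
# means the MCP server will be able to reach the endpoint too
response = httpx.get("http://localhost:1234/v1/models")
print(response.json())
```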
## Running the Server
To run the MCP server:
```bash
python main.py
```
## Tool Usage
The server provides a `clean_prompt` tool that accepts:
- `raw_prompt` (required): The user's raw, unpolished prompt
- `context` (optional): Additional context about the task
- `mode` (optional): Processing mode - "general" or "code" (default: "general")
- `temperature` (optional): AI sampling temperature 0.0-1.0 (default: 0.2)
### Example Tool Call
The tool is called directly with parameters:
```python
# Direct function call
result = await clean_prompt_tool(
    raw_prompt="help me write code",
    context="web development with Python",
    mode="code",
    temperature=0.1,
)
```
Or via MCP protocol:
```json
{
  "method": "tools/call",
  "params": {
    "name": "clean_prompt",
    "arguments": {
      "raw_prompt": "help me write code",
      "context": "web development with Python",
      "mode": "code",
      "temperature": 0.1
    }
  }
}
```
### Example Response
```json
{
  "cleaned": "Help me write Python code for web development. I need assistance with [specific task] using [framework/library]. The code should [requirements] and handle [error cases].",
  "notes": [
    "Added placeholders for specific task and framework",
    "Specified requirements and error handling"
  ],
  "open_questions": [
    "What specific web development task?",
    "Which Python framework?",
    "What are the exact requirements?"
  ],
  "risks": ["Without specific details, the code may not meet requirements"],
  "unchanged": false,
  "quality": {
    "score": 4,
    "reasons": ["Clear structure", "Identifies missing information", "Actionable guidance"]
  }
}
```
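This response shape maps naturally onto Pydantic models. The sketch below is hypothetical (the real definitions live in `schemas.py` and may differ), but it shows how the fields above could be validated:

```python
from pydantic import BaseModel

class Quality(BaseModel):
    score: int  # the example above shows 4; the exact scale is defined in schemas.py
    reasons: list[str]

class CleanResult(BaseModel):
    cleaned: str
    notes: list[str] = []
    open_questions: list[str] = []
    risks: list[str] = []
    unchanged: bool = False
    quality: Quality
```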
## MCP Client Configuration
### Claude Desktop
#### For Local LLM (LMStudio) - No API Key Required
```json
{
  "mcpServers": {
    "mcp-prompt-cleaner": {
      "command": "python",
      "args": ["main.py"]
    }
  }
}
```
#### For Cloud LLM (OpenAI, etc.) - API Key Required
```json
{
  "mcpServers": {
    "mcp-prompt-cleaner": {
      "command": "python",
      "args": ["main.py"],
      "env": {
        "LLM_API_KEY": "your-api-key-here",
        "LLM_API_ENDPOINT": "https://api.openai.com/v1/chat/completions",
        "LLM_MODEL": "gpt-4"
      }
    }
  }
}
```
### Other MCP Clients
The server uses stdio transport and can be configured with any MCP-compatible client by pointing to the `main.py` file.
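
As a rough sketch of what that stdio entry point looks like with the MCP Python SDK's `FastMCP` helper (the project's actual `main.py` may be structured differently):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcp-prompt-cleaner")

@mcp.tool()
async def clean_prompt(
    raw_prompt: str,
    context: str | None = None,
    mode: str = "general",
    temperature: float = 0.2,
) -> dict:
    """Enhance a raw prompt and return the cleaned result."""
    ...  # delegates to the implementation in tools/cleaner.py

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```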
## Development
### Running Tests
```bash
uv run pytest
```
### Test Coverage
The project includes comprehensive tests for:
- JSON extraction from mixed content (see the sketch after this list)
- LLM client with retry logic
- Prompt cleaning functionality
- MCP protocol integration
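
As an illustration of the JSON-extraction problem those tests cover, here is a minimal, hypothetical sketch of pulling a JSON object out of mixed LLM output; the real `utils/json_extractor.py` may use a different strategy:

```python
import json

def extract_json(text: str) -> dict | None:
    """Return the first parseable JSON object found in mixed content."""
    start = text.find("{")
    while start != -1:
        # Try progressively shorter spans ending at each closing brace
        end = text.rfind("}")
        while end > start:
            try:
                return json.loads(text[start:end + 1])
            except json.JSONDecodeError:
                end = text.rfind("}", start, end)
        start = text.find("{", start + 1)
    return None
```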
### Project Structure
```
├── main.py               # MCP server with tool registration
├── config.py             # Configuration management
├── schemas.py            # Pydantic models for validation
├── tools/
│   └── cleaner.py        # Main clean_prompt implementation
├── llm/
│   └── client.py         # AI API client with retry logic
├── utils/
│   └── json_extractor.py # JSON extraction utilities
├── prompts/
│   └── cleaner.md        # AI system prompt
└── tests/                # Comprehensive test suite
```
## Requirements
- Python 3.11+
- MCP Python SDK
- httpx for HTTP client
- pydantic for data validation
- pytest for testing
## License
MIT