Supports OpenAI's API for AI-powered prompt enhancement and cleaning using language models such as GPT-4.
1. Click on "Install Server".
2. Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
3. In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@MCP Prompt Cleaner clean this prompt: 'make a website' with context 'for my bakery'".
4. That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
This project was created as a Python port and enhancement of Prompt Cleaner, which was originally written in TypeScript. It is my "Rosetta stone" project: one I can easily follow to gain a deeper understanding of Python. And yes, this was coded with the help of Cursor.
# MCP Prompt Cleaner

A Model Context Protocol (MCP) server that uses AI to enhance and clean raw prompts, making them more clear, actionable, and effective.
## Features

- **AI-Powered Enhancement**: Uses large language models to improve prompt clarity and specificity
- **Concise System Prompt**: Uses a structured, efficient prompt format for consistent results
- **Context-Aware Processing**: Accepts additional context to guide the enhancement process
- **Mode-Specific Optimization**: Supports both "general" and "code" modes for different use cases
- **Quality Assessment**: Provides quality scores and detailed feedback on enhanced prompts
- **Two-Level Retry Strategy**: HTTP-level retries for network issues, content-level retries for AI output quality
- **Exponential Backoff**: Robust error handling with jitter to prevent thundering herd (see the sketch after this list)
- **MCP Integration**: Full MCP protocol compliance with stdio transport
- **Production Ready**: Comprehensive test coverage, clean code, and robust error handling
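Both retry levels share the same backoff idea. Here is a minimal sketch of exponential backoff with jitter; it is illustrative only, and the function and parameter names are not taken from the project's client code:

```python
import asyncio
import random

async def call_with_backoff(fn, max_retries: int = 3, base_delay: float = 0.5):
    """Retry an async callable with exponential backoff plus jitter (illustrative only)."""
    for attempt in range(max_retries + 1):
        try:
            return await fn()
        except Exception:
            if attempt == max_retries:
                raise
            # Exponential backoff: 0.5s, 1s, 2s, ... plus random jitter to avoid thundering herd
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            await asyncio.sleep(delay)
```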
## Installation

### Using uv (recommended)

```bash
uv sync
```

### Using pip

```bash
pip install -e .
```

**Note:** This project uses `pyproject.toml` for dependency management.
## Configuration

### Local LLM (LMStudio) - Default Setup
The server is configured by default to work with local LLMs like LMStudio. No API key is required:
```
# Default configuration (no .env file needed)
# LLM_API_ENDPOINT=http://localhost:1234/v1/chat/completions
# LLM_API_KEY=None (not required for local LLMs)
# LLM_MODEL=local-model
```

### Cloud LLM (OpenAI, Anthropic, etc.)
For cloud-based LLMs, create a .env file in the project root:
```
# LLM API Configuration
LLM_API_ENDPOINT=https://api.openai.com/v1/chat/completions
LLM_API_KEY=your-api-key-here
LLM_MODEL=gpt-4
LLM_TIMEOUT=60
LLM_MAX_TOKENS=600

# Retry Configuration
CONTENT_MAX_RETRIES=2
```

**Note:** `.env` file support is provided by pydantic-settings - no additional dependencies required.
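For reference, this is roughly how pydantic-settings can load these variables. It is a sketch: the field names and defaults are assumptions for illustration, and the project's actual `config.py` may differ.

```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    """Illustrative settings class; the project's config.py may differ."""
    model_config = SettingsConfigDict(env_file=".env", env_file_encoding="utf-8")

    llm_api_endpoint: str = "http://localhost:1234/v1/chat/completions"
    llm_api_key: str | None = None          # not required for local LLMs
    llm_model: str = "local-model"
    llm_timeout: int = 60
    llm_max_tokens: int = 600
    content_max_retries: int = 2

settings = Settings()  # reads environment variables and .env automatically
```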
### LMStudio Setup

1. Download and install LMStudio
2. Start LMStudio and load a model
3. Start the local server (usually on http://localhost:1234)

The MCP server will automatically connect to your local LLM.
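To confirm the local server is reachable before starting the MCP server, you can query LMStudio's OpenAI-compatible endpoint, for example with httpx (assuming the default port):

```python
import httpx

# List the models LMStudio is serving; a 200 response means the local server is up.
response = httpx.get("http://localhost:1234/v1/models", timeout=5)
response.raise_for_status()
print(response.json())
```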
## Running the Server
To run the MCP server:
```bash
python main.py
```

## Tool Usage

The server provides a `clean_prompt` tool that accepts:

- `raw_prompt` (required): The user's raw, unpolished prompt
- `context` (optional): Additional context about the task
- `mode` (optional): Processing mode - "general" or "code" (default: "general")
- `temperature` (optional): AI sampling temperature 0.0-1.0 (default: 0.2)
### Example Tool Call
The tool is called directly with parameters:
```python
# Direct function call
result = await clean_prompt_tool(
    raw_prompt="help me write code",
    context="web development with Python",
    mode="code",
    temperature=0.1
)
```

Or via MCP protocol:
```json
{
  "method": "tools/call",
  "params": {
    "name": "clean_prompt",
    "arguments": {
      "raw_prompt": "help me write code",
      "context": "web development with Python",
      "mode": "code",
      "temperature": 0.1
    }
  }
}
```

### Example Response
```json
{
  "cleaned": "Help me write Python code for web development. I need assistance with [specific task] using [framework/library]. The code should [requirements] and handle [error cases].",
  "notes": [
    "Added placeholders for specific task and framework",
    "Specified requirements and error handling"
  ],
  "open_questions": [
    "What specific web development task?",
    "Which Python framework?",
    "What are the exact requirements?"
  ],
  "risks": ["Without specific details, the code may not meet requirements"],
  "unchanged": false,
  "quality": {
    "score": 4,
    "reasons": ["Clear structure", "Identifies missing information", "Actionable guidance"]
  }
}
```

## MCP Client Configuration
### Claude Desktop

#### For Local LLM (LMStudio) - No API Key Required
```json
{
  "mcpServers": {
    "mcp-prompt-cleaner": {
      "command": "python",
      "args": ["main.py"]
    }
  }
}
```

#### For Cloud LLM (OpenAI, etc.) - API Key Required
```json
{
  "mcpServers": {
    "mcp-prompt-cleaner": {
      "command": "python",
      "args": ["main.py"],
      "env": {
        "LLM_API_KEY": "your-api-key-here",
        "LLM_API_ENDPOINT": "https://api.openai.com/v1/chat/completions",
        "LLM_MODEL": "gpt-4"
      }
    }
  }
}
```

### Other MCP Clients
The server uses stdio transport and can be configured with any MCP-compatible client by pointing to the main.py file.
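For orientation, a stdio MCP server registered with the MCP Python SDK's FastMCP looks roughly like the sketch below. This is illustrative only; the actual main.py may wire things up differently.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcp-prompt-cleaner")

@mcp.tool()
async def clean_prompt(raw_prompt: str, context: str = "", mode: str = "general",
                       temperature: float = 0.2) -> dict:
    """Enhance a raw prompt and return the cleaned result (body omitted in this sketch)."""
    ...

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport
```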
## Development

### Running Tests

```bash
uv run pytest
```

### Test Coverage
The project includes comprehensive tests for:
- JSON extraction from mixed content (see the sketch after this list)
- LLM client with retry logic
- Prompt cleaning functionality
- MCP protocol integration
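As an illustration of the JSON-extraction case, a test might look like this. The helper shown is hypothetical; the real function in `utils/json_extractor.py` may have a different name and signature.

```python
import json

def extract_first_json(text: str) -> dict:
    """Illustrative helper: pull the first balanced JSON object out of mixed LLM output."""
    start = text.find("{")
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(text[start:i + 1])
    raise ValueError("no JSON object found")

def test_extract_json_from_mixed_content():
    raw = 'Here is the result:\n{"cleaned": "...", "unchanged": false}\nHope that helps!'
    assert extract_first_json(raw)["unchanged"] is False
```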
## Project Structure
```
├── main.py               # MCP server with tool registration
├── config.py             # Configuration management
├── schemas.py            # Pydantic models for validation
├── tools/
│   └── cleaner.py        # Main clean_prompt implementation
├── llm/
│   └── client.py         # AI API client with retry logic
├── utils/
│   └── json_extractor.py # JSON extraction utilities
├── prompts/
│   └── cleaner.md        # AI system prompt
└── tests/                # Comprehensive test suite
```

## Requirements
- Python 3.11+
- MCP Python SDK
- httpx for HTTP client
- pydantic for data validation
- pytest for testing
## License
MIT