MCP Prompt Cleaner

by Da-Colon

This project was created as an enhancement of Prompt Cleaner, which was written in TypeScript. Beyond being an enhancement, it is my 'Rosetta stone' project: one I can follow to build a deeper understanding of Python. And yes, obviously, this was coded with the help of Cursor.

[Prompt Cleaner banner]

A Model Context Protocol (MCP) server that uses AI to enhance and clean raw prompts, making them more clear, actionable, and effective.

Features

  • AI-Powered Enhancement: Uses large language models to improve prompt clarity and specificity

  • Concise System Prompt: Uses a structured, efficient prompt format for consistent results

  • Context-Aware Processing: Accepts additional context to guide the enhancement process

  • Mode-Specific Optimization: Supports both "general" and "code" modes for different use cases

  • Quality Assessment: Provides quality scores and detailed feedback on enhanced prompts

  • Two-Level Retry Strategy: HTTP-level retries for network issues, content-level retries for AI output quality

  • Exponential Backoff: Robust error handling with jitter to prevent the thundering-herd problem (see the retry sketch after this list)

  • MCP Integration: Full MCP protocol compliance with stdio transport

  • Production Ready: Comprehensive test coverage, clean code, and robust error handling
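
To make the retry behavior concrete, here is a minimal sketch of exponential backoff with full jitter. The function name and parameters are hypothetical, not the project's actual code; the real retry logic lives in llm/client.py:

# Hypothetical sketch; see llm/client.py for the project's actual retry logic.
import asyncio
import random

async def with_backoff(call, max_retries=3, base_delay=0.5):
    """Retry an async callable with exponential backoff and full jitter."""
    for attempt in range(max_retries + 1):
        try:
            return await call()
        except Exception:
            if attempt == max_retries:
                raise
            # Delay doubles each attempt (0.5s, 1s, 2s, ...); the random
            # jitter keeps many clients from retrying in lockstep
            # (the thundering-herd problem).
            await asyncio.sleep(random.uniform(0, base_delay * (2 ** attempt)))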

Installation

Using uv (recommended)

uv sync

Using pip

pip install -e .

Note: This project uses pyproject.toml for dependency management.

Configuration

Local LLM (LMStudio) - Default Setup

The server is configured by default to work with local LLMs like LMStudio. No API key is required:

# Default configuration (no .env file needed)
# LLM_API_ENDPOINT=http://localhost:1234/v1/chat/completions
# LLM_API_KEY=None (not required for local LLMs)
# LLM_MODEL=local-model

Cloud LLM (OpenAI, Anthropic, etc.)

For cloud-based LLMs, create a .env file in the project root:

# LLM API Configuration
LLM_API_ENDPOINT=https://api.openai.com/v1/chat/completions
LLM_API_KEY=your-api-key-here
LLM_MODEL=gpt-4
LLM_TIMEOUT=60
LLM_MAX_TOKENS=600

# Retry Configuration
CONTENT_MAX_RETRIES=2

Note: .env file support is provided by pydantic-settings - no additional dependencies required.
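
As a rough illustration of how these variables map to settings, a pydantic-settings class along the following lines would pick them up. The field names are assumptions inferred from the variables above; the actual model in config.py may differ:

# Hedged sketch of a pydantic-settings loader; config.py's fields may differ.
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    llm_api_endpoint: str = "http://localhost:1234/v1/chat/completions"
    llm_api_key: str | None = None  # not required for local LLMs
    llm_model: str = "local-model"
    llm_timeout: int = 60
    llm_max_tokens: int = 600
    content_max_retries: int = 2

settings = Settings()  # environment variables take precedence over .env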

LMStudio Setup

  1. Download and install LMStudio

  2. Start LMStudio and load a model

  3. Start the local server (usually on http://localhost:1234)

  4. The MCP server will automatically connect to your local LLM (you can sanity-check the endpoint with the snippet below)
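
Before wiring up the MCP server, you can confirm that LMStudio's OpenAI-compatible endpoint responds. This is a standalone check using httpx (already a project dependency); the endpoint and model name match the default configuration above:

# Standalone sanity check for the local LMStudio endpoint.
import httpx

resp = httpx.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])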

Running the Server

To run the MCP server:

python main.py

Tool Usage

The server provides a clean_prompt tool that accepts:

  • raw_prompt (required): The user's raw, unpolished prompt

  • context (optional): Additional context about the task

  • mode (optional): Processing mode - "general" or "code" (default: "general")

  • temperature (optional): AI sampling temperature 0.0-1.0 (default: 0.2)
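
Put together, the parameter list corresponds to an input model along these lines. This is a hedged sketch; the actual model in schemas.py may differ:

# Illustrative input model; field names follow the parameter list above.
from typing import Literal, Optional
from pydantic import BaseModel, Field

class CleanPromptInput(BaseModel):
    raw_prompt: str = Field(..., description="The user's raw, unpolished prompt")
    context: Optional[str] = Field(None, description="Additional context about the task")
    mode: Literal["general", "code"] = "general"
    temperature: float = Field(0.2, ge=0.0, le=1.0)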

Example Tool Call

The tool is called directly with parameters:

# Direct function call
result = await clean_prompt_tool(
    raw_prompt="help me write code",
    context="web development with Python",
    mode="code",
    temperature=0.1,
)

Or via MCP protocol:

{ "method": "tools/call", "params": { "name": "clean_prompt", "arguments": { "raw_prompt": "help me write code", "context": "web development with Python", "mode": "code", "temperature": 0.1 } } }

Example Response

{ "cleaned": "Help me write Python code for web development. I need assistance with [specific task] using [framework/library]. The code should [requirements] and handle [error cases].", "notes": [ "Added placeholders for specific task and framework", "Specified requirements and error handling" ], "open_questions": [ "What specific web development task?", "Which Python framework?", "What are the exact requirements?" ], "risks": ["Without specific details, the code may not meet requirements"], "unchanged": false, "quality": { "score": 4, "reasons": ["Clear structure", "Identifies missing information", "Actionable guidance"] } }

MCP Client Configuration

Claude Desktop

For Local LLM (LMStudio) - No API Key Required

{ "mcpServers": { "mcp-prompt-cleaner": { "command": "python", "args": ["main.py"] } } }

For Cloud LLM (OpenAI, etc.) - API Key Required

{ "mcpServers": { "mcp-prompt-cleaner": { "command": "python", "args": ["main.py"], "env": { "LLM_API_KEY": "your-api-key-here", "LLM_API_ENDPOINT": "https://api.openai.com/v1/chat/completions", "LLM_MODEL": "gpt-4" } } } }

Other MCP Clients

The server uses stdio transport and can be configured with any MCP-compatible client by pointing it at the main.py file, as in the sketch below.
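
For example, a minimal Python client could call the tool over stdio. This sketch assumes the MCP Python SDK's documented client API and that main.py is in the current directory:

# Minimal MCP stdio client sketch (assumes the `mcp` Python SDK).
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["main.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "clean_prompt",
                {"raw_prompt": "help me write code", "mode": "code"},
            )
            print(result)

asyncio.run(main())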

Development

Running Tests

uv run pytest

Test Coverage

The project includes comprehensive tests for:

  • JSON extraction from mixed content

  • LLM client with retry logic

  • Prompt cleaning functionality

  • MCP protocol integration
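
As a flavor of the suite, a JSON-extraction test might look like the following. The extract_json helper shown here is a naive stand-in for the utility in utils/json_extractor.py, whose exact signature is an assumption:

# Hedged example test; the real extractor lives in utils/json_extractor.py.
import json

def extract_json(text: str) -> dict:
    """Naive stand-in: parse the first {...} span found in mixed text."""
    start, end = text.index("{"), text.rindex("}") + 1
    return json.loads(text[start:end])

def test_extract_json_from_mixed_content():
    mixed = 'Model said: {"cleaned": "Do X", "unchanged": false} -- done.'
    result = extract_json(mixed)
    assert result["cleaned"] == "Do X"
    assert result["unchanged"] is False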

Project Structure

├── main.py                # MCP server with tool registration
├── config.py              # Configuration management
├── schemas.py             # Pydantic models for validation
├── tools/
│   └── cleaner.py         # Main clean_prompt implementation
├── llm/
│   └── client.py          # AI API client with retry logic
├── utils/
│   └── json_extractor.py  # JSON extraction utilities
├── prompts/
│   └── cleaner.md         # AI system prompt
└── tests/                 # Comprehensive test suite

Requirements

  • Python 3.11+

  • MCP Python SDK

  • httpx for HTTP client

  • pydantic for data validation

  • pytest for testing

License

MIT


