# Prompt Cleaner (MCP Server)

TypeScript MCP server exposing a prompt cleaning tool and health checks. All prompts route through the `cleaner` pipeline, with secret redaction, structured schemas, and client-friendly output normalization. It integrates with OpenAI-compatible APIs, using an LLM to retouch prompts, identify risks, redact sensitive information, and return structured feedback on prompt quality.
## Features

- Tools:
  - `health-ping`: liveness probe returning `{ ok: true }`.
  - `cleaner`: cleans a raw prompt; returns structured JSON with a retouched string, notes, openQuestions, risks, and redactions.
- Secret redaction: sensitive patterns are scrubbed from logs and outputs in `src/redact.ts`.
- Output normalization: `src/server.ts` converts content with `type: "json"` to plain text for clients that reject JSON content types.
- Configurable: LLM base URL, API key, model, timeout, log level; optional local-only enforcement.
- Deterministic model policy: a single model via `LLM_MODEL`; no dynamic model selection/listing by default.
## Requirements
- Node.js >= 20
## Install & Build
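Assuming the standard npm workflow for a TypeScript package (script names follow the usual convention and are not confirmed by this README):

```sh
npm install
npm run build
```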
## Run

- Dev (stdio server):
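A typical invocation, assuming a `dev` script that starts the stdio server:

```sh
npm run dev
```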
## Inspector (Debugging)

Use the MCP Inspector to exercise tools over stdio:
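For example, against the built server (the `dist/server.js` path assumes the default build output):

```sh
npx @modelcontextprotocol/inspector node dist/server.js
```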
## Environment

Configure via `.env` or environment variables:

- `LLM_API_BASE` (string, default `http://localhost:1234/v1`): OpenAI-compatible base URL.
- `LLM_API_KEY` (string, optional): Bearer token for the API.
- `LLM_MODEL` (string, default `open/ai-gpt-oss-20b`): Model identifier sent to the API.
- `LLM_TIMEOUT_MS` (number, default `60000`): Request timeout.
- `LOG_LEVEL` (`error|warn|info|debug`, default `info`): Log verbosity (logs JSON to stderr).
- `ENFORCE_LOCAL_API` (`true|false`, default `false`): If `true`, only allow localhost APIs.
- `LLM_MAX_RETRIES` (number, default `1`): Retry count for retryable HTTP/network errors.
- `RETOUCH_CONTENT_MAX_RETRIES` (number, default `1`): Retries when the cleaner returns non-JSON content.
- `LLM_BACKOFF_MS` (number, default `250`): Initial backoff delay in milliseconds.
- `LLM_BACKOFF_JITTER` (0..1, default `0.2`): Jitter factor applied to backoff.
Example `.env`:
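A minimal configuration for a local LM Studio endpoint, built from the defaults documented above (the API key value is a placeholder):

```ini
LLM_API_BASE=http://localhost:1234/v1
LLM_API_KEY=sk-local-placeholder
LLM_MODEL=open/ai-gpt-oss-20b
LLM_TIMEOUT_MS=60000
LOG_LEVEL=info
ENFORCE_LOCAL_API=true
```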
## Tools (API Contracts)

All tools follow MCP Tool semantics. Content is returned as `[{ type: "json", json: <payload> }]` and normalized to `type: "text"` by the server for clients that require it. A programmatic invocation example follows the tool list below.
- `health-ping`
  - Input: `{}`
  - Output: `{ ok: true }`
- `cleaner`
  - Input: `{ prompt: string, mode?: "code"|"general", temperature?: number }`
  - Output: `{ retouched: string, notes?: string[], openQuestions?: string[], risks?: string[], redactions?: ["[REDACTED]"][] }`
  - Behavior: applies the system prompt from `prompts/cleaner.md`, calls the configured LLM, extracts the first JSON object, validates it with Zod, and redacts secrets.
- `sanitize-text` (alias of `cleaner`)
  - Same input/output schema and behavior as `cleaner`. Exposed for agents that keyword-match on “sanitize”, “PII”, or “redact”.
- `normalize-prompt` (alias of `cleaner`)
  - Same input/output schema and behavior as `cleaner`. Exposed for agents that keyword-match on “normalize”, “format”, or “preprocess”.
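For a quick programmatic smoke test outside an agent, here is a minimal client sketch using the official TypeScript SDK (the `dist/server.js` path and the prompt text are illustrative assumptions):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the built stdio server as a child process.
const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/server.js"],
});

const client = new Client({ name: "cleaner-demo", version: "0.1.0" });
await client.connect(transport);

// Call the cleaner tool with a raw prompt; the aliases accept the same arguments.
const result = await client.callTool({
  name: "cleaner",
  arguments: { prompt: "plz fix my code, my key is sk-test-123", mode: "code" },
});

// Content arrives as type "text" after normalization; the text holds the JSON payload.
console.log(result.content);

await client.close();
```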
## Per-call API key override

`src/llm.ts` accepts `apiKey` in its options for per-call overrides and falls back to `LLM_API_KEY`.
Project Structure
src/server.ts
: MCP server wiring, tool listing/calls, output normalization, logging.src/tools.ts
: Tool registry and dispatch.src/cleaner.ts
: Cleaner pipeline and JSON extraction/validation.src/llm.ts
: LLM client with timeout, retry, and error normalization.src/redact.ts
: Secret redaction utilities.src/config.ts
: Environment configuration and validation.test/*.test.ts
: Vitest suite covering tools, shapes, cleaner, and health.
## Testing
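The suite runs on Vitest; assuming the conventional script wiring:

```sh
npm test
```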
## Design decisions

- Single-model policy: uses `LLM_MODEL` from the environment; no model listing/selection tool, keeping behavior deterministic and the surface area small.
- Output normalization: `src/server.ts` converts `json` content to `text` for clients that reject JSON.
- Secret redaction: `src/redact.ts` scrubs sensitive tokens from logs and outputs.
## Troubleshooting

- LLM timeout: increase `LLM_TIMEOUT_MS`; check network reachability to `LLM_API_BASE`.
- Non-JSON from cleaner: the server retries up to `RETOUCH_CONTENT_MAX_RETRIES`. If the problem persists, reduce `temperature` or ensure the configured model adheres to the output contract.
- HTTP 5xx from LLM: automatic retries up to `LLM_MAX_RETRIES` with exponential backoff (`LLM_BACKOFF_MS`, `LLM_BACKOFF_JITTER`); see the sketch after this list.
- Local API enforcement error: if `ENFORCE_LOCAL_API=true`, `LLM_API_BASE` must point to localhost.
- Secrets in logs/outputs: redaction runs automatically; if you see leaked tokens, update the patterns in `src/redact.ts`.
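The retry delay implied by the backoff settings can be pictured as follows; this is a sketch of the schedule, not the exact code in `src/llm.ts` (exponential doubling and symmetric jitter are assumptions):

```typescript
// Approximate the delay before retry `attempt` (0-based), given
// LLM_BACKOFF_MS (base delay) and LLM_BACKOFF_JITTER (0..1 jitter factor).
function backoffDelay(attempt: number, baseMs = 250, jitter = 0.2): number {
  const exponential = baseMs * 2 ** attempt; // 250, 500, 1000, ...
  const spread = exponential * jitter;       // +/- 20% by default
  return exponential - spread + Math.random() * 2 * spread;
}

// Example: approximate delays for the first three retries.
for (let attempt = 0; attempt < 3; attempt++) {
  console.log(`retry ${attempt + 1}: ~${Math.round(backoffDelay(attempt))} ms`);
}
```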
## Windsurf (example)

Add an MCP server in Windsurf settings, pointing to the built stdio server:
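A typical `mcpServers` entry (the server name, path, and env values are placeholders for your local checkout):

```json
{
  "mcpServers": {
    "prompt-cleaner": {
      "command": "node",
      "args": ["/absolute/path/to/prompt-cleaner/dist/server.js"],
      "env": {
        "LLM_API_BASE": "http://localhost:1234/v1",
        "LLM_MODEL": "open/ai-gpt-oss-20b"
      }
    }
  }
}
```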
Usage:

- In a chat, ask the agent to use `cleaner` with your raw prompt.
- Or invoke tools from the agent UI if exposed by your setup.
## LLM API compatibility

- Works with OpenAI-compatible Chat Completions APIs (e.g., the LM Studio local server) that expose `/v1/chat/completions`; the request shape is sketched after this list.
- Configure via `LLM_API_BASE` and optional `LLM_API_KEY`. Use `ENFORCE_LOCAL_API=true` to restrict to localhost for development.
- Set `LLM_MODEL` to the provider-specific model identifier. This server follows a single-model policy for determinism and reproducibility.
- Providers must return valid JSON; the cleaner includes limited retries when content is not strictly JSON.
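For reference, the upstream request follows the standard Chat Completions shape; a minimal sketch (the exact headers and fields in `src/llm.ts` may differ):

```typescript
// Standard OpenAI-compatible Chat Completions request: the system message
// carries prompts/cleaner.md and the user message carries the raw prompt.
const response = await fetch(`${process.env.LLM_API_BASE}/chat/completions`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    ...(process.env.LLM_API_KEY
      ? { Authorization: `Bearer ${process.env.LLM_API_KEY}` }
      : {}),
  },
  body: JSON.stringify({
    model: process.env.LLM_MODEL,
    messages: [
      { role: "system", content: "<contents of prompts/cleaner.md>" },
      { role: "user", content: "raw prompt to clean" },
    ],
  }),
});
const completion = await response.json();
```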
## Links

- Model Context Protocol (spec): https://modelcontextprotocol.io
- Cleaner system prompt: `prompts/cleaner.md`
## Notes

- Logs are emitted to stderr as JSON lines to avoid interfering with MCP stdio.
- Some clients reject `json` content types; this server normalizes them to `text` automatically.
## Security

- Secrets are scrubbed from logs and cleaner outputs by `src/redact.ts`.
- `ENFORCE_LOCAL_API=true` restricts usage to local API endpoints.