ClarifyPrompt MCP
An MCP server that transforms vague prompts into platform-optimized prompts for 58+ AI platforms across 7 categories — with support for registering custom platforms and providing markdown instruction files.
Send a raw prompt. Get back a version specifically optimized for Midjourney, DALL-E, Sora, Runway, ElevenLabs, Claude, ChatGPT, or any of the 58+ supported platforms — with the right syntax, parameters, and structure each platform expects. Register your own platforms and provide custom optimization instructions via .md files.
How It Works
You write: "a dragon flying over a castle at sunset"
ClarifyPrompt returns (for Midjourney):
"a majestic dragon flying over a medieval castle at sunset
--ar 16:9 --v 6.1 --style raw --q 2 --chaos 30 --s 700"
ClarifyPrompt returns (for DALL-E):
"A majestic dragon flying over a castle at sunset. Size: 1024x1024"

Same prompt, different platform, completely different output. ClarifyPrompt knows what each platform expects.
Quick Start
With Claude Desktop
Add to your claude_desktop_config.json:
{
"mcpServers": {
"clarifyprompt": {
"command": "npx",
"args": ["-y", "clarifyprompt-mcp"],
"env": {
"LLM_API_URL": "http://localhost:11434/v1",
"LLM_MODEL": "qwen2.5:7b"
}
}
}
}

With Claude Code
claude mcp add clarifyprompt -- npx -y clarifyprompt-mcp

Set the environment variables in your shell before launching:
export LLM_API_URL=http://localhost:11434/v1
export LLM_MODEL=qwen2.5:7b

With Cursor
Add to your .cursor/mcp.json:
{
"mcpServers": {
"clarifyprompt": {
"command": "npx",
"args": ["-y", "clarifyprompt-mcp"],
"env": {
"LLM_API_URL": "http://localhost:11434/v1",
"LLM_MODEL": "qwen2.5:7b"
}
}
}
}

Supported Platforms (58+ built-in, unlimited custom)
Category | Platforms | Default |
Image (10) | Midjourney, DALL-E 3, Stable Diffusion, Flux, Ideogram, Leonardo AI, Adobe Firefly, Grok Aurora, Google Imagen 3, Recraft | Midjourney |
Video (11) | Sora, Runway Gen-3, Pika Labs, Kling AI, Luma, Minimax/Hailuo, Google Veo 2, Wan, HeyGen, Synthesia, CogVideoX | Runway |
Chat (9) | Claude, ChatGPT, Gemini, Llama, DeepSeek, Qwen, Kimi, GLM, Minimax | Claude |
Code (9) | Claude, ChatGPT, Cursor, GitHub Copilot, Windsurf, DeepSeek Coder, Qwen Coder, Codestral, Gemini | Claude |
Document (8) | Claude, ChatGPT, Gemini, Jasper, Copy.ai, Notion AI, Grammarly, Writesonic | Claude |
Voice (7) | ElevenLabs, OpenAI TTS, Fish Audio, Sesame, Google TTS, PlayHT, Kokoro | ElevenLabs |
Music (4) | Suno AI, Udio, Stable Audio, MusicGen | Suno |
Tools
optimize_prompt
The main tool. Optimizes a prompt for a specific AI platform.
{
"prompt": "a cat sitting on a windowsill",
"category": "image",
"platform": "midjourney",
"mode": "concise"
}

All parameters except prompt are optional. When category and platform are omitted, ClarifyPrompt auto-detects them from the prompt content.
Three calling modes:

Mode | Example |
Zero-config | { "prompt": "a cat sitting on a windowsill" } |
Category only | { "prompt": "a cat sitting on a windowsill", "category": "image" } |
Fully explicit | { "prompt": "a cat sitting on a windowsill", "category": "image", "platform": "midjourney" } |
Parameters:

Parameter | Required | Description |
prompt | Yes | The prompt to optimize |
category | No | One of the 7 categories (auto-detected if omitted) |
platform | No | Platform ID (e.g. midjourney); auto-detected if omitted |
mode | No | Output style: concise, detailed, structured, step-by-step, bullet-points, technical, or simple |
enrich_context | No | Set true to enrich the prompt with web search context |
Response:
{
"originalPrompt": "a dragon flying over a castle at sunset",
"optimizedPrompt": "a majestic dragon flying over a medieval castle at sunset --ar 16:9 --v 6.1 --style raw --q 2 --s 700",
"category": "image",
"platform": "midjourney",
"mode": "concise",
"detection": {
"autoDetected": true,
"detectedCategory": "image",
"detectedPlatform": "midjourney",
"confidence": "high"
},
"metadata": {
"model": "qwen2.5:14b-instruct-q4_K_M",
"processingTimeMs": 3911,
"strategy": "ImageStrategy"
}
}

The detection field only appears when auto-detection was used. When category and platform are provided explicitly, detection is skipped.
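The response shape can be captured in a TypeScript interface. The following is a sketch derived from the example response above; the interface name is illustrative, not part of the ClarifyPrompt source:

```typescript
// Sketch of the optimize_prompt response, derived from the example above.
interface OptimizeResult {
  originalPrompt: string;
  optimizedPrompt: string;
  category: string;
  platform: string;
  mode: string;
  detection?: {
    // present only when auto-detection ran
    autoDetected: boolean;
    detectedCategory: string;
    detectedPlatform: string;
    confidence: string;
  };
  metadata: {
    model: string;
    processingTimeMs: number;
    strategy: string;
  };
}

// Example value matching the response shown above.
const result: OptimizeResult = {
  originalPrompt: "a dragon flying over a castle at sunset",
  optimizedPrompt:
    "a majestic dragon flying over a medieval castle at sunset --ar 16:9 --v 6.1 --style raw --q 2 --s 700",
  category: "image",
  platform: "midjourney",
  mode: "concise",
  detection: {
    autoDetected: true,
    detectedCategory: "image",
    detectedPlatform: "midjourney",
    confidence: "high",
  },
  metadata: {
    model: "qwen2.5:14b-instruct-q4_K_M",
    processingTimeMs: 3911,
    strategy: "ImageStrategy",
  },
};
```

Treating detection as optional mirrors the note above: omit the field when category and platform were passed explicitly.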
list_categories
Lists all 7 categories with platform counts (built-in and custom) and defaults.
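As a sanity check on the counts, the category-to-platform-count pairs from the table above can be sketched as a plain map (counts taken from this README; the actual list_categories response format may differ):

```typescript
// Built-in platform counts per category, per the table above.
const builtInCounts: Record<string, number> = {
  image: 10,
  video: 11,
  chat: 9,
  code: 9,
  document: 8,
  voice: 7,
  music: 4,
};

// 7 categories, 58 built-in platforms in total.
const totalBuiltIn = Object.values(builtInCounts).reduce((a, b) => a + b, 0);
```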
list_platforms
Lists available platforms for a given category, including custom registered platforms. Shows which is the default and whether custom instructions are configured.
list_modes
Lists all 7 output modes with descriptions.
register_platform
Register a new custom AI platform for prompt optimization.
{
"id": "my-llm",
"category": "chat",
"label": "My Custom LLM",
"description": "Internal fine-tuned model",
"syntax_hints": ["JSON mode", "max 2000 tokens"],
"instructions": "Always use structured output format",
"instructions_file": "my-llm.md"
}

Parameter | Required | Description |
id | Yes | Unique ID (lowercase, alphanumeric with hyphens) |
category | Yes | Category this platform belongs to |
label | Yes | Human-readable platform name |
description | Yes | Short description |
syntax_hints | No | Platform-specific syntax hints |
instructions | No | Inline optimization instructions |
instructions_file | No | Path to a .md instructions file |
update_platform
Update a custom platform or add instruction overrides to a built-in platform.
For built-in platforms (e.g. Midjourney, Claude), you can add custom instructions and extra syntax hints without modifying the originals:
{
"id": "midjourney",
"category": "image",
"instructions": "Always use --v 6.1, prefer --style raw",
"syntax_hints_append": ["--no plants", "--tile for patterns"]
}

For custom platforms, all fields can be updated.
unregister_platform
Remove a custom platform or clear instruction overrides from a built-in platform.
{
"id": "my-llm",
"category": "chat"
}

For built-in platforms, use remove_override_only: true to clear your custom instructions without affecting the platform itself.
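For example, clearing a Midjourney override added earlier via update_platform might look like this (argument object only, shown as a TypeScript literal):

```typescript
// Arguments for unregister_platform when clearing a built-in override.
// The override-only flag keeps the built-in platform itself untouched.
const args = {
  id: "midjourney",
  category: "image",
  remove_override_only: true,
};
```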
Custom Platforms & Instructions
ClarifyPrompt supports registering custom platforms and providing optimization instructions — similar to how .cursorrules or CLAUDE.md guide AI behavior.
How It Works
1. Register a custom platform via register_platform
2. Provide instructions inline or as a .md file
3. Optimize prompts targeting your custom platform — instructions are injected into the optimization pipeline
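Put together, the flow above might look like this. These are tool argument objects only, shown as TypeScript literals; the prompt text is illustrative, and the MCP client invoking the tools is omitted:

```typescript
// Steps 1 and 2: register a custom platform with a markdown instruction file.
const registerArgs = {
  id: "my-llm",
  category: "chat",
  label: "My Custom LLM",
  description: "Internal fine-tuned model",
  instructions_file: "my-llm.md",
};

// Step 3: optimize a prompt targeting the custom platform.
const optimizeArgs = {
  prompt: "summarize this incident report",
  category: "chat",
  platform: registerArgs.id, // "my-llm"
};
```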
Instruction Files
Instructions can be provided as markdown files stored at ~/.clarifyprompt/instructions/:
~/.clarifyprompt/
config.json # custom platforms + overrides
instructions/
my-llm.md # instructions for custom platform
midjourney-overrides.md # extra instructions for built-in platform

Example instruction file (my-llm.md):
# My Custom LLM Instructions
## Response Format
- Always output valid JSON
- Include a "reasoning" field before the answer
## Constraints
- Max 2000 tokens
- Temperature should be set low (0.1-0.3) for factual queries
## Style
- Be concise and technical
- Avoid filler phrases

Override Built-in Platforms
You can add custom instructions to any of the 58 built-in platforms using update_platform. This lets you customize how prompts are optimized for platforms like Midjourney, Claude, or Sora without modifying the defaults.
Config Directory
The config directory defaults to ~/.clarifyprompt/ and can be changed via the CLARIFYPROMPT_CONFIG_DIR environment variable. Custom platforms and overrides persist across server restarts.
LLM Configuration
ClarifyPrompt uses an LLM to optimize prompts. It works with any OpenAI-compatible API and with the Anthropic API directly.
Environment Variables
Variable | Required | Description |
LLM_API_URL | Yes | API endpoint URL |
LLM_API_KEY | Depends | API key (not needed for local Ollama) |
LLM_MODEL | Yes | Model name/ID |
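A process consuming these variables can validate them up front. The following is a minimal sketch; the helper name is illustrative and not part of ClarifyPrompt:

```typescript
// Read and validate the LLM settings from the environment.
// LLM_API_KEY stays optional because local Ollama needs no key.
function readLlmConfig(env: Record<string, string | undefined>) {
  const apiUrl = env.LLM_API_URL;
  const model = env.LLM_MODEL;
  if (!apiUrl || !model) {
    throw new Error("LLM_API_URL and LLM_MODEL must be set");
  }
  return { apiUrl, model, apiKey: env.LLM_API_KEY };
}
```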
Provider Examples
Ollama (local, free):
LLM_API_URL=http://localhost:11434/v1
LLM_MODEL=qwen2.5:7b

OpenAI:
LLM_API_URL=https://api.openai.com/v1
LLM_API_KEY=sk-...
LLM_MODEL=gpt-4o

Anthropic Claude:
LLM_API_URL=https://api.anthropic.com/v1
LLM_API_KEY=sk-ant-...
LLM_MODEL=claude-sonnet-4-20250514

Google Gemini:
LLM_API_URL=https://generativelanguage.googleapis.com/v1beta/openai
LLM_API_KEY=your-gemini-key
LLM_MODEL=gemini-2.0-flash

Groq:
LLM_API_URL=https://api.groq.com/openai/v1
LLM_API_KEY=gsk_...
LLM_MODEL=llama-3.3-70b-versatile

DeepSeek:
LLM_API_URL=https://api.deepseek.com/v1
LLM_API_KEY=your-deepseek-key
LLM_MODEL=deepseek-chat

OpenRouter (any model):
LLM_API_URL=https://openrouter.ai/api/v1
LLM_API_KEY=your-openrouter-key
LLM_MODEL=anthropic/claude-sonnet-4

See .env.example for the full list of 20+ supported providers, including Together AI, Fireworks, Mistral, xAI, Cohere, Perplexity, LM Studio, vLLM, LocalAI, Jan, GPT4All, and more.
Web Search (Optional)
Enable context enrichment by setting enrich_context: true in your optimize_prompt call. ClarifyPrompt will search the web for relevant context before optimizing.
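For example, an enriched call adds one flag to the normal optimize_prompt arguments (shown as a TypeScript literal; the prompt text is illustrative):

```typescript
// optimize_prompt arguments with web-search enrichment enabled.
const args = {
  prompt: "a poster about the latest Mars rover mission",
  category: "image",
  enrich_context: true, // triggers a web search before optimization
};
```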
Supported search providers: Tavily (default), Brave Search, Serper, SerpAPI, Exa, and SearXNG (self-hosted, no API key).

Configure the provider and its API key via environment variables:
SEARCH_PROVIDER=tavily
SEARCH_API_KEY=your-key

Before and After
Image (Midjourney)
Before: "a cat sitting on a windowsill"
After: "a tabby cat sitting on a sunlit windowsill, warm golden hour
lighting, shallow depth of field, dust particles in light beams,
cozy interior background, shot on 35mm film, warm amber color
palette --ar 16:9 --v 6.1 --style raw --q 2"

Video (Sora)
Before: "a timelapse of a city"
After: "Cinematic timelapse of a sprawling metropolitan skyline
transitioning from golden hour to blue hour to full night.
Camera slowly dollies forward from an elevated vantage point.
Light trails from traffic appear as the city illuminates.
Clouds move rapidly overhead. Duration: 10s.
Style: documentary cinematography, 4K."

Code (Claude)
Before: "write a function to validate emails"
After: "Write a TypeScript function `validateEmail(input: string): boolean`
that validates email addresses against RFC 5322. Handle edge cases:
quoted local parts, IP address domains, internationalized domain
names. Return boolean, no exceptions. Include JSDoc with examples
of valid and invalid inputs. No external dependencies."

Music (Suno)
Before: "compose a chill lo-fi beat for studying"
After: "Compose an instrumental chill lo-fi beat for studying.
[Tempo: medium] [Genre: lo-fi] [Length: 2 minutes]"

Architecture
clarifyprompt-mcp/
src/
index.ts MCP server entry point (7 tools, 1 resource)
engine/
config/
categories.ts 7 categories, 58 platforms, 7 modes
persistence.ts ConfigStore — JSON config + .md file loading
registry.ts PlatformRegistry — merges built-in + custom platforms
llm/client.ts Multi-provider LLM client (OpenAI + Anthropic)
search/client.ts Web search (6 providers)
optimization/
engine.ts Core orchestrator + auto-detection
types.ts TypeScript interfaces
strategies/
base.ts Abstract base strategy
chat.ts 9 platforms
image.ts 10 platforms
video.ts 11 platforms
voice.ts 7 platforms
music.ts 4 platforms
code.ts 9 platforms
document.ts 8 platforms

Docker
docker build -t clarifyprompt-mcp .
docker run -e LLM_API_URL=http://host.docker.internal:11434/v1 -e LLM_MODEL=qwen2.5:7b clarifyprompt-mcp

Development
git clone https://github.com/LumabyteCo/clarifyprompt-mcp.git
cd clarifyprompt-mcp
npm install
npm run build

Test with MCP Inspector:
npx @modelcontextprotocol/inspector node dist/index.jsSet environment variables in the Inspector's "Environment Variables" section before connecting.