Promptheus
Refine and optimize prompts for LLMs
Quick Start
What is Promptheus?
Promptheus analyzes your prompts and refines them with:
Adaptive questioning: Smart detection of what information you need to provide
Multi-provider support: Works with Google, OpenAI, Anthropic, Groq, Qwen, and more
Interactive refinement: Iteratively improve outputs through natural conversation
Session history: Automatically track and reuse past prompts
CLI and Web UI: Use from terminal or browser
Supported Providers
| Provider | Models | Setup |
|---|---|---|
| Google Gemini | gemini-2.0-flash, gemini-1.5-pro | |
| Anthropic Claude | claude-3-5-sonnet, claude-3-opus | |
| OpenAI | gpt-4o, gpt-4-turbo | |
| Groq | llama-3.3-70b, mixtral-8x7b | |
| Alibaba Qwen | qwen-max, qwen-plus | |
| Zhipu GLM | glm-4-plus, glm-4-air | |
| OpenRouter | openrouter/auto (auto-routing) | |
OpenRouter integration in Promptheus is optimized around the openrouter/auto routing model:
Model listing is intentionally minimal: Promptheus does not expose your full OpenRouter account catalog.
You can still specify a concrete model manually with `OPENROUTER_MODEL` or `--model` if your key has access.
Core Features
🧠 Adaptive Task Detection Automatically detects whether your task needs refinement or direct optimization
⚡ Interactive Refinement Ask targeted questions to elicit requirements and improve outputs
📝 Pipeline Integration Works seamlessly in Unix pipelines and shell scripts
🔄 Session Management Track, load, and reuse past prompts automatically
📊 Telemetry & Analytics Anonymous usage and performance metrics for insights (stored locally only; can be disabled)
🌐 Web Interface Beautiful UI for interactive prompt refinement and history management
Configuration
Create a .env file with at least one provider API key:
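For example (the key variable names below are illustrative assumptions; confirm the exact names with the interactive setup or your version's docs — only `OPENROUTER_MODEL` is documented above):

```env
# Illustrative variable names — verify against your version's docs
GOOGLE_API_KEY=your-gemini-key
OPENAI_API_KEY=your-openai-key
# Optional: pin a specific OpenRouter model instead of openrouter/auto
OPENROUTER_MODEL=openrouter/auto
```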
Or run the interactive setup:
Examples
Content Generation
Code Analysis
Interactive Session
Pipeline Integration
Testing & Examples: See sample_prompts.md for test prompts demonstrating adaptive task detection (analysis vs generation).
Telemetry & Analytics
MCP Server
Promptheus includes a Model Context Protocol (MCP) server that exposes prompt refinement capabilities as standardized tools for integration with MCP-compatible clients.
What the MCP Server Does
The Promptheus MCP server provides:
Prompt refinement with Q&A: Intelligent prompt optimization through adaptive questioning
Prompt tweaking: Surgical modifications to existing prompts
Model/provider inspection: Discovery and validation of available AI providers
Environment validation: Configuration checking and connectivity testing
Starting the MCP Server
Prerequisites:
MCP package installed: `pip install mcp` (included in requirements.txt)
At least one provider API key configured (see Configuration)
Available MCP Tools
refine_prompt
Intelligent prompt refinement with optional clarification questions.
Inputs:
`prompt` (required): The initial prompt to refine
`answers` (optional): Dictionary mapping question IDs to answers, e.g. `{"q0": "answer", "q1": "answer"}`
`answer_mapping` (optional): Maps question IDs to original question text
`provider` (optional): Override provider (e.g., "google", "openai")
`model` (optional): Override model name
Response Types:
`{"type": "refined", "prompt": "...", "next_action": "..."}`: Success with refined prompt
`{"type": "clarification_needed", "questions_for_ask_user_question": [...], "answer_mapping": {...}}`: Questions needed
`{"type": "error", "error_type": "...", "message": "..."}`: Error occurred
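A client-side sketch of dispatching on these three response types (how you obtain the response dict depends on your MCP client; the shapes below are the ones documented above):

```python
class NeedsClarification(Exception):
    """Raised when refine_prompt asks for clarification before refining."""

    def __init__(self, questions: list, answer_mapping: dict):
        super().__init__("clarification needed")
        self.questions = questions
        self.answer_mapping = answer_mapping


def handle_refine_response(response: dict) -> str:
    """Dispatch on the three documented refine_prompt response types."""
    kind = response.get("type")
    if kind == "refined":
        return response["prompt"]
    if kind == "clarification_needed":
        # The caller must collect answers and call refine_prompt again
        raise NeedsClarification(
            response["questions_for_ask_user_question"],
            response["answer_mapping"],
        )
    if kind == "error":
        raise RuntimeError(f"{response['error_type']}: {response['message']}")
    raise ValueError(f"unexpected response type: {kind!r}")
```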
tweak_prompt
Apply targeted modifications to existing prompts.
Inputs:
`prompt` (required): Current prompt to modify
`modification` (required): Description of changes (e.g., "make it shorter")
`provider`, `model` (optional): Provider/model overrides
Returns:
`{"type": "refined", "prompt": "..."}`: Modified prompt
list_models
Discover available models from configured providers.
Inputs:
`providers` (optional): List of provider names to query
`limit` (optional): Max models per provider (default: 20)
`include_nontext` (optional): Include vision/embedding models
Returns:
`{"type": "success", "providers": {"google": {"available": true, "models": [...]}}}`
list_providers
Check provider configuration status.
Returns:
`{"type": "success", "providers": {"google": {"configured": true, "model": "..."}}}`
validate_environment
Test environment configuration and API connectivity.
Inputs:
`providers` (optional): Specific providers to validate
`test_connection` (optional): Test actual API connectivity
Returns:
`{"type": "success", "validation": {"google": {"configured": true, "connection_test": "passed"}}}`
Prompt Refinement Workflow with Q&A
The MCP server supports a structured clarification workflow for optimal prompt refinement:
Step 1: Initial Refinement Request
Step 2: Handle Clarification Response
Step 3: Collect User Answers
Use your MCP client's AskUserQuestion tool with the provided questions, then map answers to question IDs.
Step 4: Final Refinement with Answers
Response:
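End to end, the four steps above can be sketched as a simple loop. The `call_tool` and `ask_user` callables are placeholders for your MCP client's tool invocation and its AskUserQuestion mechanism; the response fields are the ones documented for `refine_prompt`:

```python
def refine_with_clarification(call_tool, ask_user, prompt: str) -> str:
    """Run the refine_prompt Q&A loop until a refined prompt is returned.

    call_tool(name, args) -> dict  # placeholder for your MCP client
    ask_user(questions) -> dict    # maps question IDs (q0, q1, ...) to answers
    """
    # Step 1: initial refinement request
    resp = call_tool("refine_prompt", {"prompt": prompt})

    # Step 2: handle a clarification response, if any
    while resp["type"] == "clarification_needed":
        # Step 3: collect user answers for the provided questions
        answers = ask_user(resp["questions_for_ask_user_question"])
        # Step 4: final refinement with answers
        resp = call_tool("refine_prompt", {
            "prompt": prompt,
            "answers": answers,
            "answer_mapping": resp["answer_mapping"],
        })

    if resp["type"] == "error":
        raise RuntimeError(resp["message"])
    return resp["prompt"]
```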
AskUser Integration Contract
The MCP server operates in two modes:
Interactive Mode (when AskUserQuestion is available):
Automatically asks clarification questions via injected AskUserQuestion function
Returns refined prompt immediately after collecting answers
Seamless user experience within supported clients
Structured Mode (fallback for all clients):
Returns a `clarification_needed` response with formatted questions
Client is responsible for calling the AskUserQuestion tool
Answers are mapped back via the `answer_mapping` dictionary
Question Format:
Each question in `questions_for_ask_user_question` includes:
`question`: The question text to display
`header`: Short identifier (Q1, Q2, etc.)
`multiSelect`: Boolean for multi-select options
`options`: Array of `{label, description}` for radio/checkbox questions
Answer Mapping:
Question IDs follow the pattern `q0`, `q1`, `q2`, etc.
The answers dictionary uses these IDs as keys: `{"q0": "answer", "q1": "answer"}`
`answer_mapping` preserves the original question text for provider context
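A minimal helper, following the ID conventions above, for pairing positional user answers with their `q0`, `q1`, ... IDs while preserving the original question text:

```python
def build_answers(questions: list[dict], raw_answers: list[str]) -> tuple[dict, dict]:
    """Pair positional answers with q0, q1, ... IDs.

    Returns (answers, answer_mapping): answers keyed by question ID,
    and the mapping from ID back to the original question text."""
    answers = {f"q{i}": a for i, a in enumerate(raw_answers)}
    mapping = {f"q{i}": q["question"] for i, q in enumerate(questions)}
    return answers, mapping
```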
Troubleshooting MCP
MCP Package Not Installed
Fix: `pip install mcp` or install Promptheus with dev dependencies: `pip install -e .[dev]`
Missing Provider API Keys
Diagnosis: Use list_providers or validate_environment tools to check configuration status
Provider Misconfiguration
Fix: Set missing API keys in .env file or environment variables
Connection Test Failures
Fix: Verify API keys are valid and have necessary permissions
Full Documentation
Quick reference: promptheus --help
Comprehensive guides:
Development
See CLAUDE.md for detailed development guidance.
License
MIT License - see LICENSE for details
Contributing
Contributions welcome! Please see our development guide for contribution guidelines.
Questions? Open an issue | Live demo: promptheus web