MCP Prompt Optimizer
An MCP server that automatically analyzes and optimizes AI prompts using the OTA (Optimize-Then-Answer) framework.
What It Does
This MCP server provides an optimize_prompt tool that:
Analyzes prompts - Calculates a clarity score (0-100%) and identifies the domain
Detects risks - Flags security, privacy, policy, safety, and compliance concerns
Asks smart questions - Generates 1-3 targeted questions when clarity < 60%
Enhances prompts - Adds domain-specific requirements (tests for code, accessibility for UX, etc.)
Provides structure - Returns optimized prompts ready for AI processing
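The tool's result can be sketched as a TypeScript shape. All field names here are illustrative assumptions, not the server's actual schema:

```typescript
// Hypothetical shape of an optimize_prompt result (field names are
// assumptions for illustration, not the server's real API).
interface OptimizeResult {
  domain: string;          // e.g. "code", "UX", "data"
  clarityScore: number;    // 0-1 scale
  risks: string[];         // e.g. ["security", "privacy"]
  questions: string[];     // 1-3 questions when clarityScore < 0.6
  optimizedPrompt: string; // enhanced prompt, ready for the model
}

// A vague prompt yields a low score and at least one question:
const example: OptimizeResult = {
  domain: "code",
  clarityScore: 0.45,
  risks: [],
  questions: ["What programming language or framework?"],
  optimizedPrompt: "Build a REST API (enhanced with requirements)",
};
```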
Quick Start
Installation
For Claude Code:
Add the server to your MCP client configuration:
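The exact config file depends on your client; a typical MCP server entry looks like the following (the server name and install path are assumptions):

```json
{
  "mcpServers": {
    "prompt-optimizer": {
      "command": "node",
      "args": ["/path/to/mcp-prompt-optimizer/dist/index.js"]
    }
  }
}
```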
Restart your MCP client (Claude Code, Cursor, etc.)
Usage
Option 1: Use the MCP tool directly
Once installed, use the optimize_prompt tool:
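For example, you might ask your client (the phrasing is illustrative):

```
Use the optimize_prompt tool on: "build me a dashboard"
```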
Option 2: Use the /ori slash command
The /ori (Optimize-Research-Implement) command provides an autonomous workflow with intelligent multi-model selection:
This will:
0. Strategy (Opus) - Design an optimal research plan and select the best models
1. Research (Dynamic) - Automatically search docs, best practices, and the codebase
2. Verify (Sonnet) - Cross-validate findings and check for risks
3. Implement (Sonnet/Haiku) - Apply changes with error handling
4. Document (Haiku) - Update README, CHANGELOG, and other docs
Multi-Model Benefits:
40% cost reduction vs. all-Opus
30% faster execution
Each model used in its optimal zone
See /ori command documentation for details.
Features
Domain Detection
Automatically identifies the domain of your request:
code - Programming, APIs, debugging
UX - UI design, interfaces, accessibility
data - Analytics, statistics, calculations
writing - Content, documentation, articles
research - Studies, investigations, analysis
finance - ROI, budgets, pricing
product - Features, roadmaps, strategy
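Since the README describes detection as keyword matching, it can be sketched roughly like this (the keyword lists and fallback domain are assumptions; the server's actual lists live in src/index.ts):

```typescript
// Illustrative keyword-based domain classification.
const DOMAIN_KEYWORDS: Record<string, string[]> = {
  code: ["api", "bug", "function", "debug", "code"],
  UX: ["ui", "design", "interface", "accessibility"],
  data: ["analytics", "statistics", "metric", "dataset"],
  writing: ["article", "documentation", "blog", "content"],
  research: ["study", "investigate", "analysis", "literature"],
  finance: ["roi", "budget", "pricing", "revenue"],
  product: ["feature", "roadmap", "strategy", "backlog"],
};

function detectDomain(prompt: string): string {
  const text = prompt.toLowerCase();
  let best = "writing"; // fallback domain (assumption)
  let bestHits = 0;
  for (const [domain, keywords] of Object.entries(DOMAIN_KEYWORDS)) {
    const hits = keywords.filter((k) => text.includes(k)).length;
    if (hits > bestHits) {
      best = domain;
      bestHits = hits;
    }
  }
  return best;
}
```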
Clarity Scoring
Calculates a 0-1 clarity score based on:
| Factor | Weight | Measures |
|---|---|---|
| Goal clarity | 30% | Is the objective explicit and measurable? |
| Context completeness | 25% | Are inputs/constraints provided? |
| Format specification | 15% | Is the output format defined? |
| Success criteria | 20% | Are acceptance criteria stated? |
| Technical detail | 10% | Are stack, versions, and specifics included? |
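The weights above combine as a straightforward weighted sum. How each 0-1 sub-score is computed is an assumption in this sketch:

```typescript
// Weighted clarity score using the factor weights from the table above.
const WEIGHTS = {
  goal: 0.30,
  context: 0.25,
  format: 0.15,
  success: 0.20,
  technical: 0.10,
};

type Factors = { [K in keyof typeof WEIGHTS]: number };

function clarityScore(f: Factors): number {
  return Object.entries(WEIGHTS).reduce(
    (sum, [key, w]) => sum + w * f[key as keyof Factors],
    0,
  );
}

// A prompt with a clear goal but little else scores low:
const score = clarityScore({ goal: 1, context: 0.2, format: 0, success: 0, technical: 0 });
// 0.30 + 0.05 = 0.35, below the 0.6 question threshold
```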
Risk Detection
Flags potential concerns:
security - auth, passwords, tokens, vulnerabilities
privacy - PII, email, phone, GDPR
policy - fake, bypass, illegal activities
safety - harm, dangerous content
compliance - medical/legal/financial advice
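Risk detection is described as pattern matching over these categories; a minimal sketch, with illustrative patterns built from the keywords listed above:

```typescript
// Keyword patterns per risk category (the patterns are assumptions
// based on the category descriptions, not the server's actual regexes).
const RISK_PATTERNS: Record<string, RegExp> = {
  security: /\b(auth|password|token|vulnerabilit)/i,
  privacy: /\b(pii|email|phone|gdpr)\b/i,
  policy: /\b(fake|bypass|illegal)\b/i,
  safety: /\b(harm|dangerous)\b/i,
  compliance: /\b(medical|legal|financial) advice\b/i,
};

function detectRisks(prompt: string): string[] {
  return Object.entries(RISK_PATTERNS)
    .filter(([, pattern]) => pattern.test(prompt))
    .map(([category]) => category);
}
```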
Smart Questions
When clarity < 60%, generates targeted questions:
Code domain:
What programming language or framework?
What specific feature/component?
Testing/security needs?
UX domain:
Who are the target users?
What platform (web/mobile)?
Data domain:
What's the data structure?
What specific metrics?
Domain-Specific Enhancement
Adds requirements based on domain:
Code:
UX:
Data:
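A sketch of how domain requirements might be appended to the prompt. The requirement wording here is an assumption, loosely grounded in the "tests for code, accessibility for UX" examples above:

```typescript
// Illustrative domain-specific requirement blocks (actual wording in
// src/index.ts may differ).
const ENHANCEMENTS: Record<string, string[]> = {
  code: ["Include unit tests", "Handle errors explicitly"],
  UX: ["Meet accessibility guidelines", "Specify target platforms"],
  data: ["State data sources and units", "Define metrics precisely"],
};

function enhance(prompt: string, domain: string): string {
  const reqs = ENHANCEMENTS[domain] ?? [];
  if (reqs.length === 0) return prompt; // unknown domain: pass through
  return `${prompt}\n\nRequirements:\n${reqs.map((r) => `- ${r}`).join("\n")}`;
}
```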
Examples
Example 1: Vague Request
Input:
Output:
Example 2: Clear Request with Security
Input:
Output:
Example 3: UX Request
Input:
Output:
Configuration
Adjust Clarity Threshold
Edit src/index.ts:
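The threshold is presumably a constant in src/index.ts; a sketch (the constant's name and location are assumptions):

```typescript
// Prompts scoring below this 0-1 clarity threshold trigger questions.
// Raise it to ask questions more often; lower it to ask less often.
const CLARITY_THRESHOLD = 0.6;

function needsQuestions(clarity: number): boolean {
  return clarity < CLARITY_THRESHOLD;
}
```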
Change Question Limit
In generateQuestions():
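The cap is likely a simple slice; a sketch (variable names are assumptions):

```typescript
// Change MAX_QUESTIONS to adjust how many questions are returned.
const MAX_QUESTIONS = 3;

function limitQuestions(questions: string[]): string[] {
  return questions.slice(0, MAX_QUESTIONS);
}
```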
Add Custom Domain
Add to detectDomain():
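For example, a hypothetical "devops" domain could be added to the keyword map (the map name and keyword list are assumptions):

```typescript
// Adding a new domain means adding a keyword entry like "devops" below.
const DOMAIN_KEYWORDS: Record<string, string[]> = {
  code: ["api", "bug", "function"],
  devops: ["docker", "kubernetes", "ci/cd", "terraform"], // new domain
};
```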
Then add handling in generateQuestions() and createOptimizedPrompt().
Development
Build
Watch Mode
Project Structure
How It Works
The OTA (Optimize-Then-Answer) Loop
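The loop in miniature: analyze the prompt, then either ask clarifying questions or return an enhanced prompt. The analysis stand-in and function names below are illustrative, not the server's implementation:

```typescript
// Minimal OTA loop sketch: Optimize (analyze + clarify) Then Answer.
type OtaResult =
  | { kind: "questions"; questions: string[] }
  | { kind: "optimized"; prompt: string };

function otaLoop(prompt: string): OtaResult {
  // Stand-in for the real clarity analysis: long prompts score higher.
  const clarity = prompt.length > 40 ? 0.8 : 0.4;
  if (clarity < 0.6) {
    return { kind: "questions", questions: ["What is the concrete goal?"] };
  }
  return { kind: "optimized", prompt: `${prompt}\n\nRequirements: include tests.` };
}
```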
Keyword-Based Detection
The server uses keyword matching for:
Domain classification - Fast, deterministic
Clarity scoring - Heuristic-based
Risk detection - Pattern matching
Note: This is intentionally simple and fast. No ML models, no API calls, works offline.
Contributing
Contributions welcome! Areas for improvement:
ML-based domain classification
Multi-language support
Learning from user feedback
Integration with custom knowledge bases
Automatic prompt rewriting (not just enhancement)
License
MIT License - see LICENSE file for details
Related
Support
If this tool helps you get better AI responses, give it a star!
Changelog
v1.1.0 (2025-11-08)
Added
/ori slash command for autonomous research-implement workflow
Intelligent multi-model selection (Opus → Sonnet → Haiku)
Phase 0: Opus creates research strategy
Phase 1: Dynamic model selection based on complexity
Phases 2-4: Optimized model per phase (40% cost savings)
Integrated OODA framework with OTA Loop in optimized_prompts.md
Added automatic web search and documentation research
Implemented error handling and rollback mechanisms
Added automatic documentation updates (README, CHANGELOG)
Created configurable workflow via .claude/ori-config.json
v1.0.0 (2025-11-08)
Initial release
Domain detection (7 domains)
Clarity scoring (0-1 scale)
Risk detection (5 categories)
Smart question generation (max 3)
Domain-specific prompt enhancement
Made with ❤️ for better AI interactions