Prompt Learning MCP Server
Stateful prompt optimization that learns over time.
An MCP (Model Context Protocol) server that optimizes your prompts using research-backed techniques (APE, OPRO, DSPy patterns) and learns from performance history via embedding-based retrieval.
Features
🧠 Smart Optimization: Uses actual LLM-based evaluation, not heuristics
📈 Learns Over Time: Stores prompt performance in a vector database
🔍 RAG-Powered: Retrieves similar high-performing prompts
⚡ Pattern-Based Quick Wins: Instant improvements without API calls
📊 Analytics: Track what's working across domains
Quick Install
Or manually:
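A typical manual setup might look like the following sketch. The Docker images and ports are the standard ones; the build script and the `claude mcp add` entry point are assumptions, so adjust them to the repository's actual instructions:

```bash
# Start the vector store and cache on their standard ports
docker run -d --name qdrant -p 6333:6333 qdrant/qdrant
docker run -d --name redis -p 6379:6379 redis

# Clone and build the server
git clone https://github.com/erichowens/prompt-learning-mcp.git
cd prompt-learning-mcp
npm install && npm run build

# Register it with Claude Code (entry point assumed)
claude mcp add prompt-learning -e OPENAI_API_KEY=sk-... -- node dist/index.js
```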
Requirements
Node.js 18+
Docker (for Qdrant and Redis)
OpenAI API key (for embeddings)
Usage
Once installed, use these tools in Claude Code:
optimize_prompt
Optimize a prompt using pattern-based and RAG-based techniques:
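For example (the argument names here are illustrative, not the server's confirmed schema):

```json
{
  "tool": "optimize_prompt",
  "arguments": {
    "prompt": "Summarize this research paper",
    "domain": "research"
  }
}
```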
Returns the optimized prompt with improvement details.
retrieve_prompts
Find similar high-performing prompts:
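A sketch of a retrieval call, with assumed parameters:

```json
{
  "tool": "retrieve_prompts",
  "arguments": {
    "query": "summarize research papers",
    "limit": 5
  }
}
```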
record_feedback
Record how a prompt performed (enables learning):
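For instance, assuming the server accepts a numeric score:

```json
{
  "tool": "record_feedback",
  "arguments": {
    "prompt": "Summarize the key findings of this paper...",
    "score": 0.85,
    "notes": "Produced a concise, accurate summary"
  }
}
```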
suggest_improvements
Get quick suggestions without full optimization:
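A minimal example (schema assumed):

```json
{
  "tool": "suggest_improvements",
  "arguments": {
    "prompt": "Write a blog post about AI"
  }
}
```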
get_analytics
View performance trends:
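For example (the domain filter is an assumed parameter):

```json
{
  "tool": "get_analytics",
  "arguments": {
    "domain": "research"
  }
}
```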
How It Works
Cold Start (No History)
Pattern-based improvements: Adds structure, chain-of-thought, constraints
OPRO-style iteration: LLM generates candidates, evaluates, selects best (sketched after this list)
APE-style generation: Creates multiple instruction variants
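A minimal sketch of the OPRO-style loop, written against caller-supplied propose and evaluate functions rather than the server's actual internals:

```typescript
// OPRO-style optimization: the LLM proposes candidate rewrites conditioned
// on scored history, an LLM evaluator scores them, and the best survives.
async function oproOptimize(
  prompt: string,
  rounds: number,
  propose: (best: string, history: string[]) => Promise<string[]>,
  evaluate: (candidate: string) => Promise<number>,
): Promise<string> {
  let best = prompt;
  let bestScore = await evaluate(best);
  const history: string[] = [];

  for (let i = 0; i < rounds; i++) {
    const candidates = await propose(best, history);
    for (const candidate of candidates) {
      const score = await evaluate(candidate);
      history.push(`${score.toFixed(2)}: ${candidate}`);
      if (score > bestScore) {
        best = candidate;
        bestScore = score;
      }
    }
  }
  return best;
}
```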
Warm Start (With History)
Embed the prompt: Creates a vector representation
Retrieve similar: Finds high-performing prompts from the database
Learn from winners: Synthesizes improvements from what worked
Iterate with feedback: Uses evaluation to guide optimization
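In code, the first three warm-start steps might look like this sketch. It uses the real OpenAI and Qdrant JS clients, but the collection name, score threshold, and payload shape are assumptions:

```typescript
import OpenAI from "openai";
import { QdrantClient } from "@qdrant/js-client-rest";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const qdrant = new QdrantClient({ url: "http://localhost:6333" });

async function similarWinners(prompt: string): Promise<string[]> {
  // 1. Embed the prompt
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: prompt,
  });

  // 2. Retrieve similar high-performing prompts ("prompts" collection assumed)
  const hits = await qdrant.search("prompts", {
    vector: res.data[0].embedding,
    limit: 5,
    filter: { must: [{ key: "score", range: { gte: 0.8 } }] },
  });

  // 3. These become the examples the LLM synthesizes improvements from
  return hits.map((hit) => String(hit.payload?.prompt ?? ""));
}
```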
Evaluation
All prompts are scored by an LLM evaluator on:
Clarity (25%): How unambiguous the prompt is
Specificity (25%): Appropriate guidance level
Completeness (20%): Covers all requirements
Structure (15%): Well-organized
Effectiveness (15%): Likely to produce desired output
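The composite score is then a weighted sum of the five criteria above; a minimal sketch (the rubric object shape is an assumption):

```typescript
interface Rubric {
  clarity: number;       // 0-1, scored by the LLM evaluator
  specificity: number;   // 0-1
  completeness: number;  // 0-1
  structure: number;     // 0-1
  effectiveness: number; // 0-1
}

function compositeScore(r: Rubric): number {
  return (
    0.25 * r.clarity +
    0.25 * r.specificity +
    0.2 * r.completeness +
    0.15 * r.structure +
    0.15 * r.effectiveness
  );
}
```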
Architecture
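At a high level, the pieces named in this README fit together roughly like this (a sketch, not an official diagram; Redis's role as a cache is an assumption):

```
Claude Code ──(MCP)──► Prompt Learning MCP Server
                             │
             ┌───────────────┼───────────────┐
             ▼               ▼               ▼
        OpenAI API        Qdrant           Redis
       (embeddings)   (vector store)      (cache)
```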
Configuration
Claude Code Config (~/.claude.json)
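A typical entry might look like this; the server name, command, and args are assumptions, and only the overall mcpServers shape is standard:

```json
{
  "mcpServers": {
    "prompt-learning": {
      "command": "node",
      "args": ["/path/to/prompt-learning-mcp/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "QDRANT_URL": "http://localhost:6333",
        "REDIS_URL": "redis://localhost:6379"
      }
    }
  }
}
```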
Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `QDRANT_URL` | `http://localhost:6333` | Qdrant server URL |
| `REDIS_URL` | `redis://localhost:6379` | Redis server URL |
| `OPENAI_API_KEY` | (required) | For embeddings |
Development
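Assuming a standard Node.js workflow (script names are assumptions, not confirmed package.json entries):

```bash
npm install      # install dependencies
npm run build    # compile the TypeScript sources
npm test         # run the test suite, if one is defined
```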
Troubleshooting
MCP Server Not Starting
Check Docker containers are running:
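For example, filtering for the containers assumed in the manual setup above:

```bash
docker ps --filter "name=qdrant" --filter "name=redis"
```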
Vector DB Connection Failed
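Confirm Qdrant is reachable, assuming the standard local URL:

```bash
curl http://localhost:6333   # a running Qdrant instance returns version info
```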
No Improvements Seen
Ensure OPENAI_API_KEY is set correctly
Check Claude Code logs:
~/.claude/logs/mcp.log
Try with a simple prompt first
Research Foundation
This server implements techniques from:
APE (Zhou et al., 2022): Automatic Prompt Engineer
OPRO (Yang et al., 2023): Optimization by Prompting
DSPy (Khattab et al., 2023): Programmatic prompt optimization
Contextual Retrieval (Anthropic, 2024): Enhanced embedding retrieval
License
MIT
Links
Documentation: https://www.someclaudeskills.com/skills/automatic-stateful-prompt-improver
Skill Definition: Part of the Some Claude Skills collection
Issues: https://github.com/erichowens/prompt-learning-mcp/issues