# GEPA MCP Server API Documentation

## Overview

The GEPA (Genetic Evolutionary Prompt Adaptation) MCP Server provides powerful tools for genetic prompt evolution, trajectory recording, reflection-based improvements, and multi-objective optimization using Pareto frontiers.

## API Design Principles

### 1. **Evolutionary Focus**
- All operations center around prompt evolution and optimization
- Support for genetic algorithms with mutation, crossover, and selection
- Multi-generational tracking and improvement

### 2. **Data-Driven Insights**
- Comprehensive trajectory recording for every execution
- Reflection-based failure analysis and improvement suggestions
- Performance metrics and scoring systems

### 3. **Multi-Objective Optimization**
- Pareto frontier-based candidate selection
- Balanced optimization between performance and diversity
- Configurable objective weights and strategies

### 4. **Resilience and Recovery**
- Built-in disaster recovery and backup systems
- Data integrity validation and automatic repair
- Component-level recovery capabilities

## Authentication

**Note**: The GEPA MCP Server currently operates as a local service without authentication. All tools are available through the MCP protocol without additional credentials.

## Request/Response Formats

### Standard Response Structure

All GEPA tools return responses in the following format:

```typescript
interface ToolResponse {
  content: Array<{
    type: 'text' | 'image';
    text?: string;
    data?: string;
    mimeType?: string;
  }>;
  isError?: boolean;
}
```

### Success Response Example

```json
{
  "content": [
    {
      "type": "text",
      "text": "# Operation Successful\n\nDetails about the completed operation..."
    }
  ]
}
```

### Error Response Example

```json
{
  "content": [
    {
      "type": "text",
      "text": "Error executing operation: Invalid parameter value"
    }
  ],
  "isError": true
}
```

## Error Handling Standards

### Error Categories

1. **Validation Errors** (400-level)
   - Invalid parameters
   - Missing required fields
   - Type mismatches
2. **Processing Errors** (500-level)
   - Component initialization failures
   - File system errors
   - LLM adapter failures
3. **Resource Errors** (503-level)
   - Memory exhaustion
   - Disk space issues
   - Component unavailability

### Error Response Format

All errors include:

- **Error message**: Human-readable description
- **Error code**: Machine-readable identifier (when applicable)
- **Context**: Additional details about the failure
- **Suggestions**: Recommended actions to resolve the issue

Example error response:

```
Error executing gepa_start_evolution: taskDescription is required
Suggestion: Provide a clear task description that explains what you want to optimize prompts for.
```
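As a rough sketch, a client can route both shapes through a single helper: successful responses return their text content, while `isError` responses are raised as exceptions. The `callGepaTool` helper below is illustrative, not part of the server API; it only assumes an MCP client whose `callTool(name, args)` resolves to the `ToolResponse` shape above.

```typescript
// Minimal sketch: return text content on success, throw on `isError`.
// `callGepaTool` is an illustrative name, not a server API.
interface ToolResponse {
  content: Array<{ type: 'text' | 'image'; text?: string; data?: string; mimeType?: string }>;
  isError?: boolean;
}

interface McpClient {
  callTool(name: string, args: unknown): Promise<ToolResponse>;
}

async function callGepaTool(client: McpClient, name: string, args: unknown): Promise<string> {
  const response = await client.callTool(name, args);

  // Concatenate the text blocks; image blocks are ignored in this sketch.
  const text = response.content
    .filter((block) => block.type === 'text' && block.text)
    .map((block) => block.text)
    .join('\n');

  if (response.isError) {
    // The error text carries the human-readable message and, often, a suggestion.
    throw new Error(`${name} failed: ${text}`);
  }
  return text;
}
```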
## Rate Limiting

**Current Implementation**: No rate limiting is enforced at the MCP server level. However, the following limits exist:

- **Concurrent LLM Processes**: 3 (configurable)
- **Process Timeout**: 30 seconds (configurable)
- **Max Prompt Length**: 4000 characters
- **Population Size**: Recommended maximum of 50 candidates
- **Rollout Count**: Recommended maximum of 20 per evaluation

## Core Concepts

### 1. Evolution Process

The evolution process follows these stages:

1. **Initialization** (`gepa_start_evolution`)
   - Define task description and objectives
   - Set evolution parameters (population size, generations, mutation rate)
   - Optionally provide seed prompt
2. **Evaluation** (`gepa_evaluate_prompt`)
   - Test prompt candidates across multiple tasks
   - Record performance metrics and scores
   - Update Pareto frontier with results
3. **Reflection** (`gepa_reflect`)
   - Analyze execution trajectories for failure patterns
   - Generate improvement suggestions
   - Guide next generation mutations
4. **Selection** (`gepa_select_optimal`)
   - Choose best candidates from Pareto frontier
   - Balance performance vs. diversity objectives
   - Apply selection pressure for evolution

### 2. Trajectory Recording

Every prompt execution is recorded as a trajectory containing:

- **Execution Steps**: Detailed action log with timestamps
- **Performance Metrics**: Success rate, scores, token usage
- **LLM Interactions**: Model calls, responses, latency
- **Tool Usage**: Tool calls and their outcomes
- **Final Results**: Success status and output quality

### 3. Pareto Frontier Optimization

The system maintains a Pareto frontier of optimal prompt candidates by:

- **Multi-objective evaluation**: Performance vs. diversity scoring
- **Non-dominated sorting**: Identifying Pareto-optimal solutions (see the sketch after this list)
- **Dynamic sampling**: UCB-based candidate selection
- **Archive management**: Maintaining frontier size limits
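As a rough illustration of the non-dominated sorting step, the sketch below keeps only candidates that no other candidate dominates on two objectives. The `Candidate` shape and its `performance`/`diversity` fields are assumptions for the example, not the server's internal data model.

```typescript
// Illustrative dominance check over two objectives (performance, diversity).
// The Candidate type is an assumption for this sketch, not a server type.
interface Candidate {
  promptId: string;
  performance: number; // 0.0-1.0
  diversity: number;   // 0.0-1.0
}

// a dominates b if it is at least as good on every objective and strictly better on one.
function dominates(a: Candidate, b: Candidate): boolean {
  const atLeastAsGood = a.performance >= b.performance && a.diversity >= b.diversity;
  const strictlyBetter = a.performance > b.performance || a.diversity > b.diversity;
  return atLeastAsGood && strictlyBetter;
}

// The Pareto frontier keeps only candidates that no other candidate dominates.
function paretoFrontier(candidates: Candidate[]): Candidate[] {
  return candidates.filter(
    (c) => !candidates.some((other) => other !== c && dominates(other, c))
  );
}
```

In practice the server additionally bounds the archive size and samples from the frontier (UCB-based dynamic sampling), which this sketch omits.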
### 4. Reflection Engine

Automated failure analysis provides:

- **Pattern Recognition**: Common failure modes and bottlenecks
- **Root Cause Analysis**: Identifying specific improvement areas
- **Actionable Suggestions**: Concrete prompt modifications
- **Confidence Scoring**: Reliability of improvement recommendations

## Quick Start Guide

### 1. Initialize Evolution

```typescript
// Start evolution for a code generation task
const response = await mcpClient.callTool('gepa_start_evolution', {
  taskDescription: 'Generate TypeScript interfaces from natural language descriptions',
  seedPrompt: 'Create a TypeScript interface that represents the following concept:',
  config: {
    populationSize: 20,
    generations: 10,
    mutationRate: 0.15
  }
});
```

### 2. Record Execution Results

```typescript
// Record a trajectory after testing a prompt
await mcpClient.callTool('gepa_record_trajectory', {
  promptId: 'seed_evolution_123',
  taskId: 'typescript_interface_task',
  executionSteps: [
    {
      stepNumber: 1,
      action: 'analyze_input',
      reasoning: 'Parse natural language description',
      timestamp: new Date(),
      success: true
    }
  ],
  result: {
    success: true,
    score: 0.85,
    output: { generatedInterface: '...' }
  }
});
```

### 3. Get Optimal Candidates

```typescript
// Retrieve best performing prompts
const frontier = await mcpClient.callTool('gepa_get_pareto_frontier', {
  minPerformance: 0.7,
  limit: 5
});

// Select optimal prompt for specific context
const optimal = await mcpClient.callTool('gepa_select_optimal', {
  taskContext: 'API endpoint generation',
  performanceWeight: 0.8,
  diversityWeight: 0.2
});
```

## Integration Examples

See the [Integration Guide](./INTEGRATION.md) for detailed examples in multiple programming languages and frameworks.

## Best Practices

### 1. Evolution Strategy
- Start with clear, specific task descriptions
- Use representative seed prompts when available
- Begin with smaller populations (10-20) for faster iteration
- Gradually increase generations based on convergence patterns

### 2. Trajectory Recording
- Record all execution attempts, including failures
- Include rich context in execution steps
- Provide accurate performance scores (0.0-1.0 scale)
- Add metadata for debugging and analysis

### 3. Performance Optimization
- Use parallel evaluation when possible
- Monitor memory usage during large evolutions
- Leverage caching for repeated evaluations
- Implement circuit breakers for external dependencies

### 4. Error Handling
- Always validate parameters before tool calls
- Implement retry logic with exponential backoff
- Monitor system health and component status
- Use backup/restore capabilities for critical operations

## Advanced Features

### Disaster Recovery
- **Automated Backups**: Regular system state snapshots
- **Component Recovery**: Individual component restart and repair
- **Data Integrity**: Validation and automatic corruption repair
- **Health Monitoring**: Real-time system status and alerts

### Performance Monitoring
- **Memory Management**: Automatic leak detection and optimization
- **GC Optimization**: Intelligent garbage collection strategies
- **Performance Benchmarks**: Built-in benchmarking and profiling
- **Bottleneck Analysis**: Automated performance issue identification

### Neural Pattern Learning
- **Adaptive Mutations**: Learning-based mutation strategies
- **Pattern Recognition**: Automatic detection of successful patterns
- **Meta-Learning**: Cross-domain knowledge transfer
- **Cognitive Models**: Integration with cognitive pattern frameworks

## API Reference

For detailed tool specifications, parameters, and examples, see:

- [Core Tools](./api/core-tools.md) - Evolution, trajectory, and reflection tools
- [Optimization Tools](./api/optimization-tools.md) - Pareto frontier and selection tools
- [Recovery Tools](./api/recovery-tools.md) - Backup, restore, and recovery tools
- [Legacy Tools](./api/legacy-tools.md) - Backward compatibility tools

## OpenAPI Specification

The complete OpenAPI 3.0 specification is available at:

- [OpenAPI Specification](./api/openapi.yaml)
- [Interactive API Explorer](./api/swagger-ui.html)

## Support and Resources

- **Documentation**: [Full Documentation](../README.md)
- **Examples**: [Example Repository](./examples/)
- **Issues**: [GitHub Issues](https://github.com/gepa-team/gepa-mcp-server/issues)
- **Discussions**: [GitHub Discussions](https://github.com/gepa-team/gepa-mcp-server/discussions)
