# Elrond MCP - Thinking Augmentation Server

A Model Context Protocol (MCP) server that provides hierarchical LLM critique and synthesis for enhanced decision-making and idea evaluation.
> **Warning**
> Preview Software: This is experimental software in active development and is not intended for production use. Features may change, break, or be removed without notice. Use at your own risk.
## Overview
Elrond MCP implements a multi-agent thinking augmentation system that analyzes proposals through three specialized critique perspectives (positive, neutral, negative) and synthesizes them into comprehensive, actionable insights. This approach helps overcome single-model biases and provides more thorough analysis of complex ideas.
## Features

- Parallel Critique Analysis: Three specialized agents analyze proposals simultaneously from different perspectives
- Structured Responses: Uses Pydantic models and the `instructor` library for reliable, structured outputs
- Google AI Integration: Leverages Gemini 2.5 Flash for critiques and Gemini 2.5 Pro for synthesis
- MCP Compliance: Full Model Context Protocol support for seamless integration with AI assistants
- Comprehensive Analysis: Covers feasibility, risks, benefits, implementation, stakeholder impact, and resource requirements
- Consensus Building: Identifies areas of agreement and disagreement across perspectives
## Architecture
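The critique-and-synthesis flow can be sketched as follows. This is an illustrative outline only, not the project's actual code: the function names are invented, and the real agents call Gemini 2.5 Flash and Gemini 2.5 Pro via the `instructor` library rather than returning placeholder strings.

```python
import asyncio

# Illustrative sketch of the pipeline: three perspective-specific critique
# agents run in parallel, then a synthesis agent merges their outputs.
PERSPECTIVES = ("positive", "neutral", "negative")

async def run_critique(perspective: str, proposal: str) -> str:
    # In the real server this would call Gemini 2.5 Flash with a
    # perspective-specific prompt; here we return a placeholder.
    return f"{perspective} critique of: {proposal}"

async def synthesize(critiques: list[str]) -> str:
    # In the real server this would call Gemini 2.5 Pro to build
    # a consensus view; here we simply join the critiques.
    return " | ".join(critiques)

async def augment_thinking(proposal: str) -> str:
    # Fan out the three critiques concurrently, then synthesize.
    critiques = await asyncio.gather(
        *(run_critique(p, proposal) for p in PERSPECTIVES)
    )
    return await synthesize(list(critiques))

if __name__ == "__main__":
    print(asyncio.run(augment_thinking("adopt a four-day workweek")))
```

Running the critiques with `asyncio.gather` keeps total latency close to a single critique call rather than three sequential ones.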
## Installation

### Prerequisites

- Python 3.13 or higher
- Google AI API key (get one at Google AI Studio)
### Setup

1. Clone the repository
2. Install dependencies
3. Configure your API key
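The steps above typically look like the following. The clone URL is elided here, and the use of `uv` is an assumption (the development script's "sync dependencies" step suggests it); substitute your own URL and tooling:

```shell
git clone <your-clone-url> elrond-mcp
cd elrond-mcp

# Install dependencies (assuming uv as the package manager)
uv sync

# Configure the API key (see Configuration below)
export GEMINI_API_KEY="your-api-key-here"
```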
## Usage

### Running the Server

#### Development Mode

#### Production Mode
### Integration with Claude Desktop

1. Install for Claude Desktop
2. Manual Configuration: add the server to your Claude Desktop MCP settings:
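A typical entry follows the standard Claude Desktop `mcpServers` format. The server name `elrond`, the launch command, and the args below are assumptions; adjust them to match your installation:

```json
{
  "mcpServers": {
    "elrond": {
      "command": "uv",
      "args": ["run", "elrond-mcp"],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```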
## Using the Tools

### Augment Thinking Tool
Analyze any proposal through multi-perspective critique:
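A call might look like the following. The tool and parameter names are illustrative assumptions; check the server's tool listing for the exact schema:

```json
{
  "tool": "augment_thinking",
  "arguments": {
    "proposal": "Adopt a four-day workweek across the engineering organization starting next quarter."
  }
}
```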
### Check System Status Tool
Monitor the health and configuration of the thinking augmentation system:
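For example (again, the tool name is an assumption; consult the server's tool listing):

```json
{
  "tool": "check_system_status",
  "arguments": {}
}
```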
## Response Structure

### Critique Response
Each critique agent provides:
- Executive Summary: Brief overview of the perspective
- Structured Analysis:
  - Feasibility assessment
  - Risk identification
  - Benefit analysis
  - Implementation considerations
  - Stakeholder impact
  - Resource requirements
- Key Insights: 3-5 critical observations
- Confidence Level: Numerical confidence (0.0-1.0)
### Synthesis Response
The synthesis agent provides:
- Executive Summary: High-level recommendation
- Consensus View:
  - Areas of agreement
  - Areas of disagreement
  - Balanced assessment
  - Critical considerations
- Recommendation: Overall guidance
- Next Steps: Concrete action items
- Uncertainty Flags: Areas needing more information
- Overall Confidence: Synthesis confidence level
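The two response shapes above can be sketched as follows. The project defines these as Pydantic models; this sketch uses stdlib dataclasses so it is self-contained, and the field names are inferred from this README rather than taken from the project's source:

```python
from dataclasses import dataclass, field

@dataclass
class CritiqueResponse:
    executive_summary: str
    structured_analysis: dict[str, str]  # feasibility, risks, benefits, ...
    key_insights: list[str]              # 3-5 critical observations
    confidence_level: float              # numerical confidence, 0.0-1.0

    def __post_init__(self) -> None:
        # Mirrors the bounded-confidence constraint described above.
        if not 0.0 <= self.confidence_level <= 1.0:
            raise ValueError("confidence_level must be in [0.0, 1.0]")

@dataclass
class SynthesisResponse:
    executive_summary: str
    consensus_view: dict[str, list[str]]  # agreement, disagreement, ...
    recommendation: str
    next_steps: list[str] = field(default_factory=list)
    uncertainty_flags: list[str] = field(default_factory=list)
    overall_confidence: float = 0.5
```

In the real server, Pydantic (via `instructor`) enforces this structure on the raw LLM output, which is what makes the "Validation Errors" case in Troubleshooting possible.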
## Development

### Project Structure

### Running Tests

### Code Formatting

### Development Script

For convenience, use the provided script to run all quality checks:
This script will:
- Sync dependencies
- Run Ruff linter with auto-fix
- Format code with Ruff
- Execute the full test suite
- Perform final lint check
- Provide a pre-commit checklist
## Configuration

### Environment Variables

- `GEMINI_API_KEY`: Required. Google AI API key
- `LOG_LEVEL`: Logging level (default: `INFO`)
### Model Configuration

- Critique Agents: `gemini-2.5-flash`
- Synthesis Agent: `gemini-2.5-pro`

Models can be customized by modifying the agent initialization in `agents.py`.
## Troubleshooting

### Common Issues

- API Key Not Found
  Solution: Set the `GEMINI_API_KEY` environment variable
- Empty Proposal Error
  Solution: Ensure your proposal is at least 10 characters long
- Model Rate Limits
  Solution: Wait a moment and retry, or check your Google AI quota
- Validation Errors
  Solution: The LLM response did not match the expected structure. This is usually temporary; retry the request
### Debugging
Enable debug logging:
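For example, using the `LOG_LEVEL` environment variable described in the Configuration section:

```shell
# Raise the server's log verbosity before launching it
export LOG_LEVEL=DEBUG
```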
Check system status:
## Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Run the test suite
- Submit a pull request
## License

See the `LICENSE` file.
## Support
For issues and questions:
- Check the troubleshooting section above
- Review the logs for detailed error information
- Open an issue on the repository
## Roadmap
- Support for additional LLM providers (OpenAI, Anthropic)
- Custom critique perspectives and personas
- Performance optimization and caching
- Advanced synthesis algorithms