# Elrond MCP - Thinking Augmentation Server
A Model Context Protocol (MCP) server that provides hierarchical LLM critique and synthesis for enhanced decision-making and idea evaluation.
> [!WARNING]
> **Preview Software**: This is experimental software in active development and is not intended for production use. Features may change, break, or be removed without notice. Use at your own risk.
## Overview
Elrond MCP implements a multi-agent thinking augmentation system that analyzes proposals through three specialized critique perspectives (positive, neutral, negative) and synthesizes them into comprehensive, actionable insights. This approach helps overcome single-model biases and provides more thorough analysis of complex ideas.
## Features
- **Parallel Critique Analysis**: Three specialized agents analyze proposals simultaneously from different perspectives
- **Structured Responses**: Uses Pydantic models and `instructor` library for reliable, structured outputs
- **Google AI Integration**: Leverages Gemini 2.5 Flash for critiques and Gemini 2.5 Pro for synthesis
- **MCP Compliance**: Full Model Context Protocol support for seamless integration with AI assistants
- **Comprehensive Analysis**: Covers feasibility, risks, benefits, implementation, stakeholder impact, and resource requirements
- **Consensus Building**: Identifies areas of agreement and disagreement across perspectives
## Architecture
```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│    Positive     │    │     Neutral     │    │    Negative     │
│    Critique     │    │    Critique     │    │    Critique     │
│      Agent      │    │      Agent      │    │      Agent      │
│                 │    │                 │    │                 │
│   Gemini 2.5    │    │   Gemini 2.5    │    │   Gemini 2.5    │
│      Flash      │    │      Flash      │    │      Flash      │
└─────────┬───────┘    └─────────┬───────┘    └─────────┬───────┘
          │                      │                      │
          │                      │                      │
          └──────────────────────┼──────────────────────┘
                                 │
                                 ▼
                    ┌─────────────────────────┐
                    │     Synthesis Agent     │
                    │                         │
                    │     Gemini 2.5 Pro      │
                    │                         │
                    │   Consensus + Summary   │
                    └─────────────────────────┘
```
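The flow above maps naturally onto an async fan-out/fan-in. The sketch below is illustrative only; the helper names and signatures are assumptions, not the actual `agents.py` API:

```python
import asyncio

# Hypothetical stand-ins for the real agents in agents.py.
async def run_critique(perspective: str, proposal: str, model: str = "gemini-2.5-flash") -> str:
    """Return a structured critique of `proposal` from one perspective."""
    ...

async def run_synthesis(critiques: list[str], model: str = "gemini-2.5-pro") -> str:
    """Merge the three critiques into a consensus summary."""
    ...

async def consult_the_council(proposal: str) -> str:
    # Fan out: the three critique agents run concurrently.
    critiques = await asyncio.gather(
        run_critique("positive", proposal),
        run_critique("neutral", proposal),
        run_critique("negative", proposal),
    )
    # Fan in: one synthesis pass produces the final answer.
    return await run_synthesis(critiques)
```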
## Installation
### Prerequisites
- Python 3.13 or higher
- Google AI API key (get one at [Google AI Studio](https://aistudio.google.com/))
### Setup
1. **Clone the repository:**
```bash
git clone <repository-url>
cd elrond-mcp
```
2. **Install dependencies:**
```bash
# Using uv (recommended)
uv sync --dev --all-extras
# Or using pip
pip install -e .[dev]
```
3. **Configure API key:**
```bash
export GEMINI_API_KEY="your-gemini-api-key-here"
# Or create a .env file
echo "GEMINI_API_KEY=your-gemini-api-key-here" > .env
```
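Optionally, sanity-check the key before starting the server. This snippet assumes the `google-genai` SDK is available in your environment:

```python
import os

from google import genai  # google-genai SDK, assumed to be installed

# Create a client with the same key the server will use.
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# A tiny request confirms the key and model are reachable.
reply = client.models.generate_content(model="gemini-2.5-flash", contents="Reply with OK")
print(reply.text)
```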
## Usage
### Running the Server
#### Development Mode
```bash
# Using uv
uv run python main.py
# Using MCP CLI (if installed)
mcp dev elrond_mcp/server.py
```
#### Production Mode
```bash
# Direct execution
python main.py
# Or via package entry point
elrond-mcp
```
### Integration with Claude Desktop
1. **Install for Claude Desktop:**
```bash
mcp install elrond_mcp/server.py --name "Elrond Thinking Augmentation"
```
2. **Manual Configuration:**
Add the server under the `mcpServers` key in your Claude Desktop configuration file (`claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "elrond-mcp": {
      "command": "python",
      "args": ["/path/to/elrond-mcp/main.py"],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
### Using the Tools
#### Consult the Council Tool
Analyze any proposal through multi-perspective critique:
```
Use the "consult_the_council" tool with this proposal:
# Project Alpha: AI-Powered Customer Service
## Overview
Implement an AI chatbot to handle 80% of customer service inquiries, reducing response time from 2 hours to 30 seconds.
## Goals
- Reduce operational costs by 40%
- Improve customer satisfaction scores
- Free up human agents for complex issues
## Implementation
- Deploy GPT-4 based chatbot
- Integrate with existing CRM
- 3-month rollout plan
- $200K initial investment
```
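Outside of an AI assistant, the same tool can be exercised with the official `mcp` Python SDK. The snippet below is a sketch; in particular, the `proposal` argument name is an assumption about the tool's input schema, so check `session.list_tools()` for the real one:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the server over stdio, the same way an MCP host would.
    server = StdioServerParameters(
        command="python",
        args=["main.py"],
        env={"GEMINI_API_KEY": "your-api-key-here"},
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # "proposal" is an assumed argument name; consult the tool schema
            # reported by session.list_tools() for the actual one.
            result = await session.call_tool(
                "consult_the_council",
                arguments={"proposal": "# Project Alpha: AI-Powered Customer Service ..."},
            )
            print(result.content)

asyncio.run(main())
```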
#### Check System Status Tool
Monitor the health and configuration of the thinking augmentation system:
```
Use the "check_system_status" tool to verify:
- API key configuration
- Model availability
- System health
```
## Response Structure
### Critique Response
Each critique agent provides:
- **Executive Summary**: Brief overview of the perspective
- **Structured Analysis**:
- Feasibility assessment
- Risk identification
- Benefit analysis
- Implementation considerations
- Stakeholder impact
- Resource requirements
- **Key Insights**: 3-5 critical observations
- **Confidence Level**: Numerical confidence (0.0-1.0)
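As an illustration only, the critique payload roughly corresponds to a Pydantic model like the following; the actual definitions live in `elrond_mcp/models.py` and their field names may differ:

```python
from pydantic import BaseModel, Field

class CritiqueAnalysis(BaseModel):
    feasibility: str
    risks: list[str]
    benefits: list[str]
    implementation_considerations: list[str]
    stakeholder_impact: str
    resource_requirements: str

class CritiqueResponse(BaseModel):
    executive_summary: str
    analysis: CritiqueAnalysis
    key_insights: list[str] = Field(min_length=3, max_length=5)
    confidence: float = Field(ge=0.0, le=1.0)
```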
### Synthesis Response
The synthesis agent provides:
- **Executive Summary**: High-level recommendation
- **Consensus View**:
- Areas of agreement
- Areas of disagreement
- Balanced assessment
- Critical considerations
- **Recommendation**: Overall guidance
- **Next Steps**: Concrete action items
- **Uncertainty Flags**: Areas needing more information
- **Overall Confidence**: Synthesis confidence level
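A matching illustrative sketch for the synthesis payload, again with assumed field names:

```python
from pydantic import BaseModel, Field

class ConsensusView(BaseModel):
    areas_of_agreement: list[str]
    areas_of_disagreement: list[str]
    balanced_assessment: str
    critical_considerations: list[str]

class SynthesisResponse(BaseModel):
    executive_summary: str
    consensus: ConsensusView
    recommendation: str
    next_steps: list[str]
    uncertainty_flags: list[str]
    overall_confidence: float = Field(ge=0.0, le=1.0)
```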
## Development
### Project Structure
```
elrond-mcp/
├── elrond_mcp/
│   ├── __init__.py
│   ├── server.py       # MCP server implementation
│   ├── agents.py       # Critique and synthesis agents
│   ├── client.py       # Centralized Google AI client management
│   └── models.py       # Pydantic data models
├── scripts/            # Development scripts
│   └── check.sh        # Quality check script
├── tests/              # Test suite
├── main.py             # Entry point
├── pyproject.toml      # Project configuration
└── README.md
```
### Running Tests
```bash
# Using uv
uv run pytest
# Using pip
pytest
```
### Code Formatting
```bash
# Format and lint code
uv run ruff format .
uv run ruff check --fix .
# Type checking
uv run mypy elrond_mcp/
```
### Development Script
For convenience, use the provided script to run all quality checks:
```bash
# Run all quality checks (lint, format, test)
./scripts/check.sh
```
This script will:
- Sync dependencies
- Run Ruff linter with auto-fix
- Format code with Ruff
- Execute the full test suite
- Perform final lint check
- Provide a pre-commit checklist
## Configuration
### Environment Variables
- `GEMINI_API_KEY`: Required Google AI API key
- `LOG_LEVEL`: Logging level (default: INFO)
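A minimal sketch of how these variables might be consumed at startup (the `.env` handling assumes `python-dotenv`; the real loading logic in `elrond_mcp` may differ):

```python
import logging
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # pick up a local .env file, if present

api_key = os.getenv("GEMINI_API_KEY")
if not api_key:
    raise RuntimeError("GEMINI_API_KEY is required")

logging.basicConfig(level=os.getenv("LOG_LEVEL", "INFO").upper())
```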
### Model Configuration
- **Critique Agents**: `gemini-2.5-flash`
- **Synthesis Agent**: `gemini-2.5-pro`
Models can be customized by modifying the agent initialization in `agents.py`.
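For illustration, such a change might look roughly like this; the class and parameter names below are hypothetical stand-ins, not the actual `agents.py` interface:

```python
from dataclasses import dataclass

# Hypothetical stand-ins; the real agent classes in elrond_mcp/agents.py
# likely look different. This only shows where model names would be swapped.
@dataclass
class CritiqueAgent:
    perspective: str
    model: str = "gemini-2.5-flash"

@dataclass
class SynthesisAgent:
    model: str = "gemini-2.5-pro"

# Swapping models is then a matter of changing these arguments.
critique_agents = [CritiqueAgent(p) for p in ("positive", "neutral", "negative")]
synthesizer = SynthesisAgent(model="gemini-2.5-pro")
```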
## Troubleshooting
### Common Issues
1. **API Key Not Found**
```
Error: Google AI API key is required
```
**Solution**: Set the `GEMINI_API_KEY` environment variable
2. **Empty Proposal Error**
```
Error: Proposal cannot be empty
```
**Solution**: Ensure your proposal is at least 10 characters long
3. **Model Rate Limits**
```
Error: Rate limit exceeded
```
**Solution**: Wait a moment and retry, or check your Google AI quota
4. **Validation Errors**
```
ValidationError: ...
```
**Solution**: The LLM response didn't match the expected structure. This is usually temporary; retry the request
### Debugging
Enable debug logging:
```bash
export LOG_LEVEL=DEBUG
export GEMINI_API_KEY=your-api-key-here
python main.py
```
Check system status with the `check_system_status` tool to verify the active configuration.
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Run the test suite
6. Submit a pull request
## License
See LICENSE
## Support
For issues and questions:
- Check the troubleshooting section above
- Review the logs for detailed error information
- Open an issue on the repository
## Roadmap
- [ ] Support for additional LLM providers (OpenAI, Anthropic)
- [ ] Custom critique perspectives and personas
- [ ] Performance optimization and caching
- [ ] Advanced synthesis algorithms