Elrond MCP - Thinking Augmentation Server

A Model Context Protocol (MCP) server that provides hierarchical LLM critique and synthesis for enhanced decision-making and idea evaluation.

Warning

Preview Software: This is experimental software in active development and is not intended for production use. Features may change, break, or be removed without notice. Use at your own risk.

Overview

Elrond MCP implements a multi-agent thinking augmentation system that analyzes proposals through three specialized critique perspectives (positive, neutral, negative) and synthesizes them into comprehensive, actionable insights. This approach helps overcome single-model biases and provides more thorough analysis of complex ideas.

Features

  • Parallel Critique Analysis: Three specialized agents analyze proposals simultaneously from different perspectives
  • Structured Responses: Uses Pydantic models and instructor library for reliable, structured outputs
  • Google AI Integration: Leverages Gemini 2.5 Flash for critiques and Gemini 2.5 Pro for synthesis
  • MCP Compliance: Full Model Context Protocol support for seamless integration with AI assistants
  • Comprehensive Analysis: Covers feasibility, risks, benefits, implementation, stakeholder impact, and resource requirements
  • Consensus Building: Identifies areas of agreement and disagreement across perspectives

Architecture

┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
│    Positive     │  │     Neutral     │  │    Negative     │
│    Critique     │  │    Critique     │  │    Critique     │
│      Agent      │  │      Agent      │  │      Agent      │
│                 │  │                 │  │                 │
│   Gemini 2.5    │  │   Gemini 2.5    │  │   Gemini 2.5    │
│      Flash      │  │      Flash      │  │      Flash      │
└─────────┬───────┘  └─────────┬───────┘  └─────────┬───────┘
          │                    │                    │
          │                    │                    │
          └────────────────────┼────────────────────┘
                               │
                               ▼
                  ┌─────────────────────────┐
                  │     Synthesis Agent     │
                  │                         │
                  │     Gemini 2.5 Pro      │
                  │                         │
                  │   Consensus + Summary   │
                  └─────────────────────────┘
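
The sketch below illustrates this fan-out/fan-in flow using plain asyncio. The function names (run_critique, run_synthesis, consult_the_council) are illustrative stand-ins, not the actual API exposed by agents.py or server.py.

import asyncio

# Illustrative sketch of the fan-out/fan-in flow in the diagram above.
# run_critique and run_synthesis are hypothetical stand-ins for the real
# agent calls in elrond_mcp/agents.py.

PERSPECTIVES = ("positive", "neutral", "negative")

async def run_critique(perspective: str, proposal: str) -> str:
    # The real server calls Gemini 2.5 Flash with a perspective-specific prompt.
    return f"{perspective} critique of: {proposal}"

async def run_synthesis(critiques: list[str]) -> str:
    # The real server calls Gemini 2.5 Pro to build consensus and a summary.
    return "synthesis of: " + " | ".join(critiques)

async def consult_the_council(proposal: str) -> str:
    # Fan out to the three critique perspectives in parallel, then fan in.
    critiques = await asyncio.gather(
        *(run_critique(p, proposal) for p in PERSPECTIVES)
    )
    return await run_synthesis(list(critiques))

if __name__ == "__main__":
    print(asyncio.run(consult_the_council("Adopt a four-day work week.")))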

Installation

Prerequisites

  • Python (see pyproject.toml for the supported version)
  • uv (recommended) or pip
  • A Google AI (Gemini) API key

Setup

  1. Clone the repository:
    git clone <repository-url>
    cd elrond-mcp
  2. Install dependencies:
    # Using uv (recommended)
    uv sync --dev --all-extras

    # Or using pip
    pip install -e .[dev]
  3. Configure API key:
    export GEMINI_API_KEY="your-gemini-api-key-here"

    # Or create a .env file
    echo "GEMINI_API_KEY=your-gemini-api-key-here" > .env

Usage

Running the Server

Development Mode
# Using uv
uv run python main.py

# Using MCP CLI (if installed)
mcp dev elrond_mcp/server.py
Production Mode
# Direct execution
python main.py

# Or via package entry point
elrond-mcp

Integration with Claude Desktop

  1. Install for Claude Desktop:
    mcp install elrond_mcp/server.py --name "Elrond Thinking Augmentation"
  2. Manual Configuration: Add to your Claude Desktop MCP settings:
    { "elrond-mcp": { "command": "python", "args": ["/path/to/elrond-mcp/main.py"], "env": { "GEMINI_API_KEY": "your-api-key-here" } } }

Using the Tools

Augment Thinking Tool

Analyze any proposal through multi-perspective critique:

Use the "consult_the_council" tool with this proposal: # Project Alpha: AI-Powered Customer Service ## Overview Implement an AI chatbot to handle 80% of customer service inquiries, reducing response time from 2 hours to 30 seconds. ## Goals - Reduce operational costs by 40% - Improve customer satisfaction scores - Free up human agents for complex issues ## Implementation - Deploy GPT-4 based chatbot - Integrate with existing CRM - 3-month rollout plan - $200K initial investment
Check System Status Tool

Monitor the health and configuration of the thinking augmentation system:

Use the "check_system_status" tool to verify: - API key configuration - Model availability - System health

Response Structure

Critique Response

Each critique agent provides:

  • Executive Summary: Brief overview of the perspective
  • Structured Analysis:
    • Feasibility assessment
    • Risk identification
    • Benefit analysis
    • Implementation considerations
    • Stakeholder impact
    • Resource requirements
  • Key Insights: 3-5 critical observations
  • Confidence Level: Numerical confidence (0.0-1.0)
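
As a rough illustration of this shape, a critique model in models.py might look like the Pydantic sketch below. The field names are inferred from the list above and may not match the project's actual definitions.

from pydantic import BaseModel, Field

# Illustrative sketch only; field names are inferred from the README and
# may differ from the actual models in elrond_mcp/models.py.

class StructuredAnalysis(BaseModel):
    feasibility: str
    risks: list[str]
    benefits: list[str]
    implementation_considerations: list[str]
    stakeholder_impact: str
    resource_requirements: str

class CritiqueResponse(BaseModel):
    executive_summary: str
    analysis: StructuredAnalysis
    key_insights: list[str] = Field(min_length=3, max_length=5)
    confidence: float = Field(ge=0.0, le=1.0)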

Synthesis Response

The synthesis agent provides:

  • Executive Summary: High-level recommendation
  • Consensus View:
    • Areas of agreement
    • Areas of disagreement
    • Balanced assessment
    • Critical considerations
  • Recommendation: Overall guidance
  • Next Steps: Concrete action items
  • Uncertainty Flags: Areas needing more information
  • Overall Confidence: Synthesis confidence level
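
Likewise, a synthesis model mirroring the fields above might be sketched as follows; this is an assumption for illustration, not the actual code in models.py.

from pydantic import BaseModel, Field

# Illustrative sketch mirroring the fields listed above; the real
# synthesis model in elrond_mcp/models.py may differ.

class ConsensusView(BaseModel):
    areas_of_agreement: list[str]
    areas_of_disagreement: list[str]
    balanced_assessment: str
    critical_considerations: list[str]

class SynthesisResponse(BaseModel):
    executive_summary: str
    consensus: ConsensusView
    recommendation: str
    next_steps: list[str]
    uncertainty_flags: list[str]
    overall_confidence: float = Field(ge=0.0, le=1.0)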

Development

Project Structure

elrond-mcp/
├── elrond_mcp/
│   ├── __init__.py
│   ├── server.py        # MCP server implementation
│   ├── agents.py        # Critique and synthesis agents
│   ├── client.py        # Centralized Google AI client management
│   └── models.py        # Pydantic data models
├── scripts/             # Development scripts
│   └── check.sh         # Quality check script
├── tests/               # Test suite
├── main.py              # Entry point
├── pyproject.toml       # Project configuration
└── README.md

Running Tests

# Using uv
uv run pytest

# Using pip
pytest

Code Formatting

# Format and lint code
uv run ruff format .
uv run ruff check --fix .

# Type checking
uv run mypy elrond_mcp/

Development Script

For convenience, use the provided script to run all quality checks:

# Run all quality checks (lint, format, test)
./scripts/check.sh

This script will:

  • Sync dependencies
  • Run Ruff linter with auto-fix
  • Format code with Ruff
  • Execute the full test suite
  • Perform final lint check
  • Provide a pre-commit checklist

Configuration

Environment Variables

  • GEMINI_API_KEY: Required Google AI API key
  • LOG_LEVEL: Logging level (default: INFO)

Model Configuration

  • Critique Agents: gemini-2.5-flash
  • Synthesis Agent: gemini-2.5-pro

Models can be customized by modifying the agent initialization in agents.py.
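
For orientation only, that customization might amount to changing a pair of module-level constants like the ones sketched here; the real code in agents.py may structure this differently.

# Hypothetical sketch of where the model names could be changed in
# elrond_mcp/agents.py; the actual code may structure this differently.

CRITIQUE_MODEL = "gemini-2.5-flash"   # used by the three critique agents
SYNTHESIS_MODEL = "gemini-2.5-pro"    # used by the synthesis agent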

Troubleshooting

Common Issues

  1. API Key Not Found
    Error: Google AI API key is required
    Solution: Set the GEMINI_API_KEY environment variable
  2. Empty Proposal Error
    Error: Proposal cannot be empty
    Solution: Ensure your proposal is at least 10 characters long
  3. Model Rate Limits
    Error: Rate limit exceeded
    Solution: Wait a moment and retry, or check your Google AI quota
  4. Validation Errors
    ValidationError: ...
    Solution: The LLM response didn't match the expected structure. This is usually temporary; retry the request (see the sketch below).
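
If validation errors keep recurring, a small retry wrapper like the sketch below can help. call_critique_agent is a hypothetical stand-in for whichever agent call raises the error; the instructor library can also re-ask the model automatically via its max_retries option.

import time

from pydantic import BaseModel, Field, ValidationError

class Critique(BaseModel):
    # Minimal stand-in for the real models in elrond_mcp/models.py.
    executive_summary: str
    confidence: float = Field(ge=0.0, le=1.0)

def call_critique_agent(proposal: str) -> Critique:
    # Hypothetical stand-in for the instructor-backed Gemini call that can
    # raise ValidationError when the LLM output does not fit the schema.
    return Critique(executive_summary=f"Summary of: {proposal}", confidence=0.8)

def critique_with_retry(proposal: str, attempts: int = 3) -> Critique:
    for attempt in range(1, attempts + 1):
        try:
            return call_critique_agent(proposal)
        except ValidationError:
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)  # brief backoff before retrying
    raise RuntimeError("unreachable")

print(critique_with_retry("Adopt a four-day work week."))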

Debugging

Enable debug logging:

export LOG_LEVEL=DEBUG
export GEMINI_API_KEY=your-api-key-here
python main.py

Check system status:

# Use the check_system_status tool to verify configuration

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests for new functionality
  5. Run the test suite
  6. Submit a pull request

License

See LICENSE

Support

For issues and questions:

  • Check the troubleshooting section above
  • Review the logs for detailed error information
  • Open an issue on the repository

Roadmap

  • Support for additional LLM providers (OpenAI, Anthropic)
  • Custom critique perspectives and personas
  • Performance optimization and caching
  • Advanced synthesis algorithms
