# Elrond MCP - Project Rules and Guidelines

## Project Overview

Elrond MCP is a thinking augmentation Model Context Protocol (MCP) server that implements hierarchical LLM critique and synthesis. The system uses three specialized critique agents (positive, neutral, negative) running in parallel, followed by a synthesis agent that combines perspectives into a comprehensive analysis.

**Architecture**: Multi-agent system with Google AI integration
**Primary Language**: Python 3.13+
**Framework**: FastMCP (Model Context Protocol)
**AI Integration**: Google GenAI SDK with instructor for structured outputs
**Data Validation**: Pydantic models for type safety and validation

## Core Principles

### 1. MCP Server Development Best Practices

- **Tool Design**: Each MCP tool should have a single, well-defined responsibility
- **Structured Responses**: Always use Pydantic models for tool return types
- **Error Handling**: Provide clear, actionable error messages to MCP clients
- **Async Operations**: Use async/await for all I/O operations (API calls, file operations)
- **Resource Management**: Use context managers for external resources (API clients, connections)
- **Lazy Initialization**: Initialize expensive resources (AI models) only when needed
- **Status Monitoring**: Provide health check tools for system monitoring

**MCP Tool Guidelines**:
- Tools should be idempotent when possible
- Use descriptive docstrings that explain purpose, parameters, and return values
- Validate inputs early and provide helpful error messages
- Log important operations for debugging and monitoring
- Return structured data that clients can easily parse and use

### 2. Modern Python Development Practices

**Code Style**:
- Follow PEP 8 with Ruff formatter (line length: 88 characters)
- **Ruff-Only Policy**: This project uses ONLY Ruff for both linting and formatting (no Black, no other linters)
- Use type hints for all function parameters and return values
- Prefer explicit imports over wildcard imports
- Use meaningful variable and function names that describe intent

**Dependency Management**:
- Use `uv` as the primary package manager
- Pin dependencies in `pyproject.toml` with version ranges
- Separate dev dependencies from production dependencies
- Document any system-level dependencies in README

**Python Version**:
- Target Python 3.13+ for modern features
- Use new union syntax (`|` instead of `Union`)
- Leverage structural pattern matching where appropriate
- Use `asyncio` for concurrent operations

**Project Structure**:
```
elrond-mcp/
├── elrond_mcp/          # Main package
│   ├── __init__.py      # Package exports
│   ├── server.py        # MCP server implementation
│   ├── agents.py        # AI agents and orchestration
│   ├── client.py        # Centralized Google AI + Instructor client management
│   └── models.py        # Pydantic data models
├── tests/               # Test suite
├── main.py              # Entry point
├── pyproject.toml       # Project configuration with Ruff settings
└── .rules               # This file - project guidelines
```

**Ruff Configuration**:
- Line length: 88 characters (same as Black default)
- Target Python version: 3.13
- Enabled rule sets: E, W, F, I, B, C4, UP
- Quote style: double quotes
- Automatic import sorting and formatting

**Coding Philosophy**:
- **Functional First**: Prefer functional programming style over object-oriented when possible
- Use classes only when they provide clear benefits (data modeling, complex state management)
- Favor simple functions with clear inputs/outputs over method-heavy classes
- Use module-level state sparingly and only for legitimate singletons (like client caching)
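**Illustrative sketch**: the snippet below ties the tool-design and code-style guidance above together: an async MCP tool with a single responsibility, full type hints, early input validation, a Google-style docstring, and a Pydantic return type. It is only a sketch; the `mcp.server.fastmcp` import path, the tool name, the model fields, and the placeholder pipeline function are assumptions, not the actual contents of `server.py`.

```python
"""Illustrative sketch only; names and fields are hypothetical."""

from mcp.server.fastmcp import FastMCP
from pydantic import BaseModel, Field

mcp = FastMCP("elrond-mcp")


class ThinkingResult(BaseModel):
    """Structured response returned to MCP clients."""

    summary: str = Field(description="Synthesized analysis of the problem")
    confidence: float = Field(ge=0.0, le=1.0, description="Overall confidence")
    key_insights: list[str] = Field(default_factory=list)


async def run_critique_pipeline(problem: str) -> ThinkingResult:
    """Placeholder for the real critique/synthesis orchestration (agents.py)."""
    return ThinkingResult(summary=f"Analysis of: {problem}", confidence=0.5)


@mcp.tool()
async def think(problem: str) -> ThinkingResult:
    """Run hierarchical critique and synthesis over a problem statement.

    Args:
        problem: The question or decision to analyze.

    Returns:
        ThinkingResult: Structured synthesis of all critique perspectives.

    Raises:
        ValueError: If the problem statement is empty.
    """
    # Validate inputs early and fail with an actionable message.
    if not problem.strip():
        raise ValueError("Problem statement must not be empty.")
    return await run_critique_pipeline(problem)
```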
### 3. AI Integration Standards

**Centralized Client Management**:
- Use the `client.py` module for all Google AI and Instructor integration
- Simple 4-function API: `configure()`, `get_critique_client()`, `get_synthesis_client()`, `reset()`
- Use `get_critique_client()` and `get_synthesis_client()` for cached client access
- Never instantiate Google AI clients directly in agent classes
- Automatic configuration from `GEMINI_API_KEY` or `GOOGLE_API_KEY` environment variables
- Single client instances cached for performance using instructor's `from_provider()` approach

**Google AI Usage**:
- Use the new `google-genai` SDK (not the deprecated `google-generativeai`)
- Configure API key through `GEMINI_API_KEY` environment variable
- Use instructor's `from_provider("google/model-name")` approach for client creation
- Handle rate limiting gracefully with exponential backoff
- Set appropriate temperature values for different agent types
- Use consistent model naming conventions (e.g., "gemini-2.5-flash", "gemini-2.5-pro")

**Model Selection**:
- Critique agents: `gemini-2.5-flash` (faster, parallel processing)
- Synthesis agent: `gemini-2.5-pro` (more capable reasoning)
- Document model choices and reasoning in code comments

**Prompt Engineering**:
- Use system prompts to define agent roles and perspectives
- Provide clear, specific instructions for desired output format
- Include examples in prompts when helpful
- Test prompts thoroughly and document expected behaviors

### 4. Data Validation and Type Safety

**Pydantic Models**:
- Define clear, comprehensive models for all data structures
- Use appropriate validators and constraints
- Provide descriptive field descriptions
- Use enums for controlled vocabularies
- Implement proper error handling for validation failures

**Input Validation**:
- Validate all inputs at service boundaries
- Provide clear error messages for validation failures
- Use field validators for complex validation logic
- Test edge cases and boundary conditions

### 5. Error Handling and Logging

**Error Patterns**:
- Use specific exception types for different error categories
- Catch and re-raise exceptions with additional context
- Never suppress exceptions without logging
- Provide actionable error messages to users

**Logging Standards**:
- Use structured logging with appropriate levels
- Log significant operations (start/completion of major tasks)
- Include relevant context in log messages
- Use logger names that indicate the module/component
- Configure logging centrally in the main server module

**Exception Hierarchy**:
- Define custom exceptions for domain-specific errors
- Inherit from appropriate base exceptions
- Include relevant context in exception messages
- Document when and why exceptions are raised

### 6. Testing Guidelines

**Test Structure**:
- Use pytest as the testing framework
- Organize tests to mirror the source code structure
- Name test files with `test_` prefix
- Group related tests in test classes

**Test Categories**:
- Unit tests for individual functions and methods
- Integration tests for component interactions
- End-to-end tests for complete workflows
- Mock external dependencies (API calls) in unit tests

**Test Quality**:
- Aim for high test coverage on critical paths
- Test both success and failure scenarios
- Use descriptive test names that explain what is being tested
- Include edge cases and boundary conditions
- Test async functions properly with pytest-asyncio
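**Illustrative sketch**: a minimal example of the async testing pattern described in section 6, with the external AI call replaced by `AsyncMock` so unit tests need no API key and no network. The response model is a local stand-in; in the real suite the mock would be injected where `agents.py` fetches its client from `client.py` (for example by patching `get_critique_client`).

```python
"""Illustrative sketch only; the real tests patch elrond_mcp.client instead."""

from unittest.mock import AsyncMock

import pytest
from pydantic import BaseModel


class FakeCritique(BaseModel):
    analysis: str
    confidence: float


@pytest.mark.asyncio
async def test_critique_call_is_mocked_not_sent_to_google():
    """Success path: no real Google AI request is made in unit tests."""
    fake_client = AsyncMock()
    fake_client.chat.completions.create = AsyncMock(
        return_value=FakeCritique(analysis="Looks reasonable overall.", confidence=0.7)
    )

    # In production code this client would come from client.py's
    # get_critique_client(); tests patch that function to return the mock.
    result = await fake_client.chat.completions.create(
        response_model=FakeCritique,
        messages=[{"role": "user", "content": "Critique this plan."}],
    )

    assert result.confidence == pytest.approx(0.7)
    fake_client.chat.completions.create.assert_awaited_once()
```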
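**Illustrative sketch**: pulling together the Pydantic-model and exception-handling guidance from sections 4 and 5 above: an enum for a controlled vocabulary, constrained fields with descriptions, a field validator, a small domain exception hierarchy, and `raise ... from err` chaining. All class and field names here are hypothetical, not the actual contents of `models.py`.

```python
"""Illustrative sketch only; class and field names are hypothetical."""

from enum import Enum

from pydantic import BaseModel, Field, ValidationError, field_validator


class Perspective(str, Enum):
    """Controlled vocabulary for the critique agents."""

    POSITIVE = "positive"
    NEUTRAL = "neutral"
    NEGATIVE = "negative"


class CritiqueResponse(BaseModel):
    """One agent's critique, validated before it reaches synthesis."""

    perspective: Perspective
    analysis: str = Field(description="The agent's reasoning")
    confidence: float = Field(ge=0.0, le=1.0, description="Self-reported certainty")

    @field_validator("analysis")
    @classmethod
    def analysis_must_be_substantive(cls, value: str) -> str:
        # Reject whitespace-only analyses with a clear, actionable message.
        if not value.strip():
            raise ValueError("analysis must contain substantive text")
        return value


class ElrondError(Exception):
    """Base class for domain-specific errors."""


class SynthesisError(ElrondError):
    """Raised when critiques cannot be combined into a synthesis."""


def parse_critique(raw: dict) -> CritiqueResponse:
    """Validate a raw AI payload, re-raising with added context."""
    try:
        return CritiqueResponse.model_validate(raw)
    except ValidationError as err:
        raise SynthesisError(f"Invalid critique payload: {raw!r}") from err
```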
### 7. Security Best Practices

**API Key Management**:
- Never hardcode API keys in source code
- Use environment variables for sensitive configuration
- Provide clear setup instructions for API key configuration
- Mask API keys in logs and status outputs

**Input Sanitization**:
- Validate and sanitize all user inputs
- Use parameterized queries if interacting with databases
- Implement rate limiting for resource-intensive operations
- Be cautious with user-provided data in AI prompts

### 8. Performance Considerations

**Async Programming**:
- Use async/await for I/O bound operations
- Run independent operations concurrently with `asyncio.gather()`
- Avoid blocking operations in async contexts
- Use appropriate timeouts for external API calls

**Resource Management**:
- Initialize expensive resources lazily
- Reuse client connections when possible
- Implement proper cleanup in error scenarios
- Monitor memory usage for long-running processes

### 9. Documentation Standards

**Code Documentation**:
- Write clear docstrings for all public functions and classes
- Use Google-style docstrings with Args, Returns, and Raises sections
- Include usage examples in docstrings when helpful
- Document complex algorithms or business logic

**README Maintenance**:
- Keep setup instructions current and accurate
- Include troubleshooting section for common issues
- Provide examples of usage and integration
- Document all environment variables and configuration options

### 10. Development Workflow

**Code Changes**:
- Run tests before committing changes
- Use descriptive commit messages
- Format code with Ruff before committing (`ruff format`)
- Lint code with Ruff before committing (`ruff check --fix`)
- Run full project check with `ruff check .` to ensure compliance
- Use `uv run ruff check --fix .` to automatically fix many issues
- Use the provided development script for comprehensive checks: `./scripts/check.sh`

**Debugging**:
- Use logging instead of print statements
- Provide detailed error messages with context
- Include debug logging for complex operations
- Test error paths and edge cases

**Performance Monitoring**:
- Log processing times for expensive operations
- Monitor API usage and rate limits
- Track success/failure rates for AI operations
- Include metadata in responses for debugging

## Domain-Specific Guidelines

### Thinking Augmentation System

**Agent Design**:
- Each critique agent should maintain its designated perspective
- Synthesis should be impartial and evidence-based
- Confidence levels should reflect actual certainty
- Key insights should be actionable and specific

**Response Quality**:
- Ensure critique responses are substantive and well-reasoned
- Synthesis should identify genuine consensus and disagreements
- Recommendations should be concrete and implementable
- Next steps should be prioritized and realistic

**System Reliability**:
- Handle AI model failures gracefully
- Implement retry logic for transient failures
- Validate AI responses match expected structure
- Provide fallback behaviors when possible
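**Illustrative sketch**: a minimal version of the parallel-critique-plus-synthesis flow these guidelines describe, using `asyncio.gather()` and a timeout as recommended under Performance Considerations. The function names, the timeout value, and the placeholder bodies are assumptions; the real orchestration lives in `agents.py` and calls the AI clients provided by `client.py`.

```python
"""Illustrative sketch only; the real agents call Google AI via client.py."""

import asyncio


async def run_critique(perspective: str, problem: str) -> str:
    """Stand-in for one critique agent (really an AI call using gemini-2.5-flash)."""
    await asyncio.sleep(0.1)  # placeholder for network latency
    return f"[{perspective}] critique of: {problem}"


async def think(problem: str) -> str:
    """Fan out the independent critiques concurrently, then synthesize."""
    # The three critiques are independent, so they run in parallel; the timeout
    # bounds the external API calls (the 60-second value is an assumption).
    async with asyncio.timeout(60):
        positive, neutral, negative = await asyncio.gather(
            run_critique("positive", problem),
            run_critique("neutral", problem),
            run_critique("negative", problem),
        )

    # A synthesis agent (gemini-2.5-pro per the model-selection guidance) would
    # combine the three perspectives; they are simply joined here as a placeholder.
    return "\n\n".join([positive, neutral, negative])


if __name__ == "__main__":
    print(asyncio.run(think("Should we ship this feature now?")))
```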
## Common Patterns and Anti-Patterns

### ✅ Good Patterns

- Use dependency injection for testability
- Implement proper async context managers for resource cleanup
- Use structured logging with correlation IDs
- Validate inputs at service boundaries
- Use type hints consistently throughout codebase
- Implement health checks and status endpoints
- Use configuration objects instead of scattered constants
- Handle errors at appropriate abstraction levels
- Break long strings across multiple lines for readability
- Use parentheses for long expressions rather than backslashes
- Extract complex nested attributes to variables for clarity
- Use exception chaining (`raise ... from err`) for better debugging
- Centralize external service client creation and configuration
- **Prefer functional programming style over unnecessary classes**
- Use module-level state and caching for shared resources
- **Keep APIs minimal - avoid redundant functions with similar purposes**
- Implement simple functions over class methods when possible
- Use pure functions where state is not required

### ❌ Anti-Patterns

- Don't block async event loops with synchronous operations
- Don't hardcode configuration values
- Don't ignore exceptions or validation errors
- Don't use global state for request-specific data
- Don't mix business logic with presentation logic
- Don't make API calls without timeout handling
- Don't log sensitive information (API keys, personal data)
- Don't use bare except clauses without specific error handling
- Don't ignore Ruff warnings - fix them or explicitly ignore with comments
- Don't use unused variables - rename to `_var` or remove entirely
- Don't exceed 88 character line limits - break strings and expressions
- Don't suppress Ruff checks without documenting why
- Don't create Google AI clients directly in business logic modules
- Don't duplicate API key configuration across multiple modules
- Don't skip proper client error handling and logging
- **Don't create classes when simple functions will suffice**
- Don't use object-oriented patterns for stateless operations
- Don't over-engineer with unnecessary abstraction layers
- **Don't create multiple functions that do essentially the same thing**

## File-Specific Guidelines

### `models.py`
- Define comprehensive Pydantic models for all data structures
- Use appropriate field types and validation
- Include clear field descriptions
- Test model validation thoroughly

### `client.py`
- Maintain minimal 4-function API: `configure()`, `get_critique_client()`, `get_synthesis_client()`, `reset()`
- Auto-configure from `GEMINI_API_KEY` or `GOOGLE_API_KEY` environment variables when needed
- Use instructor's `from_provider("google/model-name", async_client=True)` for client creation
- Cache client instances at module level for performance
- Handle client creation failures with clear error messages
- Use `reset()` function for testing scenarios requiring clean state
- Avoid redundant functions that serve similar purposes
- Use the new `google-genai` SDK, not the deprecated `google-generativeai`

### `agents.py`
- Use centralized client creation from `client.py` module
- Never handle API key configuration directly
- Implement clean separation between different agent types
- Use consistent prompt engineering patterns
- Handle AI API failures gracefully
- Log processing steps for debugging

### `server.py`
- Keep MCP tool implementations focused and simple
- Use centralized client configuration checking
- Provide comprehensive error handling
- Include status and health check tools
- Use appropriate logging levels

### `tests/`
- Mock external dependencies consistently
- Test both success and failure paths
- Use descriptive test names and structure
- Maintain high coverage on critical functionality
- Test client creation and configuration scenarios

## Environment and Configuration

**Required Environment Variables**:
- `GEMINI_API_KEY`: Google AI API key (required, preferred)
- `GOOGLE_API_KEY`: Alternative Google AI API key (fallback)
- `LOG_LEVEL`: Logging level (optional, default: INFO)
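**Illustrative sketch**: one way the 4-function `client.py` API and the environment-variable fallback described above could fit together: lazy, module-level caching of one instructor client per model, auto-configured from `GEMINI_API_KEY` or `GOOGLE_API_KEY`. The function names, model names, and the `from_provider(..., async_client=True)` call come from these guidelines; the caching structure, error message, and `AsyncInstructor` annotation are assumptions, not the actual module.

```python
"""Illustrative sketch only; the real client.py may differ in detail."""

import os

import instructor

# Module-level cache: legitimate singleton state for expensive clients.
_clients: dict[str, instructor.AsyncInstructor] = {}


def configure(api_key: str | None = None) -> None:
    """Resolve the Google AI API key explicitly or from the environment."""
    key = api_key or os.getenv("GEMINI_API_KEY") or os.getenv("GOOGLE_API_KEY")
    if not key:
        raise RuntimeError(
            "Google AI API key not found; set GEMINI_API_KEY or GOOGLE_API_KEY."
        )
    os.environ["GEMINI_API_KEY"] = key  # google-genai reads the key from the env


def _get_client(model: str) -> instructor.AsyncInstructor:
    """Create (once) and cache an async instructor client for the given model."""
    if model not in _clients:
        configure()
        _clients[model] = instructor.from_provider(
            f"google/{model}", async_client=True
        )
    return _clients[model]


def get_critique_client() -> instructor.AsyncInstructor:
    """Cached client for the parallel critique agents (gemini-2.5-flash)."""
    return _get_client("gemini-2.5-flash")


def get_synthesis_client() -> instructor.AsyncInstructor:
    """Cached client for the synthesis agent (gemini-2.5-pro)."""
    return _get_client("gemini-2.5-pro")


def reset() -> None:
    """Drop cached clients so tests can start from a clean state."""
    _clients.clear()
```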
**Development Setup**:
1. Copy `.env.example` to `.env`
2. Configure Google AI API key
3. Run `uv sync --dev --all-extras`
4. Check code quality with `uv run ruff check .`
5. Format code with `uv run ruff format .`
6. Execute tests with `uv run pytest`

**Pre-commit Checklist**:
- [ ] `uv run ruff check . --fix` (fix linting issues)
- [ ] `uv run ruff format .` (format code)
- [ ] `uv run pytest` (run tests)
- [ ] Check that API key configuration works (GEMINI_API_KEY or GOOGLE_API_KEY)

**Quick Quality Check**:
Use the provided script to run all checks at once:

```bash
./scripts/check.sh
```

This script runs all the above checks automatically and provides a completion summary.

## Code Quality Standards

**Ruff-Only Success**: This project has successfully transitioned to using Ruff as the single tool for both code formatting and linting. All code in the project passes Ruff checks with the configuration specified in `pyproject.toml`. This approach simplifies the development workflow while maintaining high code quality standards.

**Quality Metrics**:
- 100% Ruff compliance across all Python files
- Comprehensive test coverage for Pydantic models and client functionality
- Type hints throughout the codebase
- Consistent error handling patterns
- Well-documented APIs and functions
- Modern Google GenAI SDK integration with instructor

## Future Enhancements

When extending this project, consider:
- Additional LLM provider support (OpenAI, Anthropic)
- Custom critique perspectives and personas
- Caching mechanisms for expensive operations
- Web interface for standalone usage
- Advanced synthesis algorithms
- Performance optimization and monitoring
- Integration with more MCP clients

Remember: This system is designed to augment human thinking, not replace it. Focus on providing valuable, actionable insights that help users make better decisions.
