Multi LLM Cross-Check MCP Server
A Model Context Protocol (MCP) server that cross-checks responses from multiple LLM providers simultaneously. It integrates with Claude Desktop as an MCP server, providing a unified interface for querying different LLM APIs.
Features
- Query multiple LLM providers in parallel
- Currently supports:
  - OpenAI (ChatGPT)
  - Anthropic (Claude)
  - Perplexity AI
  - Google (Gemini)
- Asynchronous parallel processing for faster responses
- Easy integration with Claude Desktop
Prerequisites
- Python 3.8 or higher
- API keys for the LLM providers you want to use
- uv package manager (install with `pip install uv`)
Installation
Installing via Smithery
To install Multi LLM Cross-Check Server for Claude Desktop automatically via Smithery:
Manual Installation
- Clone this repository:
- Initialize uv environment and install requirements:
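A typical sequence for these two steps might look like the following (the repository URL and project layout are placeholders, not the actual ones):

```shell
# Clone the repository (URL is a placeholder)
git clone https://github.com/example/multi-llm-cross-check-mcp-server.git
cd multi-llm-cross-check-mcp-server

# Create a uv-managed virtual environment and install dependencies
uv venv
uv pip install -r requirements.txt
```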
- Configure in Claude Desktop:
Create a file named `claude_desktop_config.json` in your Claude Desktop configuration directory with the following content.

Notes:
- You only need to add API keys for the LLM providers you want to use. The server will skip any providers without configured API keys.
- You may need to put the full path to the `uv` executable in the `command` field. You can get this by running `which uv` on macOS/Linux or `where uv` on Windows.
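Claude Desktop MCP configurations follow the `mcpServers` schema; the server name, script path, and environment variable names in this sketch are assumptions, so adjust them to match the actual project:

```json
{
  "mcpServers": {
    "cross_check": {
      "command": "uv",
      "args": ["--directory", "/path/to/multi-llm-cross-check-mcp-server", "run", "main.py"],
      "env": {
        "OPENAI_API_KEY": "your_openai_key",
        "ANTHROPIC_API_KEY": "your_anthropic_key",
        "PERPLEXITY_API_KEY": "your_perplexity_key",
        "GEMINI_API_KEY": "your_gemini_key"
      }
    }
  }
}
```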
Using the MCP Server
Once configured:
- The server will automatically start when you open Claude Desktop
- You can use the `cross_check` tool in your conversations by asking to "cross check with other LLMs"
- Provide a prompt, and it will return responses from all configured LLM providers
API Response Format
The server returns a dictionary with responses from each LLM provider:
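As a sketch, a response dictionary might have this shape (the provider key names and values here are assumptions for illustration):

```python
# Hypothetical shape of a cross_check response: one entry per provider,
# with errors reported in place of that provider's answer.
response = {
    "ChatGPT": "OpenAI's answer to the prompt...",
    "Claude": "Anthropic's answer to the prompt...",
    "Perplexity": "Perplexity AI's answer to the prompt...",
    "Gemini": "Error: API key not configured",  # per-provider error string
}
```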
Error Handling
- If an API key is not provided for a specific LLM, that provider will be skipped
- API errors are caught and returned in the response
- Each LLM's response is independent, so errors with one provider won't affect others
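This error-isolation pattern can be sketched with `asyncio` (the `query_provider` and `cross_check` names and the per-provider callables below are illustrative assumptions, not the server's actual code):

```python
import asyncio

async def query_provider(name, query, prompt):
    """Query one provider; capture its error without affecting the others."""
    try:
        return name, await query(prompt)
    except Exception as exc:
        return name, f"Error: {exc}"  # error stays in this provider's slot

async def cross_check(prompt, providers):
    """Fan out the prompt to all providers in parallel."""
    tasks = [query_provider(name, query, prompt) for name, query in providers.items()]
    results = await asyncio.gather(*tasks)
    return dict(results)

# Demo with stub providers: one succeeds, one raises.
async def ok(prompt):
    return f"echo: {prompt}"

async def broken(prompt):
    raise RuntimeError("boom")

out = asyncio.run(cross_check("hi", {"good": ok, "bad": broken}))
```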
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under the MIT License - see the LICENSE file for details.