This server provides access to multiple Large Language Model (LLM) APIs including ChatGPT, Claude, and DeepSeek through a Model Context Protocol (MCP) interface, along with Bitcoin and Lightning network operations.
LLM Capabilities:
Call individual LLMs: Use tools like call-chatgpt, call-claude, and call-deepseek to send prompts to specific AI providers with configurable parameters such as model, temperature, and token limits
Combine LLM responses: Use call-all-llms to send the same prompt to all available LLMs simultaneously and receive combined output with individual responses and a summary
Dynamic provider selection: Use call-llm to select an LLM provider ("chatgpt", "claude", or "deepseek") at runtime
Compare model outputs: Facilitate multi-perspective analysis, model comparison, and quality assurance
Bitcoin & Lightning Network Features:
Generate new Bitcoin key pairs and addresses
Validate Bitcoin addresses
Decode raw Bitcoin transactions from hexadecimal
Retrieve latest Bitcoin block information
Get specific Bitcoin transaction details using transaction ID
Decode BOLT11 Lightning invoices
Pay BOLT11 Lightning invoices
Configuration: Set environment variables for API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, DEEPSEEK_API_KEY) and default models for each provider.
Cross-LLM MCP Server
A Model Context Protocol (MCP) server that provides access to multiple Large Language Model (LLM) APIs including ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, and Mistral. This allows you to call different LLMs from within any MCP-compatible client and combine their responses.
Features
This MCP server offers ten specialized tools (eight individual provider tools plus two combined tools) for interacting with different LLM providers:
Individual LLM Tools
call-chatgpt
Call OpenAI's ChatGPT API with a prompt.
Input:
prompt (string): The prompt to send to ChatGPT
model (optional, string): ChatGPT model to use (default: gpt-4)
temperature (optional, number): Temperature for response randomness (0-2, default: 0.7)
max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
ChatGPT response with model information and token usage statistics
Example:
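An illustrative tool invocation (JSON arguments follow the input schema above; the prompt is a placeholder):

```json
{
  "name": "call-chatgpt",
  "arguments": {
    "prompt": "Summarize the Model Context Protocol in two sentences.",
    "model": "gpt-4",
    "temperature": 0.7,
    "max_tokens": 500
  }
}
```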
call-claude
Call Anthropic's Claude API with a prompt.
Input:
prompt (string): The prompt to send to Claude
model (optional, string): Claude model to use (default: claude-3-sonnet-20240229)
temperature (optional, number): Temperature for response randomness (0-1, default: 0.7)
max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
Claude response with model information and token usage statistics
call-deepseek
Call DeepSeek API with a prompt.
Input:
prompt (string): The prompt to send to DeepSeek
model (optional, string): DeepSeek model to use (default: deepseek-chat)
temperature (optional, number): Temperature for response randomness (0-2, default: 0.7)
max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
DeepSeek response with model information and token usage statistics
call-gemini
Call Google's Gemini API with a prompt.
Input:
prompt (string): The prompt to send to Gemini
model (optional, string): Gemini model to use (default: gemini-2.5-flash)
temperature (optional, number): Temperature for response randomness (0-2, default: 0.7)
max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
Gemini response with model information and token usage statistics
call-grok
Call xAI's Grok API with a prompt.
Input:
prompt (string): The prompt to send to Grok
model (optional, string): Grok model to use (default: grok-3)
temperature (optional, number): Temperature for response randomness (0-2, default: 0.7)
max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
Grok response with model information and token usage statistics
call-kimi
Call Moonshot AI's Kimi API with a prompt.
Input:
prompt (string): The prompt to send to Kimi
model (optional, string): Kimi model to use (default: moonshot-v1-8k)
temperature (optional, number): Temperature for response randomness (0-2, default: 0.7)
max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
Kimi response with model information and token usage statistics
call-perplexity
Call Perplexity AI's API with a prompt.
Input:
prompt (string): The prompt to send to Perplexity
model (optional, string): Perplexity model to use (default: sonar-pro)
temperature (optional, number): Temperature for response randomness (0-2, default: 0.7)
max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
Perplexity response with model information and token usage statistics
call-mistral
Call Mistral AI's API with a prompt.
Input:
prompt (string): The prompt to send to Mistral
model (optional, string): Mistral model to use (default: mistral-large-latest)
temperature (optional, number): Temperature for response randomness (0-2, default: 0.7)
max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
Mistral response with model information and token usage statistics
Combined Tools
call-all-llms
Call all available LLM APIs (ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, Mistral) with the same prompt and get combined responses.
Input:
prompt (string): The prompt to send to all LLMs
temperature (optional, number): Temperature for response randomness (0-2, default: 0.7)
max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
Combined responses from all LLMs with individual model information and usage statistics
Summary of successful responses and total tokens used
Example:
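An illustrative invocation (the prompt is a placeholder):

```json
{
  "name": "call-all-llms",
  "arguments": {
    "prompt": "What are the trade-offs between SQL and NoSQL databases?",
    "temperature": 0.7,
    "max_tokens": 800
  }
}
```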
call-llm
Call a specific LLM provider by name.
Input:
provider (string): The LLM provider to call ("chatgpt", "claude", "deepseek", "gemini", "grok", "kimi", "perplexity", or "mistral")
prompt (string): The prompt to send to the LLM
model (optional, string): Model to use (uses provider default if not specified)
temperature (optional, number): Temperature for response randomness (0-2, default: 0.7)
max_tokens (optional, number): Maximum tokens in response (default: 1000)
Output:
Response from the specified LLM with model information and usage statistics
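For example, a hypothetical invocation that selects DeepSeek at runtime:

```json
{
  "name": "call-llm",
  "arguments": {
    "provider": "deepseek",
    "prompt": "Explain tail-call optimization with a short example.",
    "max_tokens": 600
  }
}
```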
Installation
Clone this repository:
Install dependencies:
Build the project:
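The usual sequence looks like this (the repository URL is a placeholder; substitute the real one):

```bash
# Clone the repository (placeholder URL)
git clone https://github.com/your-org/cross-llm-mcp.git
cd cross-llm-mcp

# Install dependencies
npm install

# Build the project
npm run build
```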
Getting API Keys
OpenAI/ChatGPT
Visit OpenAI Platform
Sign up or log in to your account
Create a new API key
Add it to your .env file as OPENAI_API_KEY
Anthropic/Claude
Visit Anthropic Console
Sign up or log in to your account
Create a new API key
Add it to your .env file as ANTHROPIC_API_KEY
DeepSeek
Visit DeepSeek Platform
Sign up or log in to your account
Create a new API key
Add it to your .env file as DEEPSEEK_API_KEY
Google Gemini
Visit Google AI Studio
Sign up or log in to your Google account
Create a new API key
Add it to your Claude Desktop configuration as GEMINI_API_KEY
xAI/Grok
Visit xAI Platform
Sign up or log in to your account
Create a new API key
Add it to your Claude Desktop configuration as XAI_API_KEY
Moonshot AI/Kimi
Visit Moonshot AI Platform
Sign up or log in to your account
Create a new API key
Add it to your Claude Desktop configuration as KIMI_API_KEY
Perplexity AI
Visit the Perplexity AI Platform
Sign up or log in to your account
Generate a new API key from the developer console
Add it to your Claude Desktop configuration as PERPLEXITY_API_KEY
Mistral AI
Visit the Mistral AI Console
Sign up or log in to your account
Create a new API key
Add it to your Claude Desktop configuration as MISTRAL_API_KEY
Usage
Configuring Claude Desktop
Add the following configuration to your Claude Desktop MCP settings:
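A minimal sketch of the expected shape (the server name, paths, and keys are placeholders):

```json
{
  "mcpServers": {
    "cross-llm": {
      "command": "node",
      "args": ["/absolute/path/to/cross-llm-mcp/build/index.js"],
      "cwd": "/absolute/path/to/cross-llm-mcp",
      "env": {
        "OPENAI_API_KEY": "your-openai-key",
        "ANTHROPIC_API_KEY": "your-anthropic-key",
        "DEEPSEEK_API_KEY": "your-deepseek-key",
        "GEMINI_API_KEY": "your-gemini-key",
        "XAI_API_KEY": "your-xai-key",
        "KIMI_API_KEY": "your-kimi-key",
        "PERPLEXITY_API_KEY": "your-perplexity-key",
        "MISTRAL_API_KEY": "your-mistral-key"
      }
    }
  }
}
```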
Replace the paths and API keys with your actual values:
Update the args path to point to your build/index.js file
Update the cwd path to your project directory
Add your actual API keys to the env section
Running the Server
The server runs automatically when configured in Claude Desktop. You can also run it manually:
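For example, from the project directory:

```bash
node build/index.js
```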
The server runs on stdio and can be connected to any MCP-compatible client.
Example Queries
Here are some example queries you can make with this MCP server (the phrasings below are illustrative):
Call ChatGPT
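"Use call-chatgpt to explain the difference between TCP and UDP."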
Call Claude
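"Use call-claude to summarize this article in three bullet points."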
Call All LLMs
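"Use call-all-llms to compare answers to: what are the main risks of microservice architectures?"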
Call Specific LLM
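"Use call-llm with provider "deepseek" to review this function for bugs."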
Call Gemini
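"Use call-gemini to brainstorm names for a new open-source project."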
Call Grok
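"Use call-grok to outline arguments for and against remote work."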
Call Kimi
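"Use call-kimi to summarize this long document."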
Call Perplexity
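"Use call-perplexity to find recent developments in battery technology."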
Call Mistral
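"Use call-mistral to translate this paragraph into French."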
Use Cases
1. Multi-Perspective Analysis
Use call-all-llms to get different perspectives on the same topic from multiple AI models.
2. Model Comparison
Compare responses from different LLMs to understand their strengths and weaknesses.
3. Redundancy and Reliability
If one LLM is unavailable, you can still get responses from other providers.
4. Cost Optimization
Choose the most cost-effective LLM for your specific use case.
5. Quality Assurance
Cross-reference responses from multiple models to validate information.
Configuration
Claude Desktop Setup
The recommended way to use this MCP server is through Claude Desktop with environment variables configured directly in the MCP settings:
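For example, the env section of the server entry might look like this (values are placeholders; see the fuller sketch under "Configuring Claude Desktop"):

```json
"env": {
  "OPENAI_API_KEY": "sk-...",
  "ANTHROPIC_API_KEY": "sk-ant-...",
  "DEFAULT_CHATGPT_MODEL": "gpt-4",
  "DEFAULT_CLAUDE_MODEL": "claude-3-sonnet-20240229"
}
```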
Environment Variables
The server reads the following environment variables:
OPENAI_API_KEY: Your OpenAI API key
ANTHROPIC_API_KEY: Your Anthropic API key
DEEPSEEK_API_KEY: Your DeepSeek API key
GEMINI_API_KEY: Your Google Gemini API key
XAI_API_KEY: Your xAI Grok API key
KIMI_API_KEY: Your Moonshot AI Kimi API key
PERPLEXITY_API_KEY: Your Perplexity AI API key
MISTRAL_API_KEY: Your Mistral AI API key
DEFAULT_CHATGPT_MODEL: Default ChatGPT model (default: gpt-4)
DEFAULT_CLAUDE_MODEL: Default Claude model (default: claude-3-sonnet-20240229)
DEFAULT_DEEPSEEK_MODEL: Default DeepSeek model (default: deepseek-chat)
DEFAULT_GEMINI_MODEL: Default Gemini model (default: gemini-2.5-flash)
DEFAULT_GROK_MODEL: Default Grok model (default: grok-3)
DEFAULT_KIMI_MODEL: Default Kimi model (default: moonshot-v1-8k)
DEFAULT_PERPLEXITY_MODEL: Default Perplexity model (default: sonar-pro)
DEFAULT_MISTRAL_MODEL: Default Mistral model (default: mistral-large-latest)
API Endpoints
This MCP server uses the following API endpoints:
OpenAI: https://api.openai.com/v1/chat/completions
Anthropic: https://api.anthropic.com/v1/messages
DeepSeek: https://api.deepseek.com/v1/chat/completions
Google Gemini: https://generativelanguage.googleapis.com/v1/models/{model}:generateContent
xAI Grok: https://api.x.ai/v1/chat/completions
Moonshot AI Kimi: https://api.moonshot.ai/v1/chat/completions
Perplexity AI: https://api.perplexity.ai/chat/completions
Mistral AI: https://api.mistral.ai/v1/chat/completions
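Most of these endpoints follow the OpenAI-style chat-completions shape, so requests look broadly similar across providers. A minimal sketch using superagent, the project's HTTP client (an illustration, not the server's actual code; Anthropic and Gemini use different request formats):

```typescript
import superagent from "superagent";

// Sketch: send an OpenAI-style chat-completions request.
async function chatCompletion(
  endpoint: string,
  apiKey: string,
  model: string,
  prompt: string
): Promise<string> {
  const res = await superagent
    .post(endpoint)
    .set("Authorization", `Bearer ${apiKey}`)
    .set("Content-Type", "application/json")
    .send({
      model,
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
      max_tokens: 1000,
    });
  // OpenAI-style responses carry the text at choices[0].message.content.
  return res.body.choices[0].message.content;
}
```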
Error Handling
The server includes comprehensive error handling with detailed messages:
Missing API Key
Invalid API Key
Rate Limiting
Payment Issues
Network Issues
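The exact wording of these messages belongs to the server, but such mappings are typically keyed off HTTP status codes. A generic sketch of the idea (not the project's actual code):

```typescript
// Map common HTTP status codes from LLM APIs to readable messages.
function describeApiError(status: number, provider: string): string {
  switch (status) {
    case 401:
      return `${provider}: invalid or missing API key`;
    case 402:
      return `${provider}: payment required - check your billing status`;
    case 429:
      return `${provider}: rate limit exceeded - retry after a delay`;
    default:
      return `${provider}: request failed with HTTP status ${status}`;
  }
}
```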
Supported Models
ChatGPT Models
gpt-4
gpt-4-turbo
gpt-3.5-turbo
And other OpenAI models
Claude Models
claude-3-sonnet-20240229
claude-3-opus-20240229
claude-3-haiku-20240307
And other Anthropic models
DeepSeek Models
deepseek-chat
deepseek-coder
And other DeepSeek models
Gemini Models
gemini-2.5-flash (default)
gemini-2.5-pro
gemini-2.0-flash
gemini-2.0-flash-001
And other Google Gemini models
Grok Models
grok-3 (default)
And other xAI Grok models
Kimi Models
moonshot-v1-8k (default)
moonshot-v1-32k
moonshot-v1-128k
And other Moonshot AI Kimi models
Perplexity Models
sonar-pro (default)
sonar-small-online
sonar-medium
And other Perplexity models
Mistral Models
mistral-large-latest (default)
mistral-small-latest
mixtral-8x7b-32768
And other Mistral models
Project Structure
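The layout below is a partial reconstruction from the files referenced elsewhere in this README; the actual tree may contain more:

```
cross-llm-mcp/
├── src/
│   ├── index.ts        # MCP server and tool registration
│   ├── llm-clients.ts  # Per-provider API clients
│   └── types.ts        # Provider and parameter types
├── build/
│   └── index.js        # Compiled server entry point
├── .env                # API keys (optional; see Configuration)
└── package.json
```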
Dependencies
@modelcontextprotocol/sdk - MCP SDK for server implementation
superagent - HTTP client for API requests
zod - Schema validation for tool parameters
Development
Building the Project
Adding New LLM Providers
To add a new LLM provider:
Add the provider type to src/types.ts
Implement the client in src/llm-clients.ts (a sketch follows this list)
Add the tool to src/index.ts
Update the callAllLLMs method to include the new provider
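A rough sketch of step 2 for an OpenAI-compatible provider (the provider name, endpoint, and environment variable here are hypothetical):

```typescript
// src/llm-clients.ts (illustrative addition)
import superagent from "superagent";

export async function callNewProvider(
  prompt: string,
  model = "newprovider-default", // hypothetical default model
  temperature = 0.7,
  maxTokens = 1000
): Promise<string> {
  const apiKey = process.env.NEWPROVIDER_API_KEY; // hypothetical variable
  if (!apiKey) {
    throw new Error("NEWPROVIDER_API_KEY is not set");
  }
  const res = await superagent
    .post("https://api.newprovider.example/v1/chat/completions") // hypothetical endpoint
    .set("Authorization", `Bearer ${apiKey}`)
    .send({
      model,
      messages: [{ role: "user", content: prompt }],
      temperature,
      max_tokens: maxTokens,
    });
  return res.body.choices[0].message.content;
}
```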
Troubleshooting
Common Issues
Server won't start
Check that all dependencies are installed: npm install
Verify the build was successful: npm run build
Ensure the .env file exists and has valid API keys
API errors
Verify your API keys are correct and active
Check your API usage limits and billing status
Ensure you're using supported model names
No responses
Check that at least one API key is configured
Verify network connectivity
Look for error messages in the response
Debug Mode
For debugging, you can run the server directly:
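For example:

```bash
node build/index.js
```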
License
This project is licensed under the MIT License - see the LICENSE.md file for details.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Support
If you encounter any issues or have questions, please:
Check the troubleshooting section above
Review the error messages for specific guidance
Ensure your API keys are properly configured
Verify your network connectivity