# Consult LLM MCP
An MCP server that lets Claude Code consult stronger AI models (o3, Gemini 2.5 Pro, DeepSeek Reasoner) when you need deeper analysis on complex problems.
## Features
- Query powerful AI models (o3, Gemini 2.5 Pro, DeepSeek Reasoner) with file context
- Automatic prompt construction from markdown and code files
- Git diff support for feeding code changes as context
- Usage tracking with cost estimation
- Comprehensive logging
## Configuration
- `OPENAI_API_KEY` - Your OpenAI API key (required for o3)
- `GEMINI_API_KEY` - Your Google AI API key (required for Gemini models)
- `DEEPSEEK_API_KEY` - Your DeepSeek API key (required for DeepSeek models)
- `CONSULT_LLM_DEFAULT_MODEL` - Override the default model (optional)
  - Options: `o3` (default), `gemini-2.5-pro`, `deepseek-reasoner`
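For example, the environment variables can be set in your shell before starting the server (the values below are placeholders, not real keys):

```shell
# Placeholder values — substitute your actual API keys
export OPENAI_API_KEY=your-openai-key
export GEMINI_API_KEY=your-google-ai-key
export DEEPSEEK_API_KEY=your-deepseek-key

# Optional: change the default model from o3
export CONSULT_LLM_DEFAULT_MODEL=gemini-2.5-pro
```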
## Usage with Claude Code
### Installation
Add the MCP server to Claude Code:
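A typical command looks like the following; the npm package name `consult-llm-mcp` is an assumption — substitute the actual package name or the path to your local build:

```shell
# Registers the server for the current project
# (package name "consult-llm-mcp" is assumed)
claude mcp add consult-llm -- npx -y consult-llm-mcp
```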
Or for global availability:
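Assuming the same hypothetical package name, passing `--scope user` registers the server for all of your projects rather than just the current one:

```shell
# User-scoped registration: available in every project
claude mcp add --scope user consult-llm -- npx -y consult-llm-mcp
```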
### Example workflows
## MCP Tool: consult_llm

The server provides a single tool called `consult_llm` for asking powerful AI models complex questions.
### Parameters
- `files` (required): Array of file paths to process
  - Markdown files (`.md`) become the main prompt
  - Other files are added as context with file paths and code blocks
- `model` (optional): LLM model to use
  - Options: `o3` (default), `gemini-2.5-pro`, `deepseek-reasoner`
- `git_diff` (optional): Include git diff output as context
  - `files` (required): Specific files to include in diff
  - `repo_path` (optional): Path to git repository (defaults to current directory)
  - `base_ref` (optional): Git reference to compare against (defaults to `HEAD`)
### Example Usage
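The call below is an illustrative sketch built from the parameters documented above; the file names and `base_ref` value are placeholders:

```json
{
  "files": ["question.md", "src/server.ts"],
  "model": "o3",
  "git_diff": {
    "files": ["src/server.ts"],
    "repo_path": ".",
    "base_ref": "main"
  }
}
```

Here `question.md` supplies the main prompt, `src/server.ts` is attached as code context, and the diff of `src/server.ts` against `main` is included as additional context.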
## Supported Models
- `o3`: OpenAI's reasoning model ($2 input / $8 output per million tokens)
- `gemini-2.5-pro`: Google's Gemini 2.5 Pro ($1.25 input / $10 output per million tokens)
- `deepseek-reasoner`: DeepSeek's reasoning model ($0.55 input / $2.19 output per million tokens)
## Logging
All prompts and responses are logged to `~/.consult-llm-mcp/logs/mcp.log` with:
- Tool call parameters
- Full prompts and responses
- Token usage and cost estimates
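To watch the log while working, standard tools suffice, for example:

```shell
# Follow the consult-llm-mcp log in real time
tail -f ~/.consult-llm-mcp/logs/mcp.log
```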
## CLAUDE.md example

To help Claude Code understand when and how to use this tool, you can add the following to your project's `CLAUDE.md` file:
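The original snippet is not reproduced here; the following is an illustrative sketch based on the tool's parameters described above:

```markdown
## Consulting stronger models

When a problem needs deeper analysis (tricky bugs, architecture decisions,
performance issues), use the `consult_llm` MCP tool:

1. Write the question as a markdown file — it becomes the main prompt.
2. Pass that markdown file plus any relevant code files in `files`.
3. Optionally include `git_diff` so the model sees recent changes.
```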
Related MCP Servers
- -securityFlicense-qualityA Model Context Protocol server that enables Claude users to access specialized OpenAI agents (web search, file search, computer actions) and a multi-agent orchestrator through the MCP protocol.Last updated -1Python
- -securityAlicense-qualityAn MCP server that implements Claude Code-like functionality, allowing the AI to analyze codebases, modify files, execute commands, and manage projects through direct file system interactions.Last updated -179PythonMIT License
- -securityAlicense-qualityA minimal MCP Server that provides Claude AI models with the 'think' tool capability, enabling better performance on complex reasoning tasks by allowing the model to pause during response generation for additional thinking steps.Last updated -5251TypeScriptMIT License
- AsecurityFlicenseAqualityAn MCP server that connects Gemini 2.5 Pro to Claude Code, enabling users to generate detailed implementation plans based on their codebase and receive feedback on code changes.Last updated -23Python