# Gemini MCP Server for Claude Code

🤖 **Claude + Gemini = Your Ultimate AI Development Team**
The ultimate development partner for Claude - a Model Context Protocol server that gives Claude access to Google's Gemini 2.5 Pro for extended thinking, code analysis, and problem-solving. Automatically reads files and directories, passing their contents to Gemini for analysis within its 1M token context.
## Why This Server?
Claude is brilliant, but sometimes you need:

- A second opinion on complex architectural decisions - augment Claude's extended thinking with Gemini's perspective (`think_deeper`)
- A massive context window (1M tokens) - Gemini 2.5 Pro can analyze entire codebases, read hundreds of files at once, and provide comprehensive insights (`analyze`)
- Deep code analysis across massive codebases that exceed Claude's context limits (`analyze`)
- Expert debugging for tricky issues with full system context (`debug_issue`)
- Professional code reviews with actionable feedback across entire repositories (`review_code`)
- Pre-commit validation with deep analysis that finds edge cases, validates your implementation against the original requirements, and catches subtle bugs Claude might miss (`review_pending_changes`)
- A senior developer partner to validate and extend ideas (`chat`)
- Dynamic collaboration - Gemini can request additional context from Claude mid-analysis for more thorough insights
This server makes Gemini your development sidekick, handling what Claude can't or extending what Claude starts.
## File & Directory Support
All tools accept both individual files and entire directories. The server:

- Automatically expands directories to find all code files recursively
- Intelligently filters out hidden files, caches, and non-code files
- Handles mixed inputs like `"analyze main.py, src/, and tests/"`
- Manages token limits by loading as many files as possible within Gemini's context
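The expansion and filtering described above can be sketched roughly as follows. This is a simplified illustration, not the server's actual implementation; the extension list and skip rules are assumptions:

```python
from pathlib import Path

# Assumed filter sets -- the real server's rules may differ.
CODE_EXTENSIONS = {".py", ".js", ".ts", ".go", ".java", ".rs", ".c", ".cpp"}
SKIP_DIRS = {"__pycache__", "node_modules", ".git", ".venv"}

def expand_paths(inputs):
    """Expand a mix of files and directories into a flat list of code files."""
    files = []
    for item in inputs:
        path = Path(item)
        if path.is_file():
            files.append(path)
        elif path.is_dir():
            for candidate in sorted(path.rglob("*")):
                if candidate.name.startswith("."):
                    continue  # skip hidden files and directories
                if any(part in SKIP_DIRS for part in candidate.parts):
                    continue  # skip caches and vendored dependencies
                if candidate.is_file() and candidate.suffix in CODE_EXTENSIONS:
                    files.append(candidate)
    return files
```

In the real server, a token-budget check would additionally stop loading files once Gemini's context limit is reached.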
## Quickstart (5 minutes)
### 1. Get a Gemini API Key
Visit Google AI Studio and generate an API key. For best results with Gemini 2.5 Pro, use a paid API key as the free tier has limited access to the latest models.
### 2. Clone the Repository
Clone this repository to a location on your computer:
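The clone command itself did not survive extraction; it presumably looks like this (the URL below is a placeholder, substitute this repository's real clone URL):

```shell
# Placeholder URL -- use this repository's actual clone URL
git clone https://github.com/your-org/gemini-mcp-server.git
```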
Note the full path - you'll need it in the next step:

- macOS/Linux: `/Users/YOUR_USERNAME/gemini-mcp-server`
- Windows: `C:\Users\YOUR_USERNAME\gemini-mcp-server`
### 3. Configure Claude Desktop
Add the server to your `claude_desktop_config.json`:
Find your config file:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Add this configuration for macOS/Linux or Windows (replace with YOUR actual paths):
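The JSON snippets themselves were lost in extraction. A minimal sketch following the standard Claude Desktop MCP config shape (the `server.py` entry point and virtualenv paths are assumptions; adjust to your clone):

macOS/Linux:

```json
{
  "mcpServers": {
    "gemini": {
      "command": "/Users/YOUR_USERNAME/gemini-mcp-server/venv/bin/python",
      "args": ["/Users/YOUR_USERNAME/gemini-mcp-server/server.py"],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

Windows:

```json
{
  "mcpServers": {
    "gemini": {
      "command": "C:\\Users\\YOUR_USERNAME\\gemini-mcp-server\\venv\\Scripts\\python.exe",
      "args": ["C:\\Users\\YOUR_USERNAME\\gemini-mcp-server\\server.py"],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```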
Important:

- Replace `YOUR_USERNAME` with your actual username
- Use the full absolute path where you cloned the repository
- Windows users: note the double backslashes (`\\`) in the path
### 4. Restart Claude Desktop
Completely quit and restart Claude Desktop for the changes to take effect.
### 5. Connect to Claude Code
To use the server in Claude Code, run:
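The exact command was not preserved in this copy; registering an MCP server with the Claude Code CLI generally takes this shape (server name and paths are placeholders):

```shell
claude mcp add gemini -e GEMINI_API_KEY=your-api-key-here -- /path/to/gemini-mcp-server/venv/bin/python /path/to/gemini-mcp-server/server.py
```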
### 6. Start Using It!
Just ask Claude naturally:

- "Use gemini to think deeper about this architecture design" → `think_deeper`
- "Get gemini to review this code for security issues" → `review_code`
- "Get gemini to debug why this test is failing" → `debug_issue`
- "Use gemini to analyze these files to understand the data flow" → `analyze`
- "Brainstorm with gemini about scaling strategies" → `chat`
- "Share my implementation plan with gemini for feedback" → `chat`
- "Get gemini's opinion on my authentication design" → `chat`
## Available Tools
Quick Tool Selection Guide:

- Need deeper thinking? → `think_deeper` (extends Claude's analysis, finds edge cases)
- Code needs review? → `review_code` (bugs, security, performance issues)
- Pre-commit validation? → `review_pending_changes` (validate git changes before committing)
- Something's broken? → `debug_issue` (root cause analysis, error tracing)
- Want to understand code? → `analyze` (architecture, patterns, dependencies)
- Need a thinking partner? → `chat` (brainstorm ideas, get second opinions, validate approaches)
- Check models? → `list_models` (see available Gemini models)
- Server info? → `get_version` (version and configuration details)
Tools Overview:

- `think_deeper` - Extended reasoning and problem-solving
- `review_code` - Professional code review with severity levels
- `review_pending_changes` - Validate git changes before committing
- `debug_issue` - Root cause analysis and debugging
- `analyze` - General-purpose file and code analysis
- `chat` - Collaborative thinking and development conversations
- `list_models` - List available Gemini models
- `get_version` - Get server version and configuration
### 1. `think_deeper` - Extended Reasoning Partner
Get a second opinion to augment Claude's own extended thinking
Example Prompts:
Basic Usage:
Collaborative Workflow:
Key Features:

- Uses Gemini's specialized thinking models for enhanced reasoning capabilities
- Provides a second opinion on Claude's analysis
- Challenges assumptions and identifies edge cases Claude might miss
- Offers alternative perspectives and approaches
- Validates architectural decisions and design patterns
- Can reference specific files for context: `"Use gemini to think deeper about my API design with reference to api/routes.py"`
Triggers: think deeper, ultrathink, extend my analysis, validate my approach
### 2. `review_code` - Professional Code Review
Comprehensive code analysis with prioritized feedback
Example Prompts:
Basic Usage:
Collaborative Workflow:
Key Features:

- Issues prioritized by severity (🔴 CRITICAL → 🟢 LOW)
- Supports specialized reviews: security, performance, quick
- Can enforce coding standards: `"Use gemini to review src/ against PEP8 standards"`
- Filters by severity: `"Get gemini to review auth/ - only report critical vulnerabilities"`
Triggers: review code, check for issues, find bugs, security check
### 3. `review_pending_changes` - Pre-Commit Validation
Comprehensive review of staged/unstaged git changes across multiple repositories
Example Prompts:
Basic Usage:
Collaborative Workflow:
Key Features:
- Recursive repository discovery - finds all git repos including nested ones
- Validates changes against requirements - ensures implementation matches intent
- Detects incomplete changes - finds added functions never called, missing tests, etc.
- Multi-repo support - reviews changes across multiple repositories in one go
- Configurable scope - review staged, unstaged, or compare against branches
- Security focused - catches exposed secrets, vulnerabilities in new code
- Smart truncation - handles large diffs without exceeding context limits
Parameters:

- `path`: Starting directory to search for repos (default: current directory)
- `original_request`: The requirements/ticket for context
- `compare_to`: Compare against a branch/tag instead of local changes
- `review_type`: full|security|performance|quick
- `severity_filter`: Filter by issue severity
- `max_depth`: How deep to search for nested repos
Triggers: review pending changes, check my changes, validate changes, pre-commit review
### 4. `debug_issue` - Expert Debugging Assistant
Root cause analysis for complex problems
Example Prompts:
Basic Usage:
Collaborative Workflow:
Key Features:
- Generates multiple ranked hypotheses for systematic debugging
- Accepts error context, stack traces, and logs
- Can reference relevant files for investigation
- Supports runtime info and previous attempts
- Provides structured root cause analysis with validation steps
- Can request additional context when needed for thorough analysis
Triggers: debug, error, failing, root cause, trace, not working
### 5. `analyze` - Smart File Analysis
General-purpose code understanding and exploration
Example Prompts:
Basic Usage:
Collaborative Workflow:
Key Features:
- Analyzes single files or entire directories
- Supports specialized analysis types: architecture, performance, security, quality
- Uses file paths (not content) for clean terminal output
- Can identify patterns, anti-patterns, and refactoring opportunities
Triggers: analyze, examine, look at, understand, inspect
### 6. `chat` - General Development Chat & Collaborative Thinking
Your thinking partner - bounce ideas, get second opinions, brainstorm collaboratively
Example Prompts:
Basic Usage:
Collaborative Workflow:
Key Features:
- Collaborative thinking partner for your analysis and planning
- Get second opinions on your designs and approaches
- Brainstorm solutions and explore alternatives together
- Validate your checklists and implementation plans
- General development questions and explanations
- Technology comparisons and best practices
- Architecture and design discussions
- Can reference files for context: `"Use gemini to explain this algorithm with context from algorithm.py"`
- Dynamic collaboration: Gemini can request additional files or context during the conversation if needed for a more thorough response
Triggers: ask, explain, compare, suggest, what about, brainstorm, discuss, share my thinking, get opinion
### 7. `list_models` - See Available Gemini Models
### 8. `get_version` - Server Information
## Tool Parameters
All tools that work with files support both individual files and entire directories. The server automatically expands directories, filters for relevant code files, and manages token limits.
### File-Processing Tools
`analyze` - Analyze files or directories

- `files`: List of file paths or directories (required)
- `question`: What to analyze (required)
- `analysis_type`: architecture|performance|security|quality|general
- `output_format`: summary|detailed|actionable
- `thinking_mode`: minimal|low|medium|high|max (default: medium)
`review_code` - Review code files or directories

- `files`: List of file paths or directories (required)
- `review_type`: full|security|performance|quick
- `focus_on`: Specific aspects to focus on
- `standards`: Coding standards to enforce
- `severity_filter`: critical|high|medium|all
- `thinking_mode`: minimal|low|medium|high|max (default: medium)
`debug_issue` - Debug with file context

- `error_description`: Description of the issue (required)
- `error_context`: Stack trace or logs
- `files`: Files or directories related to the issue
- `runtime_info`: Environment details
- `previous_attempts`: What you've tried
- `thinking_mode`: minimal|low|medium|high|max (default: medium)
`think_deeper` - Extended analysis with file context

- `current_analysis`: Your current thinking (required)
- `problem_context`: Additional context
- `focus_areas`: Specific aspects to focus on
- `files`: Files or directories for context
- `thinking_mode`: minimal|low|medium|high|max (default: max)
## Collaborative Workflows
**Design → Review → Implement**

**Code → Review → Fix**

**Debug → Analyze → Solution**
## Pro Tips
### Natural Language Triggers
The server recognizes natural phrases. Just talk normally:
- ❌ "Use the `think_deeper` tool with `current_analysis` parameter..."
- ✅ "Use gemini to think deeper about this approach"
### Automatic Tool Selection
Claude will automatically pick the right tool based on your request:
- "review" → `review_code`
- "debug" → `debug_issue`
- "analyze" → `analyze`
- "think deeper" → `think_deeper`
### Clean Terminal Output
All file operations use paths, not content, so your terminal stays readable even with large files.
### Context Awareness
Tools can reference files for additional context:
### Tool Selection Guidance
To help choose the right tool for your needs:
Decision Flow:

- Have a specific error/exception? → Use `debug_issue`
- Want to find bugs/issues in code? → Use `review_code`
- Want to understand how code works? → Use `analyze`
- Have analysis that needs extension/validation? → Use `think_deeper`
- Want to brainstorm or discuss? → Use `chat`
Key Distinctions:

- `analyze` vs `review_code`: analyze explains, review_code prescribes fixes
- `chat` vs `think_deeper`: chat is open-ended, think_deeper extends specific analysis
- `debug_issue` vs `review_code`: debug diagnoses runtime errors, review finds static issues
## Advanced Features
### Dynamic Context Requests
Tools can request additional context from Claude during execution. When Gemini needs more information to provide a thorough analysis, it will ask Claude for specific files or clarification, enabling true collaborative problem-solving.
Example: If Gemini is debugging an error but needs to see a configuration file that wasn't initially provided, it can request:
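The example request was lost in extraction; a clarification request of this kind plausibly looks something like the following (the field names are illustrative, not the server's exact schema):

```json
{
  "status": "requires_clarification",
  "question": "I need to see the database configuration to diagnose this connection error",
  "files_needed": ["config/database.yml", "src/db_connection.py"]
}
```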
Claude will then provide the requested files and Gemini can continue with a more complete analysis.
### Standardized Response Format
All tools now return structured JSON responses for consistent handling:
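The schema itself did not survive extraction; a sketch of the kind of envelope described (field names are assumptions):

```json
{
  "status": "success",
  "content": "The analysis of your code...",
  "content_type": "markdown"
}
```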
This enables better integration, error handling, and support for the dynamic context request feature.
### Enhanced Thinking Models
All tools support a `thinking_mode` parameter that controls Gemini's thinking budget for deeper reasoning:
Thinking Modes:

- `minimal`: Minimum thinking (128 tokens for Gemini 2.5 Pro)
- `low`: Light reasoning (2,048 token thinking budget)
- `medium`: Balanced reasoning (8,192 token thinking budget - default for all tools except think_deeper)
- `high`: Deep reasoning (16,384 token thinking budget)
- `max`: Maximum reasoning (32,768 token thinking budget - default for think_deeper)
When to use:

- `minimal`: For simple, straightforward tasks
- `low`: For tasks requiring basic reasoning
- `medium`: For most development tasks (default)
- `high`: For complex problems requiring thorough analysis
- `max`: For the most complex problems requiring exhaustive reasoning
Note: Gemini 2.5 Pro requires a minimum of 128 thinking tokens, so thinking cannot be fully disabled.
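The budgets above map naturally to a lookup table; a sketch of how a caller might resolve a `thinking_mode` value (the constant and function names are hypothetical, the numbers come from the table above):

```python
# Token budgets per thinking mode, per the table above.
THINKING_BUDGETS = {
    "minimal": 128,    # floor enforced by Gemini 2.5 Pro
    "low": 2_048,
    "medium": 8_192,   # default for most tools
    "high": 16_384,
    "max": 32_768,     # default for think_deeper
}

def resolve_thinking_budget(mode: str = "medium") -> int:
    """Return the thinking-token budget for a mode, defaulting to medium."""
    if mode not in THINKING_BUDGETS:
        raise ValueError(f"unknown thinking_mode: {mode!r}")
    return THINKING_BUDGETS[mode]
```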
## Configuration
The server includes several configurable properties that control its behavior:
### Model Configuration
- `DEFAULT_MODEL`: `"gemini-2.5-pro-preview-06-05"` - The latest Gemini 2.5 Pro model with native thinking support
- `MAX_CONTEXT_TOKENS`: `1,000,000` - Maximum input context (1M tokens for Gemini 2.5 Pro)
### Temperature Defaults
Different tools use optimized temperature settings:
- `TEMPERATURE_ANALYTICAL`: `0.2` - Used for code review and debugging (focused, deterministic)
- `TEMPERATURE_BALANCED`: `0.5` - Used for general chat (balanced creativity/accuracy)
- `TEMPERATURE_CREATIVE`: `0.7` - Used for deep thinking and architecture (more creative)
## File Path Requirements
All file paths must be absolute paths.
Setup:

- Use absolute paths in all tool calls
- Set `MCP_PROJECT_ROOT` to your project directory for security. The server only allows access to files within this directory.
## Installation
- Clone the repository:
- Create virtual environment:
- Install dependencies:
- Set your Gemini API key:
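The individual commands were lost in extraction; on macOS/Linux the four steps typically look like the following (the repository URL and a `requirements.txt` are assumptions):

```shell
git clone https://github.com/your-org/gemini-mcp-server.git
cd gemini-mcp-server
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
export GEMINI_API_KEY="your-api-key-here"
```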
## How System Prompts Work
The server uses carefully crafted system prompts to give each tool specialized expertise:
### Prompt Architecture
- Centralized Prompts: All system prompts are defined in `prompts/tool_prompts.py`
- Tool Integration: Each tool inherits from `BaseTool` and implements `get_system_prompt()`
- Prompt Flow: `User Request → Tool Selection → System Prompt + Context → Gemini Response`
### Specialized Expertise
Each tool has a unique system prompt that defines its role and approach:
- `think_deeper`: Acts as a senior development partner, challenging assumptions and finding edge cases
- `review_code`: Expert code reviewer with security/performance focus, uses severity levels
- `debug_issue`: Systematic debugger providing root cause analysis and prevention strategies
- `analyze`: Code analyst focusing on architecture, patterns, and actionable insights
### Customization
To modify tool behavior, you can:
- Edit prompts in `prompts/tool_prompts.py` for global changes
- Override `get_system_prompt()` in a tool class for tool-specific changes
- Use the `temperature` parameter to adjust response style (0.2 for focused, 0.7 for creative)
## Contributing
We welcome contributions! The modular architecture makes it easy to add new tools:
1. Create a new tool in `tools/`
2. Inherit from `BaseTool`
3. Implement required methods (including `get_system_prompt()`)
4. Add your system prompt to `prompts/tool_prompts.py`
5. Register your tool in the `TOOLS` dict in `server.py`
See existing tools for examples.
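A minimal sketch of what those steps produce. The `BaseTool` interface shown here is a guess from the text above; the real base class surely defines more required methods:

```python
# tools/word_count.py -- hypothetical example tool.
# This BaseTool stub stands in for the server's real base class.

class BaseTool:
    """Stand-in for the server's actual BaseTool interface."""
    name: str = ""
    description: str = ""

    def get_system_prompt(self) -> str:
        raise NotImplementedError

class WordCountTool(BaseTool):
    name = "word_count"
    description = "Count words in the provided text"

    def get_system_prompt(self) -> str:
        # In the real server this prompt would live in prompts/tool_prompts.py
        return "You are a precise text analyst. Count words exactly."

# In server.py the tool would then be registered, e.g.:
# TOOLS = {"word_count": WordCountTool(), ...}
```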
## Testing
### Unit Tests (No API Key Required)
The project includes comprehensive unit tests that use mocks and don't require a Gemini API key:
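The command itself is missing from this copy; running the suite presumably amounts to something like:

```shell
python -m pytest tests/ -v
```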
### Live Integration Tests (API Key Required)
To test actual API integration:
### GitHub Actions CI/CD
The project includes GitHub Actions workflows that:
- ✅ Run unit tests automatically - No API key needed, uses mocks
- ✅ Test on Python 3.10, 3.11, 3.12 - Ensures compatibility
- ✅ Run linting and formatting checks - Maintains code quality
- 🔒 Run live tests only if API key is available - Optional live verification
The CI pipeline works without any secrets and will pass all tests using mocked responses. Live integration tests only run if a `GEMINI_API_KEY` secret is configured in the repository.
## License
MIT License - see LICENSE file for details.
## Acknowledgments
Built with the power of Claude + Gemini collaboration 🤝
- MCP (Model Context Protocol) by Anthropic
- Claude Code - Your AI coding assistant
- Gemini 2.5 Pro - Extended thinking & analysis engine