The Gemini Bridge server lets AI coding assistants interact with Google's Gemini AI through a lightweight, MCP-compatible interface that uses the official Gemini CLI, so there are no API costs.
• General Queries: Send direct questions or prompts using consult_gemini
• File Analysis: Attach files for detailed code reviews or multi-file analysis using consult_gemini_with_files
• Model Selection: Choose between Gemini models (e.g., "flash" or "pro")
• Context Directory: Specify working directories relevant to queries
• Custom Timeouts: Configure the timeout for complex queries via GEMINI_BRIDGE_TIMEOUT
• Universal Compatibility: Works with any MCP-compatible client including Claude Code, VS Code, and Cursor
• Stateless Operation: Executes queries independently without sessions or caching
Gemini Bridge
A lightweight MCP (Model Context Protocol) server that enables AI coding assistants to interact with Google's Gemini AI through the official CLI. Works with Claude Code, Cursor, VS Code, and other MCP-compatible clients. Designed for simplicity, reliability, and seamless integration.
✨ Features
Direct Gemini CLI Integration: Zero API costs using official Gemini CLI
Simple MCP Tools: Two core functions for basic queries and file analysis
Stateless Operation: No sessions, caching, or complex state management
Production Ready: Robust error handling with a configurable timeout (60 seconds by default)
Minimal Dependencies: Only requires mcp>=1.0.0 and the Gemini CLI
Easy Deployment: Support for both uvx and traditional pip installation
Universal MCP Compatibility: Works with any MCP-compatible AI coding assistant
🚀 Quick Start
Prerequisites
Install the Gemini CLI:
  npm install -g @google/gemini-cli
Authenticate with Gemini:
  gemini auth login
Verify the installation:
  gemini --version
Installation
🎯 Recommended: PyPI Installation
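A minimal sketch, assuming the PyPI package name matches the project name:

```
pip install gemini-bridge
```

With uvx (used throughout the client configs below), no install step is needed: `uvx gemini-bridge` fetches and runs the package on demand.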
Alternative: From Source
Development Installation
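A sketch covering both the from-source and development installs, assuming a standard Python project layout (the repository URL is a placeholder):

```
git clone <repository-url>
cd gemini-bridge
pip install -e .
```

The editable (`-e`) install lets local changes take effect without reinstalling.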
🌐 Multi-Client Support
Gemini Bridge works with any MCP-compatible AI coding assistant - the same server supports multiple clients through different configuration methods.
Supported MCP Clients
Claude Code ✅ (Default)
Cursor ✅
VS Code ✅
Windsurf ✅
Cline ✅
Void ✅
Cherry Studio ✅
Augment ✅
Roo Code ✅
Zencoder ✅
Any MCP-compatible client ✅
Configuration Examples
Global Configuration (~/.cursor/mcp.json):
Project-Specific (.cursor/mcp.json in your project):
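Both locations take the same server entry; a minimal sketch using the uvx command and server name shown elsewhere in this README (the mcpServers key is Cursor's standard MCP config layout):

```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "uvx",
      "args": ["gemini-bridge"]
    }
  }
}
```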
Go to: Settings → Cursor Settings → MCP → Add new global MCP server
Configuration (.vscode/mcp.json in your workspace):
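VS Code's mcp.json uses a servers key rather than mcpServers; a sketch (key names follow VS Code's MCP documentation and are worth verifying for your version):

```json
{
  "servers": {
    "gemini-bridge": {
      "type": "stdio",
      "command": "uvx",
      "args": ["gemini-bridge"]
    }
  }
}
```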
Alternative: Through Extensions
Open Extensions view (Ctrl+Shift+X)
Search for MCP extensions
Add custom server with command: uvx gemini-bridge
Add to your Windsurf MCP configuration:
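A sketch assuming Windsurf uses the common mcpServers layout:

```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "uvx",
      "args": ["gemini-bridge"]
    }
  }
}
```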
Open Cline and click MCP Servers in the top navigation
Select Installed tab → Advanced MCP Settings
Add to cline_mcp_settings.json:
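A sketch assuming Cline's settings file uses the common mcpServers layout:

```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "uvx",
      "args": ["gemini-bridge"]
    }
  }
}
```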
Go to: Settings → MCP → Add MCP Server
Navigate to Settings → MCP Servers → Add Server
Fill in the server details:
Name: gemini-bridge
Type: STDIO
Command: uvx
Arguments: ["gemini-bridge"]
Save the configuration
Using the UI:
Click hamburger menu → Settings → Tools
Click + Add MCP button
Enter command: uvx gemini-bridge
Name: Gemini Bridge
Manual Configuration:
Go to Settings → MCP Servers → Edit Global Config
Add to mcp_settings.json:
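A sketch assuming Roo Code's mcp_settings.json follows the common mcpServers layout:

```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "uvx",
      "args": ["gemini-bridge"]
    }
  }
}
```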
Go to Zencoder menu (...) → Tools → Add Custom MCP
Add configuration:
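A sketch, assuming Zencoder accepts the common mcpServers shape (check its docs for the exact key names):

```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "uvx",
      "args": ["gemini-bridge"]
    }
  }
}
```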
Hit the Install button
For pip-based installations:
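If the package is installed with pip, the console script (name assumed to be gemini-bridge) can be invoked directly:

```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "gemini-bridge",
      "args": []
    }
  }
}
```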
For development/local testing:
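For a local checkout, one option is to point the client at your interpreter and run the package as a module (the module name gemini_bridge is an assumption):

```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "python",
      "args": ["-m", "gemini_bridge"]
    }
  }
}
```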
For npm-style installation (if needed):
Universal Usage
Once configured with any client, use the same two tools:
Ask general questions: "What authentication patterns are used in this codebase?"
Analyze specific files: "Review these auth files for security issues"
The server implementation is identical - only the client configuration differs!
⚙️ Configuration
Timeout Configuration
By default, Gemini Bridge uses a 60-second timeout for all CLI operations. For longer queries (large files, complex analysis), you can configure a custom timeout using the GEMINI_BRIDGE_TIMEOUT environment variable.
Example configurations:
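One way to set the variable is through the env block of the server entry, which most MCP clients support (value in seconds):

```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "uvx",
      "args": ["gemini-bridge"],
      "env": {
        "GEMINI_BRIDGE_TIMEOUT": "120"
      }
    }
  }
}
```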
Timeout Options:
Default: 60 seconds (if not configured)
Range: Any positive integer (seconds)
Per-call override: Supply timeout_seconds to either tool for one-off extensions
Recommended: 120-300 seconds for large file analysis
Invalid values: Fall back to 60 seconds with a warning
🛠️ Available Tools
consult_gemini
Direct CLI bridge for simple queries.
Parameters:
query (string): The question or prompt to send to Gemini
directory (string): Working directory for the query (default: current directory)
model (string, optional): Model to use - "flash" or "pro" (default: "flash")
timeout_seconds (int, optional): Override the execution timeout for this request
Example:
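A sketch of the tool's arguments as a client would send them (the query text and path are illustrative):

```json
{
  "query": "What authentication patterns are used in this codebase?",
  "directory": "/path/to/project",
  "model": "flash"
}
```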
consult_gemini_with_files
CLI bridge with file attachments for detailed analysis.
Parameters:
query (string): The question or prompt to send to Gemini
directory (string): Working directory for the query
files (list): List of file paths relative to the directory
model (string, optional): Model to use - "flash" or "pro" (default: "flash")
timeout_seconds (int, optional): Override the execution timeout for this request
mode (string, optional): Either "inline" (default) to stream file contents or "at_command" to let Gemini CLI resolve @path references itself
Example:
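A sketch of the arguments for a file-attached call (file names and path are hypothetical):

```json
{
  "query": "Review these auth files for security issues",
  "directory": "/path/to/project",
  "files": ["src/auth.py", "src/middleware.py"],
  "model": "pro",
  "mode": "inline"
}
```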
Tip: When scanning large trees, switch to mode="at_command" so the Gemini CLI handles file globbing and truncation natively.
📋 Usage Examples
Basic Code Analysis
Detailed File Review
Multi-file Analysis
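As one illustration of the multi-file case, at_command mode lets the CLI resolve the paths itself rather than streaming contents inline (paths are hypothetical):

```json
{
  "query": "Summarize how these modules interact",
  "directory": "/path/to/project",
  "files": ["src/", "tests/"],
  "mode": "at_command"
}
```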
Large File Safeguards
Inline transfers cap at ~256 KB per file and ~512 KB per request to avoid hangs.
Oversized files are truncated to head/tail snippets with a warning in the MCP response.
Tune the caps with environment variables (GEMINI_BRIDGE_MAX_INLINE_TOTAL_BYTES, etc.) or prefer mode="at_command" for bigger payloads.
🏗️ Architecture
Core Design
CLI-First: Direct subprocess calls to the gemini command
Stateless: Each tool call is independent with no session state
Adaptive Timeout: Defaults to 60 seconds but overridable per request or via env var
Attachment Guardrails: Inline mode enforces lightweight limits; @ mode delegates to Gemini CLI tooling
Simple Error Handling: Clear error messages with fail-fast approach
Project Structure
🔧 Development
Local Testing
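One way to exercise the server outside a client is the MCP Inspector, a standard MCP debugging tool (invocation assumed; it wraps the same uvx command the client configs use):

```
npx @modelcontextprotocol/inspector uvx gemini-bridge
```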
Integration with Claude Code
The server automatically integrates with Claude Code when properly configured through the MCP protocol.
🔍 Troubleshooting
CLI Not Available
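A quick check that the gemini binary is reachable, reusing the install command from the prerequisites:

```shell
# Print where the gemini CLI resolves to, or a hint if it is missing from PATH
if command -v gemini >/dev/null 2>&1; then
  echo "gemini found at: $(command -v gemini)"
else
  echo "gemini not found - install with: npm install -g @google/gemini-cli"
fi
```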
Connection Issues
Verify Gemini CLI is properly authenticated
Check network connectivity
Ensure Claude Code MCP configuration is correct
Check that the gemini command is in your PATH
Common Error Messages
"CLI not available": Gemini CLI is not installed or not in PATH
"Authentication required": Run gemini auth login
"Timeout after 60 seconds": Query took too long, try breaking it into smaller parts
🤝 Contributing
We welcome contributions from the community! Please read our Contributing Guidelines for details on how to get started.
Quick Contributing Guide
Fork the repository
Create a feature branch
Make your changes
Add tests if applicable
Submit a pull request
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🔄 Version History
See CHANGELOG.md for detailed version history.
🆘 Support
Issues: Report bugs or request features via GitHub Issues
Discussions: Join the community discussion
Documentation: Additional docs can be created in the docs/ directory
Focus: A simple, reliable bridge between Claude Code and Gemini AI through the official CLI.