The GemForge-Gemini-Tools-MCP server integrates Google's Gemini AI with the MCP ecosystem, providing AI-powered capabilities through specialized tools:
- Real-Time Web Access: Use gemini_search to retrieve current information and fact-check data
- Complex Problem Solving: Apply gemini_reason for step-by-step solutions to math, science, and coding challenges
- Code Analysis: Leverage gemini_code to analyze codebases, understand structure, and suggest improvements
- Multi-File Operations: Process 60+ file formats (text, PDF, images, XML) with gemini_fileops for summarization, extraction, and analysis
Additional features include cross-ecosystem integration with other MCP agents, intelligent model selection based on task requirements, and enterprise-grade reliability with error handling, rate limit management, and automatic fallback mechanisms.
Provides specialized tools for interacting with Google's Gemini AI models, featuring intelligent model selection based on task type, advanced file handling capabilities, and optimized prompts for different use cases such as search, reasoning, code analysis, and file operations.
GemForge (Gemini Tools)
Overview
GemForge-Gemini-Tools-MCP: Enterprise-grade Gemini integration for your favorite MCP agents. Supercharge Claude, Roo Code, and Windsurf with codebase analysis, live search, text/PDF/image processing, and more.
Why GemForge?
GemForge is the essential bridge between Google's Gemini AI and the MCP ecosystem:
- Real-Time Web Access: Fetch breaking news, market trends, and current data with gemini_search
- Advanced Reasoning: Process complex logic problems with step-by-step thinking via gemini_reason
- Code Mastery: Analyze full repositories, generate solutions, and debug code with gemini_code
- Multi-File Processing: Handle 60+ file formats including PDFs, images, and more with gemini_fileops
- Intelligent Model Selection: Automatically routes to optimal Gemini model for each task
- Enterprise-Ready: Robust error handling, rate limit management, and API fallback mechanisms
Quick Start
One-Line Install
Manual Setup
- Create a configuration file (claude_desktop_config.json):
- Install and run:
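The project's actual configuration snippet is not shown here; as a rough sketch, an MCP server entry in claude_desktop_config.json generally follows the shape below. The command, package name, and GEMINI_API_KEY variable are illustrative assumptions, not confirmed values from this project:

```json
{
  "mcpServers": {
    "gemforge": {
      "command": "npx",
      "args": ["-y", "gemforge-mcp"],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

Consult the project's repository for the exact command and environment variables it expects.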
Heavy-Duty Reliability
GemForge is built for production environments:
- Support for 60+ File Types: Process everything from code to documents to images
- Automatic Model Fallbacks: Continues functioning even during rate limits or service disruptions
- Enterprise-Grade Error Logging: Detailed diagnostics for troubleshooting
- API Resilience: Exponential backoff, retry logic, and seamless model switching
- Full Repository Support: Analyze entire codebases with configurable inclusion/exclusion patterns
- XML Content Processing: Specialized handling for structured data
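The retry-and-fallback behavior described above can be sketched as follows. This is an illustrative pattern, not GemForge's actual implementation; the withRetryAndFallback helper, its signature, and the model ordering are assumptions for the sake of the example:

```typescript
// Illustrative sketch only (not GemForge's real code): retry a Gemini call
// with exponential backoff, then fall back to the next model in the list.
type GeminiCall = (modelId: string) => Promise<string>;

async function withRetryAndFallback(
  call: GeminiCall,
  models: string[],      // ordered preference: primary model first, fallbacks after
  maxRetries = 3,        // attempts per model before switching
  baseDelayMs = 500,     // backoff starts here and doubles each attempt
): Promise<string> {
  let lastError: unknown;
  for (const model of models) {
    for (let attempt = 0; attempt < maxRetries; attempt++) {
      try {
        return await call(model);
      } catch (err) {
        lastError = err;
        // Exponential backoff: baseDelayMs, 2x, 4x, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
    // Retries exhausted for this model; seamlessly switch to the next one.
  }
  throw lastError;
}
```

A production version would typically add jitter to the delays and distinguish rate-limit errors (retryable) from permanent failures; the server's actual backoff parameters may differ.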
Key Tools
| Tool | Description | Key Capability |
|---|---|---|
| gemini_search | Web-connected information retrieval | Real-time data access |
| gemini_reason | Complex problem solving with step-by-step logic | Transparent reasoning process |
| gemini_code | Deep code understanding and generation | Full repository analysis |
| gemini_fileops | Multi-file processing across 60+ formats | Document comparison and transformation |
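As an illustration of how an MCP client invokes one of these tools, a tools/call request for gemini_search might look like the sketch below. The argument name query is an assumption; consult the tool's published schema for the actual parameter names:

```json
{
  "method": "tools/call",
  "params": {
    "name": "gemini_search",
    "arguments": {
      "query": "latest Gemini API release notes"
    }
  }
}
```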
Configuration
GemForge offers flexible configuration options:
GemForge intelligently selects the best model for each task:
- gemini_search: uses gemini-2.5-flash for speed and search integration
- gemini_reason: uses gemini-2.5-pro for deep reasoning capabilities
- gemini_code: uses gemini-2.5-pro for complex code understanding
- gemini_fileops: selects between gemini-2.0-flash-lite and gemini-1.5-pro based on file size
Override the selection with the model_id parameter in any tool call, or set the DEFAULT_MODEL_ID environment variable.
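To make the override concrete, here is a sketch of a tool call that pins gemini_code to a specific model via model_id. The model_id parameter comes from the documentation above; the other argument names are assumptions for illustration only:

```json
{
  "method": "tools/call",
  "params": {
    "name": "gemini_code",
    "arguments": {
      "question": "Summarize the architecture of this repository",
      "directory_path": "./src",
      "model_id": "gemini-2.5-flash"
    }
  }
}
```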
Deployment
Smithery.ai
One-click deployment via Smithery.ai
Docker
Self-Hosted
Use our MCP.so Directory listing for integration instructions.
What Sets GemForge Apart?
- Cross-Ecosystem Power: Bridge Google's AI with Claude and other MCP agents
- Multi-File Analysis: Compare documents, images, or code versions
- Smart Routing: Automatic model selection based on task requirements
- Production-Ready: Built for enterprise environments
Community & Support
- Join Us: MCP Discord | GemForge Discord
- Contribute: GitHub Discussions
- Feedback: Open an issue or share thoughts on Discord
Documentation
Visit our Documentation Site for:
- Advanced usage tutorials
- API reference
- Troubleshooting tips
License
Licensed under the MIT License. See LICENSE for details.
Acknowledgments
- Google Gemini API for providing the underlying AI capabilities
- Model Context Protocol (MCP) for standardizing AI tool interfaces
Remote-capable server: this server can be hosted and run remotely because it primarily relies on remote services and has no dependency on the local environment.
Related MCP Servers
- A Gemini API interface for MCP hosts that intelligently selects models for the task at hand, delivering optimal performance and minimal token cost with seamless integration. (TypeScript, MIT License)
- A metacognitive pattern-interrupt system that helps prevent AI assistants from taking overcomplicated reasoning paths by providing external validation, simplification guidance, and learning mechanisms. (TypeScript, MIT License)
- A server that enables Claude Desktop to generate images using Google's Gemini AI models through the Model Context Protocol (MCP). (JavaScript, MIT License)
- An MCP server implementation that allows using Google's Gemini AI models (specifically Gemini 1.5 Pro) through Claude or other MCP clients via the Model Context Protocol. (JavaScript)