Gemini MCP Server for Claude Code
Give Claude Code access to Google's Gemini AI models. Get second opinions, compare approaches, and leverage Gemini's capabilities—all from within your Claude Code session.
Quick Start
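A condensed setup might look like the following; the repository URL and server name are placeholders, and the exact `claude mcp add` flags may vary by Claude Code version (see Installation & Setup below for the full walkthrough):

```shell
# Clone, install dependencies, and build (repository URL is a placeholder)
git clone <your-repo-url> gemini-mcp-server
cd gemini-mcp-server
npm install
npm run build

# Register the server with Claude Code (flags are illustrative)
claude mcp add gemini \
  -e GEMINI_API_KEY=your-api-key \
  -e GEMINI_DEFAULT_MODEL=gemini-2.0-flash \
  -- node "$(pwd)/dist/app.js"
```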
Then ask Claude:
"Ask Gemini to explain the tradeoffs between microservices and monoliths"
Features
Built for Claude Code - Seamlessly integrates with your Claude Code workflow
Streaming Responses - Enabled by default for real-time output
Multi-turn Conversations - Maintain context across multiple Gemini queries
Configurable Model - Set your preferred Gemini model via environment variable
Token Counting - Estimate costs before making queries
Type-Safe - Built with strict TypeScript
Well-Tested - 100% domain layer test coverage
Prerequisites
Claude Code - Installation guide
Node.js 20+ - Download
Gemini API Key - Get one from Google AI Studio
Installation & Setup
Step 1: Clone and Build
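Assuming a standard npm workflow (the repository URL is a placeholder):

```shell
# Clone the repository and build the server
git clone <your-repo-url> gemini-mcp-server
cd gemini-mcp-server
npm install
npm run build   # compiles the server to dist/app.js
```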
Step 2: Add to Claude Code
Option A: Using the CLI (Recommended)
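A sketch of the registration command; the server name `gemini` is a placeholder, and the exact flags may differ across Claude Code versions (check `claude mcp add --help`):

```shell
# Register the built server with Claude Code, passing env vars with -e
claude mcp add gemini \
  -e GEMINI_API_KEY=your-api-key \
  -e GEMINI_DEFAULT_MODEL=gemini-2.0-flash \
  -- node /absolute/path/to/gemini-mcp-server/dist/app.js
```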
Option B: Manual Configuration
Edit your Claude Code settings file (~/.claude.json):
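A minimal entry of the usual `mcpServers` shape; the server name, path, and key values are placeholders:

```json
{
  "mcpServers": {
    "gemini": {
      "command": "node",
      "args": ["/absolute/path/to/gemini-mcp-server/dist/app.js"],
      "env": {
        "GEMINI_API_KEY": "your-api-key",
        "GEMINI_DEFAULT_MODEL": "gemini-2.0-flash"
      }
    }
  }
}
```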
See Configuration for all available options and supported models.
Step 3: Verify Installation
Start Claude Code and verify the server is connected:
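For example:

```shell
claude mcp list   # the server should be listed as connected
```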
Then ask:
"What Gemini models are available?"
If configured correctly, Claude will use the list_gemini_models tool and show you the available models.
Using with Claude Code
Once installed, you can ask Claude to use Gemini in natural language. Here are some examples:
Get a Second Opinion
Compare Solutions
Leverage Gemini's Strengths
Check Token Usage Before Querying
Multi-turn Conversations
Configuration
Configure the server using environment variables:
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `GEMINI_API_KEY` | Yes | - | Your Gemini API key from Google AI Studio |
| `GEMINI_DEFAULT_MODEL` | Yes | - | Gemini model to use for queries |
|  | No |  | Request timeout in milliseconds |
|  | No |  | Log level |
Other MCP Clients
While this server is optimized for Claude Code, it works with any MCP-compatible client.
Claude Desktop
Add to your Claude Desktop configuration:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
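An entry of the usual Claude Desktop shape (server name, path, and key values are placeholders):

```json
{
  "mcpServers": {
    "gemini": {
      "command": "node",
      "args": ["/absolute/path/to/gemini-mcp-server/dist/app.js"],
      "env": {
        "GEMINI_API_KEY": "your-api-key",
        "GEMINI_DEFAULT_MODEL": "gemini-2.0-flash"
      }
    }
  }
}
```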
Generic stdio Clients
This server uses stdio transport. Start it with:
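A minimal invocation, assuming the environment variables from the Configuration section (the path is a placeholder):

```shell
# Launch the server directly; it speaks MCP over stdin/stdout
GEMINI_API_KEY=your-api-key \
GEMINI_DEFAULT_MODEL=gemini-2.0-flash \
node /absolute/path/to/gemini-mcp-server/dist/app.js
```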
The server communicates via stdin/stdout using the MCP protocol.
Development
Available Scripts
| Command | Description |
|---------|-------------|
| `npm run build` | Compile TypeScript to JavaScript |
|  | Run in development mode with hot reload |
|  | Run the compiled server |
| `npm test` | Run all tests |
|  | Run tests with coverage report |
| `npm run lint` | Check code with ESLint |
|  | Fix ESLint issues automatically |
|  | Type-check without emitting files |
Project Structure
Tool Reference
Technical details for developers integrating with or extending the MCP tools.
query_gemini
Query Google's Gemini AI models for text generation, reasoning, and analysis tasks. The model is configured via the GEMINI_DEFAULT_MODEL environment variable.
Parameters:
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `prompt` | string | Yes | - | The prompt to send to Gemini (1-100,000 chars) |
| `history` | array | No | - | Previous conversation turns for multi-turn conversations |
| `stream` | boolean | No | `true` | Stream response progressively |
History Array Item Schema:
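The authoritative schema lives in the server's source; purely as an illustration, an item pairing a role with its text might look like this (field names are assumptions, not confirmed by this README):

```json
{
  "role": "user",
  "content": "Explain the tradeoffs between microservices and monoliths"
}
```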
Example Response:
list_gemini_models
List popular Gemini AI models that can be configured via the GEMINI_DEFAULT_MODEL environment variable.
Parameters: None
Example Response:
count_gemini_tokens
Count the number of tokens in a text string for the configured Gemini model.
Parameters:
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `text` | string | Yes | - | The text to count tokens for (1-1,000,000 chars) |
Example Response:
Architecture
This project follows Clean Architecture (Ports and Adapters) principles:
Domain Layer - Core business logic with zero external dependencies
Infrastructure Layer - External integrations (Gemini SDK, MCP SDK)
Strict Dependency Rule - Dependencies always point inward
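As a sketch of the dependency rule (all names here are illustrative, not the project's actual identifiers): the domain layer defines a port, a use case depends only on that port, and the infrastructure layer supplies an adapter at the edge.

```typescript
// Domain layer: a port with zero external dependencies.
interface TextModelPort {
  generate(prompt: string): Promise<string>;
}

// Domain layer: a use case that depends only on the port, never on an SDK.
async function secondOpinion(model: TextModelPort, code: string): Promise<string> {
  return model.generate(`Give a second opinion on:\n${code}`);
}

// Infrastructure layer: an adapter. The real one would wrap the Gemini SDK;
// this fake stands in to show that dependencies point inward.
class FakeGeminiAdapter implements TextModelPort {
  async generate(prompt: string): Promise<string> {
    return `model reply to ${prompt.length} chars`;
  }
}

// Wiring happens at the edge: the adapter is injected into the use case.
secondOpinion(new FakeGeminiAdapter(), "const x = 1;").then((reply) => {
  console.log(reply);
});
```

Because the domain sees only `TextModelPort`, the Gemini SDK can be swapped or mocked without touching the business logic, which is what makes 100% domain-layer test coverage practical.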
For detailed architectural documentation, see ARCHITECTURE.md.
Troubleshooting
Claude Code Issues
"Server not found" or tools not appearing
Verify the MCP server is added: `claude mcp list`
Check that the path to `dist/app.js` is absolute and correct
Ensure the project has been built: `npm run build`
"GEMINI_API_KEY is required"
Verify your API key is set in the MCP configuration
Check with `claude mcp list` to see the environment variables
Server crashes on startup
Check your Node.js version: `node --version` (must be 20+)
Verify dependencies are installed: `npm install`
API Issues
"Rate limit exceeded"
Gemini API has rate limits; wait and retry
Consider using `gemini-2.0-flash` for higher rate limits
"Content filtered" error
Gemini has content safety filters
Rephrase your prompt to avoid triggering filters
Streaming not working
Streaming is enabled by default
Set `stream: false` in your query if needed
Debug Mode
Enable debug logging for troubleshooting:
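Assuming the log-level variable is named `LOG_LEVEL` (check the Configuration section for the actual name):

```shell
# Variable name is an assumption; path is a placeholder
LOG_LEVEL=debug node /absolute/path/to/gemini-mcp-server/dist/app.js
```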
Contributing
Contributions are welcome! Please follow these steps:
Fork the repository
Create a feature branch: `git checkout -b feat/your-feature`
Make your changes following the code standards in CLAUDE.md
Run tests: `npm test`
Run linting: `npm run lint`
Commit with conventional commits: `git commit -m "feat: add new feature"`
Push and create a Pull Request
Code Standards
TypeScript strict mode required
All exported functions need explicit return types
Use the `neverthrow` Result pattern for error handling
Validate inputs with Zod at boundaries
100% test coverage for domain layer
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Acknowledgments
Claude Code - The CLI tool this server is built for
Model Context Protocol - The protocol enabling this integration
Google Gemini - The AI models powering this server