
🤖 MCP Server Gemini

License: MIT · TypeScript · Node.js

A state-of-the-art Model Context Protocol (MCP) server that provides seamless integration with Google's Gemini AI models. This server enables Claude Desktop and other MCP-compatible clients to leverage the full power of Gemini's advanced AI capabilities.

✨ Features

🧠 Latest Gemini Models

  • Gemini 2.5 Pro - Most capable thinking model for complex reasoning

  • Gemini 2.5 Flash - Fast thinking model with best price/performance

  • Gemini 2.0 Series - Latest generation models with advanced features

  • Gemini 1.5 Series - Proven, reliable models for production use

🚀 Advanced Capabilities

  • 🧠 Thinking Models - Gemini 2.5 series with step-by-step reasoning

  • 🔍 Google Search Grounding - Real-time web information integration

  • 📊 JSON Mode - Structured output with schema validation

  • 🎯 System Instructions - Behavior customization and control

  • 👁️ Vision Support - Image analysis and multimodal capabilities

  • 💬 Conversation Memory - Context preservation across interactions
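
The JSON mode and grounding features above map onto options in Google's official @google/generative-ai SDK, which a server like this one presumably wraps (an assumption, not something this README states). A minimal, standalone sketch of JSON-mode generation with that SDK:

// Standalone sketch, not the server's actual code. Assumes the official
// @google/generative-ai package and a GEMINI_API_KEY in the environment.
import { GoogleGenerativeAI, SchemaType } from "@google/generative-ai";

async function main() {
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");

  // JSON mode: request structured output that conforms to a response schema.
  const model = genAI.getGenerativeModel({
    model: "gemini-2.5-flash",
    generationConfig: {
      responseMimeType: "application/json",
      responseSchema: {
        type: SchemaType.OBJECT,
        properties: {
          keyPoints: { type: SchemaType.ARRAY, items: { type: SchemaType.STRING } },
        },
      },
    },
  });

  const result = await model.generateContent("List three key points about MCP as JSON.");
  console.log(JSON.parse(result.response.text()));
}

main().catch(console.error);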

🛠️ Production Ready

  • TypeScript - Full type safety and modern development

  • Comprehensive Error Handling - Robust error management and recovery

  • Rate Limiting - Built-in protection against API abuse

  • Detailed Logging - Comprehensive monitoring and debugging

  • Input Validation - Secure parameter validation with Zod

  • Retry Logic - Automatic retry with exponential backoff
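
The retry utility itself is not shown in this README; as a rough illustration only, an exponential-backoff wrapper in TypeScript typically looks like this (function name and parameters are hypothetical, not the server's actual utils):

// Hypothetical helper that illustrates the exponential-backoff pattern;
// the server's real retry logic may differ.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait 500 ms, 1000 ms, 2000 ms, ... before the next attempt.
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}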

🚀 Quick Start

Prerequisites

  • Node.js 16+ and npm 7+
  • A Google AI Studio API key
  • Claude Desktop or another MCP-compatible client

Installation

Option 1: Global Installation (Recommended)

npm install -g mcp-server-gemini

Option 2: Local Development

git clone https://github.com/gurr-i/mcp-server-gemini-pro.git
cd mcp-server-gemini-pro
npm install
npm run build

Configuration

1. Set up your API key

Option A: Environment Variable

export GEMINI_API_KEY="your_api_key_here"

Option B: .env file

echo "GEMINI_API_KEY=your_api_key_here" > .env

2. Configure Claude Desktop

Add to your claude_desktop_config.json:

For Global Installation:

{
  "mcpServers": {
    "gemini": {
      "command": "mcp-server-gemini",
      "env": {
        "GEMINI_API_KEY": "your_api_key_here"
      }
    }
  }
}

For Local Installation:

{
  "mcpServers": {
    "gemini": {
      "command": "node",
      "args": ["/path/to/mcp-server-gemini-pro/dist/enhanced-stdio-server.js"],
      "env": {
        "GEMINI_API_KEY": "your_api_key_here"
      }
    }
  }
}

3. Restart Claude Desktop

Close and restart Claude Desktop completely for changes to take effect.

💡 Usage Examples

Once configured, you can use Gemini through Claude Desktop with natural language:

Basic Text Generation

"Use Gemini to explain quantum computing in simple terms"
"Generate a creative story about AI using Gemini 2.5 Pro"

Advanced Features

"Use Gemini with JSON mode to extract key points from this text"
"Use Gemini with grounding to get the latest news about AI"
"Generate a Python function using Gemini's thinking capabilities"

Image Analysis

"Analyze this image with Gemini" (attach image)
"What's in this screenshot using Gemini vision?"

Development Tasks

"Use Gemini to review this code and suggest improvements"
"Generate comprehensive tests for this function using Gemini"

⚙️ Configuration

Environment Variables

The server can be configured using environment variables or a .env file:

Required Configuration

# Google AI Studio API Key (required)
GEMINI_API_KEY=your_api_key_here

Optional Configuration

# Logging level (default: info)
# Options: error, warn, info, debug
LOG_LEVEL=info

# Enable performance metrics (default: false)
ENABLE_METRICS=false

# Rate limiting configuration
RATE_LIMIT_ENABLED=true    # Enable/disable rate limiting (default: true)
RATE_LIMIT_REQUESTS=100    # Max requests per window (default: 100)
RATE_LIMIT_WINDOW=60000    # Time window in ms (default: 60000 = 1 minute)

# Request timeout in milliseconds (default: 30000 = 30 seconds)
REQUEST_TIMEOUT=30000

# Environment mode (default: production)
NODE_ENV=production
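
These variables are validated at startup; the project structure below lists src/config/index.ts as "Environment config with Zod validation". A minimal sketch of that pattern (the exact schema, field names, and defaults here are assumptions, not the project's actual code):

// Hypothetical sketch of Zod-based environment validation.
import { z } from "zod";

const envSchema = z.object({
  GEMINI_API_KEY: z.string().min(1, "GEMINI_API_KEY is required"),
  LOG_LEVEL: z.enum(["error", "warn", "info", "debug"]).default("info"),
  ENABLE_METRICS: z.string().default("false").transform((v) => v === "true"),
  RATE_LIMIT_ENABLED: z.string().default("true").transform((v) => v !== "false"),
  RATE_LIMIT_REQUESTS: z.coerce.number().int().positive().default(100),
  RATE_LIMIT_WINDOW: z.coerce.number().int().positive().default(60000),
  REQUEST_TIMEOUT: z.coerce.number().int().positive().default(30000),
  NODE_ENV: z.enum(["development", "production", "test"]).default("production"),
});

// Fails fast with a readable error if a variable is missing or malformed.
export const config = envSchema.parse(process.env);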

Environment Setup

Development Environment

# .env for development
GEMINI_API_KEY=your_api_key_here
NODE_ENV=development
LOG_LEVEL=debug
RATE_LIMIT_ENABLED=false
REQUEST_TIMEOUT=60000

Production Environment

# .env for production
GEMINI_API_KEY=your_api_key_here
NODE_ENV=production
LOG_LEVEL=warn
RATE_LIMIT_ENABLED=true
RATE_LIMIT_REQUESTS=100
RATE_LIMIT_WINDOW=60000
REQUEST_TIMEOUT=30000
ENABLE_METRICS=true

Claude Desktop Configuration

Configuration File Locations

OS      | Path
macOS   | ~/Library/Application Support/Claude/claude_desktop_config.json
Windows | %APPDATA%\Claude\claude_desktop_config.json
Linux   | ~/.config/Claude/claude_desktop_config.json

Basic Configuration

{
  "mcpServers": {
    "gemini": {
      "command": "mcp-server-gemini",
      "env": {
        "GEMINI_API_KEY": "your_api_key_here"
      }
    }
  }
}

Advanced Configuration

{
  "mcpServers": {
    "gemini": {
      "command": "mcp-server-gemini",
      "env": {
        "GEMINI_API_KEY": "your_api_key_here",
        "LOG_LEVEL": "info",
        "RATE_LIMIT_REQUESTS": "200",
        "REQUEST_TIMEOUT": "45000"
      }
    }
  }
}

Local Development Configuration

{
  "mcpServers": {
    "gemini": {
      "command": "node",
      "args": ["/path/to/mcp-server-gemini-pro/dist/enhanced-stdio-server.js"],
      "cwd": "/path/to/mcp-server-gemini-pro",
      "env": {
        "GEMINI_API_KEY": "your_api_key_here",
        "NODE_ENV": "development",
        "LOG_LEVEL": "debug"
      }
    }
  }
}

🛠️ Available Tools

Tool | Description | Key Features
generate_text | Generate text with advanced features | Thinking models, JSON mode, grounding
analyze_image | Analyze images using vision models | Multi-modal understanding, detailed analysis
count_tokens | Count tokens for cost estimation | Accurate token counting for all models
list_models | List all available Gemini models | Real-time model availability and features
embed_text | Generate text embeddings | High-quality vector representations
get_help | Get usage help and documentation | Self-documenting with examples
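
Clients normally trigger these through natural language (as in the usage examples above), but under the hood each one is a standard MCP tools/call request over stdio. A sketch of such a request for generate_text is shown below; the argument names are assumptions based on the table, so consult the get_help tool for the authoritative schema.

// Hypothetical JSON-RPC 2.0 request an MCP client might write to the server's stdin.
// Argument names are illustrative only.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "generate_text",
    arguments: {
      prompt: "Explain the Model Context Protocol in two sentences.",
      model: "gemini-2.5-flash",
    },
  },
};

process.stdout.write(JSON.stringify(request) + "\n");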

📊 Model Comparison

Model | Context Window | Features | Best For | Speed
gemini-2.5-pro | 2M tokens | Thinking, JSON, Grounding | Complex reasoning, coding | Slower
gemini-2.5-flash ⭐ | 1M tokens | Thinking, JSON, Grounding | General purpose | Fast
gemini-2.5-flash-lite | 1M tokens | Thinking, JSON | High-throughput tasks | Fastest
gemini-2.0-flash | 1M tokens | JSON, Grounding | Standard tasks | Fast
gemini-2.0-flash-lite | 1M tokens | JSON | Simple tasks | Fastest
gemini-2.0-pro-experimental | 2M tokens | JSON, Grounding | Experimental features | Medium
gemini-1.5-pro | 2M tokens | JSON | Legacy support | Medium
gemini-1.5-flash | 1M tokens | JSON | Legacy support | Fast
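
Context windows bound cost as well as capability, and the count_tokens tool exposes a size check over MCP. For illustration, a prompt can also be counted directly with Google's SDK; this is a sketch only and assumes the @google/generative-ai package and a valid GEMINI_API_KEY:

// Standalone sketch using Google's SDK directly, not the server's code.
import { GoogleGenerativeAI } from "@google/generative-ai";

async function estimateTokens(prompt: string): Promise<number> {
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");
  const model = genAI.getGenerativeModel({ model: "gemini-2.5-flash" });
  const { totalTokens } = await model.countTokens(prompt);
  return totalTokens;
}

estimateTokens("Explain quantum computing in simple terms")
  .then((n) => console.log(`Prompt size: ${n} tokens`))
  .catch(console.error);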

🔧 Development

Prerequisites

  • Node.js 16+ (Download)

  • npm 7+ (comes with Node.js)

  • Git for version control

  • Google AI Studio API Key (Get one here)

Setup

# Clone the repository
git clone https://github.com/gurr-i/mcp-server-gemini-pro.git
cd mcp-server-gemini-pro

# Install dependencies
npm install

# Set up environment variables
cp .env.example .env
# Edit .env and add your GEMINI_API_KEY

Available Scripts

Development

npm run dev           # Start development server with hot reload
npm run dev:watch     # Start with file watching (nodemon)
npm run build         # Build for production
npm run build:watch   # Build with watch mode
npm run clean         # Clean build directory

Testing

npm test                   # Run all tests
npm run test:watch         # Run tests in watch mode
npm run test:coverage      # Run tests with coverage report
npm run test:integration   # Run integration tests (requires API key)

Code Quality

npm run lint           # Lint TypeScript code
npm run lint:fix       # Fix linting issues automatically
npm run format         # Format code with Prettier
npm run format:check   # Check code formatting
npm run type-check     # Run TypeScript type checking
npm run validate       # Run all quality checks (lint + test + type-check)

Release & Distribution

npm run prepack   # Prepare package for publishing
npm run release   # Build, validate, and publish to npm

Project Structure

mcp-server-gemini/
├── src/                          # Source code
│   ├── config/                   # Configuration management
│   │   └── index.ts              # Environment config with Zod validation
│   ├── utils/                    # Utility modules
│   │   ├── logger.ts             # Structured logging system
│   │   ├── errors.ts             # Custom error classes & handling
│   │   ├── validation.ts         # Input validation with Zod
│   │   └── rateLimiter.ts        # Rate limiting implementation
│   ├── enhanced-stdio-server.ts  # Main MCP server implementation
│   └── types.ts                  # TypeScript type definitions
├── tests/                        # Test suite
│   ├── unit/                     # Unit tests
│   │   ├── config.test.ts        # Configuration tests
│   │   ├── validation.test.ts    # Validation tests
│   │   └── errors.test.ts        # Error handling tests
│   ├── integration/              # Integration tests
│   │   └── gemini-api.test.ts    # Real API integration tests
│   └── setup.ts                  # Test setup and utilities
├── docs/                         # Documentation
│   ├── api.md                    # API reference
│   ├── configuration.md          # Configuration guide
│   └── troubleshooting.md        # Troubleshooting guide
├── scripts/                      # Build and utility scripts
│   ├── build.sh                  # Production build script
│   ├── dev.sh                    # Development script
│   └── test.sh                   # Test execution script
├── .github/workflows/            # GitHub Actions CI/CD
│   ├── ci.yml                    # Continuous integration
│   └── release.yml               # Automated releases
├── dist/                         # Built output (generated)
├── coverage/                     # Test coverage reports (generated)
└── node_modules/                 # Dependencies (generated)

🧪 Testing

Test Suite Overview

The project includes comprehensive testing with unit tests, integration tests, and code coverage reporting.

Running Tests

All Tests

npm test                # Run all tests (unit tests only by default)
npm run test:watch      # Run tests in watch mode for development
npm run test:coverage   # Run tests with coverage report

Unit Tests

npm test -- --testPathPattern=unit       # Run only unit tests
npm test -- --testNamePattern="config"   # Run specific test suites

Integration Tests

Integration tests require a valid GEMINI_API_KEY and make real API calls:

# Set API key and run integration tests
GEMINI_API_KEY=your_api_key_here npm run test:integration

# Or set it in the .env file and run
npm run test:integration

Test Coverage

npm run test:coverage                  # Generate coverage report
open coverage/lcov-report/index.html   # View coverage report (macOS)

Test Structure

Unit Tests (tests/unit/)

  • Configuration Tests: Environment variable validation, config loading

  • Validation Tests: Input validation, schema validation, sanitization

  • Error Handling Tests: Custom error classes, error recovery, retry logic

  • Utility Tests: Logger, rate limiter, helper functions

Integration Tests (tests/integration/)

  • Gemini API Tests: Real API calls to test connectivity and functionality

  • Model Testing: Verify all supported models work correctly

  • Feature Testing: JSON mode, grounding, embeddings, token counting

Writing Tests

Test File Structure

// tests/unit/example.test.ts
import { describe, it, expect, beforeEach, afterEach } from '@jest/globals';
import { YourModule } from '../../src/your-module.js';

describe('YourModule', () => {
  beforeEach(() => {
    // Setup before each test
  });

  afterEach(() => {
    // Cleanup after each test
  });

  it('should do something', () => {
    // Test implementation
    expect(result).toBe(expected);
  });
});

Custom Matchers

The test suite includes custom Jest matchers:

expect(response).toBeValidMCPResponse(); // Validates MCP response format
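
Such a matcher is registered with expect.extend in the test setup. A simplified sketch of what toBeValidMCPResponse might check follows; the actual implementation in tests/setup.ts may verify more fields:

// Hypothetical sketch; the real matcher in tests/setup.ts may differ.
import { expect } from "@jest/globals";

expect.extend({
  toBeValidMCPResponse(received: unknown) {
    const r = (received ?? {}) as Record<string, unknown>;
    const pass =
      typeof received === "object" &&
      received !== null &&
      r.jsonrpc === "2.0" &&
      r.id !== undefined &&
      (r.result !== undefined || r.error !== undefined);
    return {
      pass,
      message: () =>
        pass
          ? "expected value not to be a valid MCP response"
          : "expected a JSON-RPC 2.0 object with an id and a result or error field",
    };
  },
});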

Test Configuration

Tests are configured in jest.config.js with:

  • TypeScript Support: Full ES modules and TypeScript compilation

  • Coverage Thresholds: Minimum 70% coverage required

  • Test Timeout: 30 seconds for integration tests

  • Setup Files: Automatic test environment setup

🐳 Docker Deployment

Using Docker

Build and Run

# Build the Docker image
docker build -t mcp-server-gemini .

# Run the container
docker run -d \
  --name mcp-server-gemini \
  -e GEMINI_API_KEY=your_api_key_here \
  -e LOG_LEVEL=info \
  mcp-server-gemini

Using Docker Compose

# Create .env file with your API key
echo "GEMINI_API_KEY=your_api_key_here" > .env

# Start the service
docker-compose up -d

# View logs
docker-compose logs -f

# Stop the service
docker-compose down

Development with Docker

# Start development environment
docker-compose --profile dev up

# This mounts source code for live reloading

Environment-Specific Deployments

Production Deployment

# Production build
docker build --target production -t mcp-server-gemini:prod .

# Run with production settings
docker run -d \
  --name mcp-server-gemini-prod \
  --restart unless-stopped \
  -e GEMINI_API_KEY=your_api_key_here \
  -e NODE_ENV=production \
  -e LOG_LEVEL=warn \
  -e RATE_LIMIT_ENABLED=true \
  -e ENABLE_METRICS=true \
  mcp-server-gemini:prod

Health Checks

# Check container health
docker ps
docker logs mcp-server-gemini

# Manual health check
docker exec mcp-server-gemini node -e "console.log('Health check passed')"

🚀 Deployment Options

1. npm Global Installation

# Install globally
npm install -g mcp-server-gemini

# Run directly
GEMINI_API_KEY=your_key mcp-server-gemini

2. Local Installation

# Clone and build
git clone https://github.com/gurr-i/mcp-server-gemini-pro.git
cd mcp-server-gemini-pro
npm install
npm run build

# Run locally
GEMINI_API_KEY=your_key npm start

3. Docker Deployment

# Using Docker Hub (when published)
docker run -e GEMINI_API_KEY=your_key mcp-server-gemini-pro:latest

# Using local build
docker build -t mcp-server-gemini-pro .
docker run -e GEMINI_API_KEY=your_key mcp-server-gemini-pro

4. Process Manager (PM2)

# Install PM2
npm install -g pm2

# Create ecosystem file
cat > ecosystem.config.js << EOF
module.exports = {
  apps: [{
    name: 'mcp-server-gemini',
    script: './dist/enhanced-stdio-server.js',
    env: {
      NODE_ENV: 'production',
      GEMINI_API_KEY: 'your_api_key_here',
      LOG_LEVEL: 'info'
    }
  }]
}
EOF

# Start with PM2
pm2 start ecosystem.config.js
pm2 save
pm2 startup

🔧 Troubleshooting

Common Issues

1. Server Won't Start

# Check if API key is set
echo $GEMINI_API_KEY

# Verify .env file exists and is readable
cat .env | grep GEMINI_API_KEY

# Check file permissions
ls -la .env
chmod 600 .env

2. API Key Issues

# Test API key manually
curl -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"Hello"}]}]}' \
  -X POST "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=YOUR_API_KEY"

3. Claude Desktop Integration

# Verify config file location (macOS)
ls -la ~/Library/Application\ Support/Claude/claude_desktop_config.json

# Validate JSON syntax
cat claude_desktop_config.json | jq .

# Check server installation
which mcp-server-gemini
npm list -g mcp-server-gemini

4. Rate Limiting

# Temporarily disable rate limiting
export RATE_LIMIT_ENABLED=false

# Increase limits
export RATE_LIMIT_REQUESTS=1000
export RATE_LIMIT_WINDOW=60000

Debug Mode

# Enable debug logging
export LOG_LEVEL=debug
npm run dev

# Or for production
export LOG_LEVEL=debug
npm start

Getting Help

🔒 Security

API Key Security

  • Never commit API keys to version control

  • Use environment variables or secure secret management

  • Rotate keys regularly for production use

  • Use different keys for development and production

Rate Limiting

  • Enable rate limiting in production (RATE_LIMIT_ENABLED=true)

  • Configure appropriate limits based on your usage patterns

  • Monitor API usage to prevent quota exhaustion

Input Validation

  • All inputs are automatically validated and sanitized

  • XSS and injection protection built-in

  • Schema validation for all tool parameters
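
As a concrete illustration of the last point, each tool's parameters can be described once as a Zod schema and every incoming call parsed against it before anything reaches the Gemini API. The sketch below is hypothetical: the field names mirror the generate_text tool but are assumptions, not the project's actual src/utils/validation.ts.

// Hypothetical sketch of per-tool parameter validation with Zod.
import { z } from "zod";

const generateTextParams = z.object({
  prompt: z.string().min(1).max(100_000),
  model: z.string().default("gemini-2.5-flash"),
  temperature: z.number().min(0).max(2).optional(),
});

export function validateGenerateTextArgs(args: unknown) {
  // Throws a ZodError naming the offending field if validation fails.
  return generateTextParams.parse(args);
}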

Container Security

  • Runs as non-root user in Docker

  • Read-only filesystem with minimal privileges

  • Security scanning in CI/CD pipeline

📚 Documentation

🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Workflow

  1. Fork the repository

  2. Create a feature branch

  3. Make your changes

  4. Add tests

  5. Run npm run validate

  6. Submit a pull request

📄 License

MIT License - see LICENSE file for details.

🙏 Acknowledgments

  • Google AI for the Gemini API

  • Anthropic for the Model Context Protocol

  • The open-source community for inspiration and feedback

📞 Support


