
Gemini MCP Server


A Model Context Protocol (MCP) server that provides Google Gemini AI capabilities to MCP-compatible clients like Claude Desktop and Claude Code.

Overview

This MCP server acts as a bridge between MCP clients and Google Gemini models, enabling:

  • Multi-turn conversations with session management

  • File and image analysis with glob pattern support

  • Automatic model selection based on content length

  • Deep thinking mode with reasoning output

  • Google Search integration for up-to-date information

Prerequisites

1. AIStudioProxyAPI Backend

This MCP server requires AIStudioProxyAPI as the backend service.

# Clone and set up AIStudioProxyAPI
git clone https://github.com/CJackHwang/AIstudioProxyAPI.git
cd AIstudioProxyAPI
poetry install
poetry run python launch_camoufox.py --headless

The API will be available at http://127.0.0.1:2048 by default.

2. uv Package Manager

# Install uv (recommended)
curl -LsSf https://astral.sh/uv/install.sh | sh

Installation

# Clone this repository
git clone https://github.com/YOUR_USERNAME/aistudio-gemini-mcp.git
cd aistudio-gemini-mcp

# Install dependencies
uv sync

Configuration

Environment Variables

Variable              Default                  Description
GEMINI_API_BASE_URL   http://127.0.0.1:2048    AIStudioProxyAPI endpoint
GEMINI_API_KEY        (empty)                  Optional API key
GEMINI_PROJECT_ROOT   $PWD                     Root directory for file resolution
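
The server presumably reads these variables at startup. A minimal sketch of that pattern (illustrative only, not the server's actual code):

```python
import os

# Defaults mirror the table above; the variable names are the documented ones.
GEMINI_API_BASE_URL = os.environ.get("GEMINI_API_BASE_URL", "http://127.0.0.1:2048")
GEMINI_API_KEY = os.environ.get("GEMINI_API_KEY", "")          # optional
GEMINI_PROJECT_ROOT = os.environ.get("GEMINI_PROJECT_ROOT", os.getcwd())
```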

Claude Desktop / Claude Code

Add to ~/.claude/mcp.json:

{
  "mcpServers": {
    "gemini": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/aistudio-gemini-mcp", "python", "server.py"],
      "env": {
        "GEMINI_API_BASE_URL": "http://127.0.0.1:2048"
      }
    }
  }
}

Tools

gemini_chat

Send a message to Google Gemini with optional file attachments.

Parameter         Type          Required  Description
prompt            string        Yes       Message to send (1-100,000 chars)
file              list[string]  No        File paths or glob patterns
session_id        string        No        Session ID ("last" for recent)
model             string        No        Override model selection
system_prompt     string        No        System context
temperature       float         No        Sampling temperature (0.0-2.0)
max_tokens        int           No        Max response tokens
response_format   enum          No        "markdown" or "json"

Examples:

# Simple query
gemini_chat(prompt="Explain quantum computing")

# With file
gemini_chat(prompt="Review this code", file=["main.py"])

# With image
gemini_chat(prompt="Describe this", file=["photo.png"])

# Continue conversation
gemini_chat(prompt="Tell me more", session_id="last")

# Multiple files
gemini_chat(prompt="Analyze", file=["src/**/*.py"])

gemini_list_models

List available Gemini models.

Parameter         Type    Required  Description
filter_text       string  No        Filter models by name
response_format   enum    No        "markdown" or "json"

Model Selection

Auto-selects model based on content length:

Content Size     Model
≤ 8,000 chars    gemini-3-pro-preview
> 8,000 chars    gemini-2.5-pro
Fallback         gemini-2.5-flash

Features

Session Management

  • Automatic session creation

  • Use "last" to continue recent conversation

  • LRU eviction (max 50 sessions)
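
LRU eviction with a 50-session cap can be sketched with an OrderedDict (the server's actual bookkeeping may differ; class and method names here are illustrative):

```python
from collections import OrderedDict

MAX_SESSIONS = 50

class SessionStore:
    """Keeps at most MAX_SESSIONS conversations, evicting the least recently used."""

    def __init__(self) -> None:
        self._sessions: OrderedDict[str, list] = OrderedDict()

    def get(self, session_id: str) -> list:
        if session_id == "last" and self._sessions:
            session_id = next(reversed(self._sessions))  # most recently used
        history = self._sessions.setdefault(session_id, [])
        self._sessions.move_to_end(session_id)           # mark as most recent
        if len(self._sessions) > MAX_SESSIONS:
            self._sessions.popitem(last=False)           # evict the oldest
        return history
```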

File Support

  • Images: PNG, JPG, JPEG, GIF, WebP, BMP

  • Text: Any text-based file with auto-encoding detection

  • Glob patterns: *.py, src/**/*.ts, etc.

Built-in Capabilities

  • reasoning_effort: high - Deep thinking mode

  • google_search - Web search integration

  • Automatic retry with model fallback
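
Retry with model fallback might look like this sketch, where `send` is a stand-in for whatever callable actually performs the request (not the server's real API):

```python
def call_with_fallback(send, prompt, models=("gemini-2.5-pro", "gemini-2.5-flash")):
    """Try each model in order; return the first successful response.

    `send` is any callable (model, prompt) -> str that raises on failure.
    """
    last_error = None
    for model in models:
        try:
            return send(model, prompt)
        except Exception as exc:      # in practice, catch the client's error type
            last_error = exc
    raise RuntimeError("all models failed") from last_error
```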

Running Standalone

# Start the MCP server
uv run python server.py

Project Structure

aistudio-gemini-mcp/
├── server.py                 # MCP server implementation
├── pyproject.toml            # Project configuration
├── uv.lock                   # Dependency lock file
├── README.md                 # This file
├── LICENSE                   # MIT License
└── mcp_config_example.json

License

MIT License - see LICENSE for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
