# MCP Gemini Server

## Overview
This project provides a dedicated MCP (Model Context Protocol) server that wraps the `@google/genai` SDK. It exposes Google's Gemini model capabilities as standard MCP tools, allowing other LLMs (such as Cline) or MCP-compatible systems to leverage Gemini's features as a backend workhorse.

This server aims to simplify integration with Gemini models by providing a consistent, tool-based interface managed via the MCP standard.
## Features

- **Core Generation:** Standard (`gemini_generateContent`) and streaming (`gemini_generateContentStream`) text generation.
- **Function Calling:** Enables Gemini models to request the execution of client-defined functions (`gemini_functionCall`).
- **Stateful Chat:** Manages conversational context across multiple turns (`gemini_startChat`, `gemini_sendMessage`, `gemini_sendFunctionResult`).
- **File Handling:** Upload, list, retrieve, and delete files using the Gemini API.
- **Caching:** Create, list, retrieve, update, and delete cached content to optimize prompts.
## Prerequisites

- Node.js (v18 or later)
- An API key from Google AI Studio (https://aistudio.google.com/app/apikey)
- **Important:** The File Handling and Caching APIs are only compatible with Google AI Studio API keys; they are not supported when using Vertex AI credentials. This server does not currently support Vertex AI authentication.
## Installation & Setup

### Installing via Smithery

To install Gemini Server for Claude Desktop automatically via Smithery:
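The original command block was lost in extraction; a typical Smithery invocation looks like the following, where the package identifier is an assumption that should be checked against the Smithery registry:

```shell
# Install the server for the Claude Desktop client via the Smithery CLI.
# "mcp-gemini-server" is an assumed package id, not confirmed by this README.
npx -y @smithery/cli install mcp-gemini-server --client claude
```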
### Installing Manually

1. **Clone/Place Project:** Ensure the `mcp-gemini-server` project directory is accessible on your system.
2. **Install Dependencies:** Navigate to the project directory in your terminal and install dependencies.
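The original command block was lost in extraction; for a standard Node.js project, this step is typically:

```shell
# From inside the mcp-gemini-server project directory
npm install
```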
3. **Build Project:** Compile the TypeScript source code. This step runs the TypeScript compiler (`tsc`) and outputs the JavaScript files to the `./dist` directory (as specified by `outDir` in `tsconfig.json`). The main server entry point will be `dist/server.js`.
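The build command itself was lost in extraction; assuming a conventional npm script that invokes `tsc`, it would be:

```shell
# Runs tsc and emits compiled JavaScript to ./dist (script name is an assumption)
npm run build
```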
4. **Configure MCP Client:** Add the server configuration to your MCP client's settings file (e.g., `cline_mcp_settings.json` for Cline/VS Code, or `claude_desktop_config.json` for the Claude Desktop App). Replace `/path/to/mcp-gemini-server` with the actual path on your system and `YOUR_API_KEY` with your Google AI Studio key.
5. **Restart MCP Client:** Restart your MCP client application (e.g., VS Code with the Cline extension, or the Claude Desktop App) to load the new server configuration. The MCP client will manage starting and stopping the server process.
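The settings snippet was lost in extraction; a minimal sketch of the server entry, assuming the server name `gemini-server` and the `dist/server.js` entry point produced by the build step:

```json
{
  "mcpServers": {
    "gemini-server": {
      "command": "node",
      "args": ["/path/to/mcp-gemini-server/dist/server.js"],
      "env": {
        "GOOGLE_GEMINI_API_KEY": "YOUR_API_KEY"
      },
      "disabled": false
    }
  }
}
```

The exact top-level key and optional fields (such as auto-approval lists) vary by client; consult your MCP client's documentation.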
## Configuration

The server uses environment variables for configuration, passed via the `env` object in the MCP settings:

- `GOOGLE_GEMINI_API_KEY` (Required): Your API key obtained from Google AI Studio.
- `GOOGLE_GEMINI_MODEL` (Optional): Specifies a default Gemini model name (e.g., `gemini-1.5-flash`, `gemini-1.0-pro`). If set, tools that require a model name (such as `gemini_generateContent` and `gemini_startChat`) use this default when the `modelName` parameter is omitted in the tool call. This simplifies client calls when primarily using one model. If this environment variable is not set, the `modelName` parameter becomes required for those tools. See the Google AI documentation for available model names.
## Available Tools

This server provides the following MCP tools. Parameter schemas are defined using Zod for validation and description.

**Note on Optional Parameters:** Many tools accept complex optional parameters (e.g., `generationConfig`, `safetySettings`, `toolConfig`, `history`, `functionDeclarations`, `contents`). These parameters are typically objects or arrays whose structure mirrors the types defined in the underlying `@google/genai` SDK. For the exact structure and available fields within these complex parameters, please refer to:

1. The corresponding `src/tools/*Params.ts` file in this project.
2. The official Google AI JS SDK documentation.
### Core Generation

**`gemini_generateContent`**
- Description: Generates non-streaming text content from a prompt.
- Required Params: `prompt` (string)
- Optional Params: `modelName` (string), `generationConfig` (object), `safetySettings` (array)

**`gemini_generateContentStream`**
- Description: Generates text content via streaming. (Note: the current implementation uses a workaround and collects all chunks before returning the full text.)
- Required Params: `prompt` (string)
- Optional Params: `modelName` (string), `generationConfig` (object), `safetySettings` (array)

### Function Calling

**`gemini_functionCall`**
- Description: Sends a prompt and function declarations to the model, returning either a text response or a requested function call object (as a JSON string).
- Required Params: `prompt` (string), `functionDeclarations` (array)
- Optional Params: `modelName` (string), `generationConfig` (object), `safetySettings` (array), `toolConfig` (object)
### Stateful Chat

**`gemini_startChat`**
- Description: Initiates a new stateful chat session and returns a unique `sessionId`.
- Required Params: None
- Optional Params: `modelName` (string), `history` (array), `tools` (array), `generationConfig` (object), `safetySettings` (array)

**`gemini_sendMessage`**
- Description: Sends a message within an existing chat session.
- Required Params: `sessionId` (string), `message` (string)
- Optional Params: `generationConfig` (object), `safetySettings` (array), `tools` (array), `toolConfig` (object)

**`gemini_sendFunctionResult`**
- Description: Sends the result of a function execution back to a chat session.
- Required Params: `sessionId` (string), `functionResponses` (array)
- Optional Params: `generationConfig` (object), `safetySettings` (array)
### File Handling (Google AI Studio Key Required)

**`gemini_uploadFile`**
- Description: Uploads a file from a local path.
- Required Params: `filePath` (string; must be an absolute path)
- Optional Params: `displayName` (string), `mimeType` (string)

**`gemini_listFiles`**
- Description: Lists previously uploaded files.
- Required Params: None
- Optional Params: `pageSize` (number), `pageToken` (string; note: `pageToken` may not be reliably returned currently)

**`gemini_getFile`**
- Description: Retrieves metadata for a specific uploaded file.
- Required Params: `fileName` (string; e.g., `files/abc123xyz`)

**`gemini_deleteFile`**
- Description: Deletes an uploaded file.
- Required Params: `fileName` (string; e.g., `files/abc123xyz`)
### Caching (Google AI Studio Key Required)

**`gemini_createCache`**
- Description: Creates cached content for compatible models (e.g., `gemini-1.5-flash`).
- Required Params: `contents` (array)
- Optional Params: `modelName` (string), `displayName` (string), `systemInstruction` (object), `ttl` (string; e.g., `'3600s'`)

**`gemini_listCaches`**
- Description: Lists existing cached content.
- Required Params: None
- Optional Params: `pageSize` (number), `pageToken` (string; note: `pageToken` may not be reliably returned currently)

**`gemini_getCache`**
- Description: Retrieves metadata for specific cached content.
- Required Params: `cacheName` (string; e.g., `cachedContents/abc123xyz`)

**`gemini_updateCache`**
- Description: Updates metadata (TTL, `displayName`) for cached content.
- Required Params: `cacheName` (string)
- Optional Params: `ttl` (string), `displayName` (string)

**`gemini_deleteCache`**
- Description: Deletes cached content.
- Required Params: `cacheName` (string; e.g., `cachedContents/abc123xyz`)
## Usage Examples

Here are examples of how an MCP client (like Cline) might call these tools using the `use_mcp_tool` format:

### Example 1: Simple Content Generation (Using Default Model)
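The original example block was lost in extraction; a representative sketch, assuming the server is registered under the name `gemini-server` and `GOOGLE_GEMINI_MODEL` is set so `modelName` can be omitted:

```xml
<use_mcp_tool>
  <server_name>gemini-server</server_name>
  <tool_name>gemini_generateContent</tool_name>
  <arguments>
    {
      "prompt": "Write a short poem about a rubber duck."
    }
  </arguments>
</use_mcp_tool>
```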
### Example 2: Content Generation (Specifying Model & Config)
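The original example block was lost in extraction; a sketch with an explicit model and a `generationConfig` object (the server name `gemini-server` and the specific config values are illustrative):

```xml
<use_mcp_tool>
  <server_name>gemini-server</server_name>
  <tool_name>gemini_generateContent</tool_name>
  <arguments>
    {
      "prompt": "Explain recursion in one paragraph.",
      "modelName": "gemini-1.5-flash",
      "generationConfig": {
        "temperature": 0.7,
        "maxOutputTokens": 500
      }
    }
  </arguments>
</use_mcp_tool>
```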
### Example 3: Starting and Continuing a Chat

**Start Chat:**
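The original call block was lost in extraction; a minimal sketch (server name `gemini-server` is an assumed placeholder; `gemini_startChat` takes no required parameters):

```xml
<use_mcp_tool>
  <server_name>gemini-server</server_name>
  <tool_name>gemini_startChat</tool_name>
  <arguments>
    {}
  </arguments>
</use_mcp_tool>
```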
(Assume the response contains `sessionId: "some-uuid-123"`.)

**Send Message:**
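The original call block was lost in extraction; a sketch reusing the `sessionId` from the previous step (server name and message text are illustrative):

```xml
<use_mcp_tool>
  <server_name>gemini-server</server_name>
  <tool_name>gemini_sendMessage</tool_name>
  <arguments>
    {
      "sessionId": "some-uuid-123",
      "message": "Hello! Can you tell me about the MCP protocol?"
    }
  </arguments>
</use_mcp_tool>
```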
### Example 4: Uploading a File
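The original example block was lost in extraction; a sketch showing the required absolute `filePath` (the path, display name, and server name are illustrative):

```xml
<use_mcp_tool>
  <server_name>gemini-server</server_name>
  <tool_name>gemini_uploadFile</tool_name>
  <arguments>
    {
      "filePath": "/absolute/path/to/report.txt",
      "displayName": "Quarterly Report"
    }
  </arguments>
</use_mcp_tool>
```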
## Error Handling

The server aims to return structured errors using the MCP standard `McpError` type when tool execution fails. This object typically contains:

- `code`: An `ErrorCode` enum value indicating the type of error (e.g., `InvalidParams`, `InternalError`, `PermissionDenied`, `NotFound`).
- `message`: A human-readable description of the error.
- `details`: (Optional) An object potentially containing more specific information from the underlying Gemini SDK error (such as safety block reasons or API error messages) for troubleshooting.

**Common Error Scenarios:**

- **Invalid API Key:** Often results in an `InternalError` with details indicating an authentication failure.
- **Invalid Parameters:** Results in `InvalidParams` (e.g., missing required field, wrong data type).
- **Safety Blocks:** May result in `InternalError` with details indicating `SAFETY` as the block reason or finish reason.
- **File/Cache Not Found:** May result in `NotFound` or `InternalError` depending on how the SDK surfaces the error.
- **Rate Limits:** May result in `ResourceExhausted` or `InternalError`.

Check the `message` and `details` fields of the returned `McpError` for specific clues when troubleshooting.
## Development

This server follows the standard MCP server structure outlined in the project's `.clinerules` and internal documentation. Key patterns include:

- **Service Layer (`src/services`):** Encapsulates interactions with the `@google/genai` SDK, keeping it decoupled from MCP specifics.
- **Tool Layer (`src/tools`):** Adapts service layer functionality to MCP tools, handling parameter mapping and error translation.
- **Zod Schemas (`src/tools/*Params.ts`):** Used extensively for defining tool parameters, providing validation, and generating detailed descriptions crucial for LLM interaction.
- **Configuration (`src/config`):** Centralized management via `ConfigurationManager`.
- **Types (`src/types`):** Clear TypeScript definitions.
## Known Issues

- `gemini_generateContentStream` uses a workaround, collecting all chunks before returning the full text. True streaming to the MCP client is not yet implemented.
- `gemini_listFiles` and `gemini_listCaches` may not reliably return `nextPageToken` due to limitations in iterating the SDK's `Pager` object.
- `gemini_uploadFile` requires absolute file paths when run from the server environment.
- The File Handling and Caching APIs are supported only with Google AI Studio API keys, not on Vertex AI.