Provides access to Google Gemini AI models for text generation with configurable model selection and temperature settings.
1. Click "Install Server".
2. Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
3. In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Gemini MCP Server explain quantum computing in simple terms".

That's it! The server will respond to your query, and you can continue using it as needed.
Gemini MCP Server
An MCP (Model Context Protocol) server that provides access to Google Gemini AI models.
Quick Start
Install dependencies:
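The original command is not shown here; assuming a Node.js project (the default port and bundled test client suggest one), this step would typically be:

```shell
# Assumption: Node.js project with a package.json in the repo root
npm install
```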
Create a `.env` file with your Gemini API key:
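For example (the variable name comes from the Configuration section; the key value is a placeholder):

```shell
# .env — do not commit this file to version control
GEMINI_API_KEY=your-gemini-api-key-here
```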
Start the server:
The server will run at http://localhost:3333/mcp
Available Tool
gemini_generateText
Generate text using Google Gemini models.
Parameters:
- `prompt` (string, required): The text prompt
- `model` (string, optional): Gemini model to use (default: `gemini-2.5-pro`)
- `temperature` (number, optional): Temperature for generation, 0-2 (default: 1)
Returns:
- `text`: Generated text response
- `model`: Model used
- `temperature`: Temperature setting used
Usage Example
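The original example is not shown here. As a sketch, a `tools/call` request for `gemini_generateText` over MCP's JSON-RPC transport might look like the following; the endpoint (`http://localhost:3333/mcp`) comes from the Quick Start above, and the exact wire format depends on the MCP client library you use:

```python
import json

# Hypothetical JSON-RPC 2.0 payload for calling the gemini_generateText tool.
# Parameter names and defaults match the "Available Tool" section above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "gemini_generateText",
        "arguments": {
            "prompt": "Explain quantum computing in simple terms",
            "model": "gemini-2.5-pro",  # optional, default per the docs above
            "temperature": 1,           # optional, range 0-2
        },
    },
}

# This is what an MCP client would POST to http://localhost:3333/mcp
print(json.dumps(request, indent=2))
```

In practice, an MCP-aware client (such as a chat app with MCP support) constructs and sends this request for you.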
Testing
Run the included test client (requires server to be running):
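The exact command depends on how the repository names its client; a hypothetical invocation (the file name `test-client.js` is an assumption) would be:

```shell
# Hypothetical file name; check the repository for the actual script
node test-client.js
```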
Configuration
Environment variables:
- `GEMINI_API_KEY` (required): Your Google Gemini API key
- `PORT` (optional): Server port (default: 3333)
Deployment
Deploy to Google Cloud Run with automated scripts:
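The scripts themselves are not shown here; a minimal manual equivalent, assuming the `gcloud` CLI and a hypothetical service name and region, might look like:

```shell
# Hypothetical sketch; the repo's automated scripts and DEPLOYMENT.md are authoritative
gcloud run deploy gemini-mcp-server \
  --source . \
  --region us-central1 \
  --set-env-vars GEMINI_API_KEY=your-gemini-api-key
```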
For detailed deployment instructions, see DEPLOYMENT.md.
License
ISC