Allows interaction with Google's Gemini AI through the Gemini CLI tool, supporting various query options including model selection, sandbox mode, debug mode, and file context inclusion.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@MCP Gemini CLI summarize this article about AI safety in 3 bullet points".
That's it! The server will respond to your query, and you can continue using it as needed.
MCP Gemini CLI
A Model Context Protocol (MCP) server that provides integration with Google's Gemini CLI tool.
Overview
This MCP server allows you to interact with Google's Gemini AI through the Gemini CLI tool using the Model Context Protocol. It provides a standardized interface for querying Gemini with various options and configurations.
Related MCP server: MCP Gemini API Server
Prerequisites
Node.js 18+
Google Gemini CLI tool installed and configured
MCP-compatible client (like Claude Desktop)
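You can quickly confirm the first two prerequisites from a terminal. This sketch assumes the Gemini CLI installs a binary named `gemini`; adjust the name if your installation differs:

```shell
# Check that Node.js 18+ is available; falls back to a hint if it is not installed
node --version || echo "Node.js 18+ is required"

# Check that the Gemini CLI is on your PATH (binary name `gemini` is an assumption)
command -v gemini >/dev/null 2>&1 || echo "Gemini CLI not found on PATH"
```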
Installation
Clone this repository:
Install dependencies:
Build the project:
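The three steps above correspond to commands along these lines. The repository URL and directory name are placeholders, since they are not given here; substitute the actual values:

```shell
# Clone the repository (replace the placeholder with the real URL)
git clone <repository-url>
cd mcp-gemini-cli

# Install dependencies
npm install

# Compile the TypeScript sources
npm run build
```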
Usage
As an MCP Server
Add to your MCP client configuration (e.g., Claude Desktop):
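For Claude Desktop, an entry in `claude_desktop_config.json` might look like the following. The server name, entry-point path, and build output location are assumptions; point `args` at wherever you cloned and built the project:

```json
{
  "mcpServers": {
    "gemini-cli": {
      "command": "node",
      "args": ["/path/to/mcp-gemini-cli/dist/index.js"]
    }
  }
}
```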
Available Tools
gemini_query
Query Google Gemini AI with various options.
Parameters:
prompt (required): The prompt to send to Gemini
model (optional): Model to use (default: gemini-2.5-pro)
sandbox (optional): Run in sandbox mode
debug (optional): Enable debug mode
all_files (optional): Include all files in context
yolo (optional): Automatically accept all actions
Example:
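The example block appears to be missing here. A plausible tool invocation, using the parameter names listed above with illustrative values, might look like:

```json
{
  "name": "gemini_query",
  "arguments": {
    "prompt": "Summarize the attached files in three bullet points",
    "model": "gemini-2.5-pro",
    "all_files": true
  }
}
```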
Development
Scripts
npm run build - Build the TypeScript project
npm run start - Start the MCP server
npm run dev - Start in development mode with auto-reload
npm run clean - Clean build artifacts
Project Structure
Contributing
Fork the repository
Create a feature branch
Make your changes
Add tests if applicable
Submit a pull request
License
MIT License - see LICENSE file for details.