
Gemini MCP Server

by bobvasic

πŸ€– Gemini MCP Server

Customized for Warp Terminal

Model Context Protocol Server for Google Gemini API

Optimized for Modern Terminal Workflows

License: MIT Node.js MCP Gemini

Seamlessly integrate Google Gemini AI into your Warp terminal workflow

Features β€’ Installation β€’ Configuration β€’ Usage β€’ API Reference β€’ Contributing


πŸ“‹ Overview

Gemini MCP Server is a Model Context Protocol implementation that brings Google Gemini's powerful AI capabilities directly into your Warp terminal. Built with enterprise-grade standards, this server enables conversational AI, multi-turn dialogues, and intelligent code analysis through simple, well-defined tools.

Why Use This?

✨ Zero Configuration - Works out of the box with Warp terminal
πŸ”’ Secure by Default - API keys stored in environment variables
⚑ High Performance - Optimized for rapid response times
🎯 Purpose-Built Tools - Three focused tools for maximum utility
🌐 Open Source - MIT licensed, community-driven development


✨ Features

πŸ’¬ Single-Turn Chat

`gemini_chat`

Quick, stateless conversations with Gemini. Perfect for one-off questions, code generation, or content creation.

πŸ”„ Multi-Turn Conversations

`gemini_chat_with_history`

Maintain context across multiple exchanges. Build complex dialogues and iterative problem-solving sessions.

πŸ” Code Analysis

`gemini_analyze_code`

Deep code review, bug detection, optimization suggestions, and explanations. Supports multiple programming languages.


πŸš€ Installation

Prerequisites

  • Node.js β‰₯ 18.0.0

  • npm β‰₯ 9.0.0

  • Warp Terminal (latest version)

  • Google Gemini API Key (Get one here)

Quick Start

```bash
# Clone or download the repository
git clone https://github.com/bobvasic/gemini-mcp-server.git
cd gemini-mcp-server

# Install dependencies
npm install

# Run automated setup (recommended)
./setup.sh YOUR_GEMINI_API_KEY
```

Manual Installation

```bash
# Install dependencies
npm install

# Make scripts executable
chmod +x index.js setup.sh
```

βš™οΈ Configuration

Method 1: Automated Setup (Recommended)

```bash
./setup.sh YOUR_GEMINI_API_KEY
```

This script automatically:

  • Creates Warp MCP configuration

  • Sets up your API key securely

  • Validates the installation

Method 2: Manual Configuration

Step 1: Get Your API Key

  1. Visit Google AI Studio

  2. Sign in with your Google account

  3. Generate a new API key

  4. Copy the key (keep it secure!)

Step 2: Configure Warp

Create or edit `~/.config/warp/mcp.json`:

```json
{
  "mcpServers": {
    "gemini": {
      "command": "node",
      "args": ["${HOME}/gemini-mcp-server/index.js"],
      "env": {
        "GEMINI_API_KEY": "your-actual-api-key-here"
      }
    }
  }
}
```

⚠️ Security Best Practice: Never commit your API key to version control. Use environment variables for production deployments.

Step 3: Restart Warp

Completely quit and restart Warp terminal for changes to take effect.


πŸ’‘ Usage

Testing the Server

Verify your installation works:

```bash
export GEMINI_API_KEY="your-api-key"
cd gemini-mcp-server
npm start
```

You should see: `Gemini MCP Server running on stdio`
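For a sanity check beyond `npm start`, note that an MCP client drives the server with newline-delimited JSON-RPC messages on stdin. A minimal sketch of those messages (field names follow the MCP specification, not this project's code; the snippet only builds and prints them, it does not start the server):

```javascript
// Sketch of the JSON-RPC messages an MCP client sends over stdio
// to discover a server's tools. Purely illustrative.
const initialize = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    capabilities: {},
    clientInfo: { name: "demo-client", version: "0.0.1" },
  },
};

const listTools = { jsonrpc: "2.0", id: 2, method: "tools/list" };

// Each message is a single line of JSON on the server's stdin.
const payload = [initialize, listTools]
  .map((msg) => JSON.stringify(msg))
  .join("\n");
console.log(payload);
```

Piping such messages into `node index.js` is one way to inspect the raw tool list the server advertises.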

Tool Examples

1. Basic Conversation

```json
{
  "tool": "gemini_chat",
  "arguments": {
    "message": "Explain the difference between async/await and Promises in JavaScript",
    "temperature": 0.7,
    "max_tokens": 2048
  }
}
```

Use Cases:

  • Quick questions and answers

  • Code generation

  • Content writing

  • Brainstorming ideas

2. Contextual Dialogue

```json
{
  "tool": "gemini_chat_with_history",
  "arguments": {
    "messages": [
      { "role": "user", "parts": [{ "text": "What is dependency injection?" }] },
      { "role": "model", "parts": [{ "text": "Dependency injection is a design pattern..." }] },
      { "role": "user", "parts": [{ "text": "Show me an example in TypeScript" }] }
    ],
    "temperature": 0.8
  }
}
```

Use Cases:

  • Technical tutorials

  • Iterative problem-solving

  • Learning sessions

  • Complex debugging
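The history array in the example above can be built up incrementally. A hypothetical helper (not part of this repository) that enforces the user/model role alternation the Gemini API expects:

```javascript
// Illustrative helper for building the `messages` array used by
// gemini_chat_with_history. Roles must alternate between "user"
// and "model"; this helper is an assumption, not repository code.
function appendTurn(history, role, text) {
  if (role !== "user" && role !== "model") {
    throw new Error(`invalid role: ${role}`);
  }
  const last = history[history.length - 1];
  if (last && last.role === role) {
    throw new Error("consecutive turns must alternate roles");
  }
  return [...history, { role, parts: [{ text }] }];
}

let history = [];
history = appendTurn(history, "user", "What is dependency injection?");
history = appendTurn(history, "model", "Dependency injection is a design pattern...");
history = appendTurn(history, "user", "Show me an example in TypeScript");
console.log(history.length); // 3
```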

3. Code Analysis

```json
{
  "tool": "gemini_analyze_code",
  "arguments": {
    "code": "function processUser(data) {\n return data.name.toUpperCase();\n}",
    "language": "javascript",
    "analysis_type": "bugs"
  }
}
```

Analysis Types:

  • `bugs` - Find errors and potential issues

  • `optimize` - Performance and best practices

  • `explain` - Detailed code explanation

  • `review` - Comprehensive assessment
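Since `analysis_type` is a closed enum, it is worth validating before sending the tool call. A hypothetical request builder (the helper itself is illustrative; only the tool and parameter names come from this README):

```javascript
// Hypothetical builder for a gemini_analyze_code tool call.
// ANALYSIS_TYPES mirrors the enum documented in this README.
const ANALYSIS_TYPES = ["bugs", "optimize", "explain", "review"];

function buildAnalyzeRequest(code, { language, analysisType = "review" } = {}) {
  if (!ANALYSIS_TYPES.includes(analysisType)) {
    throw new Error(`analysis_type must be one of: ${ANALYSIS_TYPES.join(", ")}`);
  }
  return {
    tool: "gemini_analyze_code",
    arguments: { code, language, analysis_type: analysisType },
  };
}

const req = buildAnalyzeRequest("const x = 1;", {
  language: "javascript",
  analysisType: "bugs",
});
console.log(req.arguments.analysis_type); // bugs
```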


πŸ“š API Reference

Tool: gemini_chat

Description: Single-turn conversation with Gemini

Parameters:

| Parameter     | Type   | Required | Default | Description             |
|---------------|--------|----------|---------|-------------------------|
| `message`     | string | Yes      | -       | Your prompt or question |
| `temperature` | number | No       | 1.0     | Creativity (0.0-2.0)    |
| `max_tokens`  | number | No       | 8192    | Maximum response length |

Example Response:

```json
{
  "content": [
    {
      "type": "text",
      "text": "Here's a detailed explanation..."
    }
  ]
}
```
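All three tools return results in this MCP content-array shape, so a client can pull the text out uniformly. A minimal sketch (the helper is illustrative, not part of the server):

```javascript
// Extract the concatenated text from an MCP tool result.
// Assumes the content-array shape shown above; non-text parts are skipped.
function extractText(result) {
  return (result.content || [])
    .filter((part) => part.type === "text")
    .map((part) => part.text)
    .join("\n");
}

const response = {
  content: [{ type: "text", text: "Here's a detailed explanation..." }],
};
console.log(extractText(response)); // Here's a detailed explanation...
```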

Tool: gemini_chat_with_history

Description: Multi-turn conversation with context preservation

Parameters:

| Parameter     | Type   | Required | Default | Description          |
|---------------|--------|----------|---------|----------------------|
| `messages`    | array  | Yes      | -       | Conversation history |
| `temperature` | number | No       | 1.0     | Creativity (0.0-2.0) |

Message Format:

```typescript
{ role: "user" | "model", parts: [{ text: string }] }
```
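A quick shape check for individual messages (illustrative only, matching the format above) might look like:

```javascript
// Minimal shape validation for one history message, per the format above.
function isValidMessage(msg) {
  return (
    msg !== null &&
    typeof msg === "object" &&
    (msg.role === "user" || msg.role === "model") &&
    Array.isArray(msg.parts) &&
    msg.parts.every((part) => typeof part.text === "string")
  );
}

console.log(isValidMessage({ role: "user", parts: [{ text: "hi" }] })); // true
console.log(isValidMessage({ role: "system", parts: [] })); // false
```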

Tool: gemini_analyze_code

Description: AI-powered code analysis and review

Parameters:

| Parameter       | Type   | Required | Default  | Description                                         |
|-----------------|--------|----------|----------|-----------------------------------------------------|
| `code`          | string | Yes      | -        | Code to analyze                                     |
| `language`      | string | No       | -        | Programming language                                |
| `analysis_type` | enum   | No       | `review` | One of `bugs`, `optimize`, `explain`, `review`      |

Supported Languages: JavaScript, TypeScript, Python, Go, Rust, Java, C++, Ruby, PHP, Swift, Kotlin, and more


πŸ”§ Advanced Configuration

Environment Variables

```bash
# Required
export GEMINI_API_KEY="your-api-key"

# Optional (for custom deployments)
export MCP_SERVER_PORT="3000"   # If running as HTTP server
export LOG_LEVEL="info"         # debug, info, warn, error
```

Model Selection

To use a different Gemini model, edit `index.js`:

```javascript
const model = genAI.getGenerativeModel({
  model: "gemini-2.5-pro", // or "gemini-2.0-flash-exp"
  generationConfig: {
    temperature: 1.0,
    maxOutputTokens: 8192,
  },
});
```

Available Models:

  • `gemini-2.0-flash-exp` - Fast, efficient (default)

  • `gemini-2.5-pro` - Most capable (when available)

  • `gemini-pro` - Balanced performance
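If you would rather not edit the source for each model switch, one option is to read the model name from an environment variable with a fallback. Note that `GEMINI_MODEL` is an assumed variable name for this sketch, not one the server currently reads:

```javascript
// Hypothetical: select the Gemini model from an env var, falling back
// to the server's default. GEMINI_MODEL is an assumed variable name.
const DEFAULT_MODEL = "gemini-2.0-flash-exp";

function resolveModel(env = process.env) {
  const requested = (env.GEMINI_MODEL || "").trim();
  return requested || DEFAULT_MODEL;
}

console.log(resolveModel({ GEMINI_MODEL: "gemini-2.5-pro" })); // gemini-2.5-pro
console.log(resolveModel({})); // gemini-2.0-flash-exp
```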


πŸ› Troubleshooting

Common Issues

Issue: GEMINI_API_KEY is not set

Solution:

```bash
export GEMINI_API_KEY="your-key"

# Or add to ~/.bashrc or ~/.zshrc for persistence
echo 'export GEMINI_API_KEY="your-key"' >> ~/.bashrc
```

Issue: Warp doesn't detect the server

Checklist:

  1. Verify `~/.config/warp/mcp.json` exists and is valid JSON

  2. Ensure paths in the config are absolute or use `${HOME}`

  3. Completely quit and restart Warp (not just close the window)

  4. Check Warp logs: Settings β†’ Advanced β†’ View Logs

Issue: Gemini API errors

Possible causes:

  • Invalid API key

  • API key not activated

  • Billing not enabled on Google Cloud

  • Rate limits exceeded

Solution: Verify your API key at Google AI Studio

Debug steps:

```bash
export GEMINI_API_KEY="your-key"
node index.js 2>&1 | tee debug.log

# Then check debug.log for errors
```

πŸ”’ Security

Best Practices

  1. Never commit API keys - Use environment variables

  2. Rotate keys regularly - Generate new keys every 90 days

  3. Use key restrictions - Limit keys to specific APIs in Google Cloud Console

  4. Monitor usage - Check Google Cloud Console for unexpected activity

  5. Audit logs - Review MCP server logs periodically

Reporting Security Issues

Please report security vulnerabilities to info@cyberlinksec.com. Do not create public issues for security concerns.

See SECURITY.md for our full security policy.


🀝 Contributing

We welcome contributions! Here's how you can help:

Development Setup

```bash
# Clone the repo
git clone https://github.com/bobvasic/gemini-mcp-server.git
cd gemini-mcp-server

# Install dependencies
npm install

# Run in development mode
export GEMINI_API_KEY="your-key"
npm start
```

Contribution Guidelines

  1. Fork the repository

  2. Create a feature branch (`git checkout -b feature/amazing-feature`)

  3. Commit your changes (`git commit -m 'Add amazing feature'`)

  4. Push to the branch (`git push origin feature/amazing-feature`)

  5. Open a Pull Request

Code Standards

  • Follow existing code style

  • Add tests for new features

  • Update documentation

  • Ensure no hardcoded credentials

  • Use meaningful commit messages


πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

MIT License - Copyright (c) 2025 Gemini MCP Server Contributors

πŸ™ Acknowledgments


πŸ“Š Stats & Metrics

  • Response Time: < 2s average

  • Uptime: 99.9% (dependent on Google API)

  • Models Supported: 3+ Gemini variants

  • Languages: JavaScript/TypeScript

  • MCP Version: 1.0.4


πŸ—ΊοΈ Roadmap

  • Add streaming response support

  • Implement token usage tracking

  • Add conversation history persistence

  • Support for image inputs

  • Multi-language documentation

  • Docker container support

  • Health check endpoints

  • Prometheus metrics export


Built with ❀️ for the developer community

If this project helped you, please ⭐ star the repository!

Documentation β€’ Issues β€’ Discussions


