
Gemini Researcher


A lightweight, stateless MCP (Model Context Protocol) server that lets developer agents (Claude Code, GitHub Copilot) delegate deep repository analysis to the Gemini CLI. The server is read-only, returns structured JSON (as text content), and is optimized to reduce the calling agent's context and model usage.

Status: v1 complete. Core features are stable, but still early days. Feedback welcome!

If this project extended the lifespan of your usage window, ⭐ please consider giving it a star! :)

Primary goals:

  • Reduce agent context usage by letting Gemini CLI read large codebases locally and do its own research

  • Reduce calling-agent model usage by offloading heavy analysis to Gemini

  • Keep the server stateless and read-only for safety

Why use this?

Instead of copying entire files into your agent's context (burning tokens and cluttering the conversation), this server lets Gemini CLI read files directly from your project. Your agent sends a research query, Gemini does the heavy lifting with its large context window, and returns structured results. You save tokens, your agent stays focused, and complex codebase analysis becomes practical.

Verified clients: Claude Code, Cursor, VS Code (GitHub Copilot)

NOTE

It should work with other clients as well, but I haven't personally tested them yet. Please open an issue if you try it elsewhere!


Overview

Gemini Researcher accepts research-style queries over the MCP protocol and spawns the Gemini CLI in headless, read-only mode to perform large-context analysis on local files referenced with @path. Results are returned as pretty-printed JSON strings suitable for programmatic consumption by agent clients.
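Conceptually, servicing a stateless tool call comes down to shelling out to the Gemini CLI in non-interactive mode. The sketch below is illustrative only, not this project's actual implementation; the `-m`/`-p` flags and the default model name are assumptions, so check `gemini --help` for what your installed CLI version supports:

```javascript
// Sketch: build an argv for a headless Gemini CLI invocation.
// The -m (model) and -p (prompt) flags are assumptions for illustration.
function buildGeminiArgs(prompt, { model = "gemini-2.5-flash" } = {}) {
  if (!prompt || typeof prompt !== "string") {
    throw new TypeError("prompt must be a non-empty string");
  }
  return ["-m", model, "-p", prompt];
}

// A caller could then spawn the CLI with these args, e.g.:
//   const { execFile } = require("node:child_process");
//   execFile("gemini", buildGeminiArgs("Explain @src/auth.ts"), cb);
```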

Prerequisites

  • Node.js 18+ installed

  • Gemini CLI installed: npm install -g @google/gemini-cli

  • Gemini CLI authenticated (recommended: gemini → Login with Google) or set GEMINI_API_KEY

Quick checks:

```shell
node --version
gemini --version
```

Quickstart

Step 1: Validate environment

Run the setup wizard to verify Gemini CLI is installed and authenticated:

npx gemini-researcher init

Step 2: Configure your MCP client

The standard config below works in most tools:

```json
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": ["gemini-researcher"]
    }
  }
}
```

Add to your VS Code MCP settings (create .vscode/mcp.json if needed):

```json
{
  "servers": {
    "gemini-researcher": {
      "command": "npx",
      "args": ["gemini-researcher"]
    }
  }
}
```

Option 1: Command line (recommended)

Local (user-wide) scope

```shell
# Add the MCP server via CLI
claude mcp add --transport stdio gemini-researcher -- npx gemini-researcher

# Verify it was added
claude mcp list
```

Project scope

Navigate to your project directory, then run:

```shell
# Add the MCP server via CLI
claude mcp add --scope project --transport stdio gemini-researcher -- npx gemini-researcher

# Verify it was added
claude mcp list
```

Option 2: Manual configuration

Add to .mcp.json in your project root (project scope):

```json
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": ["gemini-researcher"]
    }
  }
}
```

Or add to ~/.claude/settings.json for local scope.

After adding the server, restart Claude Code and use /mcp to verify the connection.

Go to Cursor Settings -> Tools & MCP -> Add a Custom MCP Server. Add the following configuration:

```json
{
  "mcpServers": {
    "gemini-researcher": {
      "type": "stdio",
      "command": "npx",
      "args": ["gemini-researcher"]
    }
  }
}
```
NOTE

The server automatically uses the directory where your IDE opened the workspace (or where your terminal is running) as the project root. To analyze a different directory, optionally set PROJECT_ROOT:

Example

```json
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": ["gemini-researcher"],
      "env": {
        "PROJECT_ROOT": "/path/to/your/project"
      }
    }
  }
}
```

Step 3: Restart your MCP client

Step 4: Test it

Ask your agent: "Use gemini-researcher to analyze the project."

Tools

All tools return structured JSON (as MCP text content). Large responses are automatically chunked (~10KB per chunk) and cached for 1 hour.

| Tool | Purpose | When to use |
| --- | --- | --- |
| quick_query | Fast analysis with flash model | Quick questions about specific files or small code sections |
| deep_research | In-depth analysis with pro model | Complex multi-file analysis, architecture reviews, security audits |
| analyze_directory | Map directory structure | Understanding unfamiliar codebases, generating project overviews |
| validate_paths | Pre-check file paths | Verify files exist before running expensive queries |
| health_check | Diagnostics | Troubleshooting server/Gemini CLI issues |
| fetch_chunk | Get chunked responses | Retrieve remaining parts of large responses |
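The chunking behavior described above can be pictured as slicing the serialized response into fixed-size pieces keyed by a cache ID. A minimal sketch, not the server's actual code: the ~10KB size comes from the note above, while the `cache_` key format is invented for illustration:

```javascript
// Sketch: split a large response string into ~10KB chunks.
// The cache-key format below is made up for illustration.
const CHUNK_SIZE = 10 * 1024; // ~10KB per chunk, per the note above

function chunkResponse(text, chunkSize = CHUNK_SIZE) {
  const chunks = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return {
    cacheKey: `cache_${Date.now().toString(36)}`, // illustrative key format
    totalChunks: chunks.length,
    chunks,
  };
}
```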

Example workflows

Understanding a security vulnerability:

Agent: Use deep_research to analyze authentication flow across @src/auth and @src/middleware, focusing on security

Quick code explanation:

Agent: Use quick_query to explain the login flow in @src/auth.ts, be concise

Mapping an unfamiliar codebase:

Agent: Use analyze_directory on src/ with depth 3 to understand the project structure

quick_query

```json
{
  "prompt": "Explain @src/auth.ts login flow",
  "focus": "security",
  "responseStyle": "concise"
}
```

deep_research

```json
{
  "prompt": "Analyze authentication across @src/auth and @src/middleware",
  "focus": "architecture",
  "citationMode": "paths_only"
}
```

analyze_directory

```json
{ "path": "src", "depth": 3, "maxFiles": 200 }
```

validate_paths

```json
{ "paths": ["src/auth.ts", "README.md"] }
```

health_check

```json
{ "includeDiagnostics": true }
```

fetch_chunk

```json
{ "cacheKey": "cache_abc123", "chunkIndex": 2 }
```
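A client that receives a chunked response can loop over fetch_chunk until every part has arrived. A hypothetical sketch: `callTool` stands in for however your MCP client invokes a tool, and the `chunk`/`totalChunks`/`cacheKey` field names are assumptions about the response shape, not a documented wire format:

```javascript
// Sketch: reassemble a chunked response by repeatedly calling fetch_chunk.
// `callTool` and the field names are illustrative, not a verified API.
async function fetchAllChunks(callTool, first) {
  const parts = [first.chunk]; // chunk 0 arrives with the initial response
  for (let i = 1; i < first.totalChunks; i++) {
    const next = await callTool("fetch_chunk", {
      cacheKey: first.cacheKey,
      chunkIndex: i,
    });
    parts.push(next.chunk);
  }
  return parts.join("");
}
```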

Docker

You can also run gemini-researcher in a Docker container:

```shell
# Build the image
docker build -t gemini-researcher .

# Run the server (mount your project and provide API key)
docker run -i \
  -e GEMINI_API_KEY="your-api-key" \
  -v /path/to/your/project:/workspace \
  gemini-researcher
```

For MCP client configuration with Docker:

```json
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "GEMINI_API_KEY",
        "-v", "/path/to/your/project:/workspace",
        "gemini-researcher"
      ],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
NOTE

The -i flag is required for stdio transport. The container mounts your project at /workspace, which becomes the project root.

Troubleshooting (common issues)

  • GEMINI_CLI_NOT_FOUND: Install Gemini CLI: npm install -g @google/gemini-cli

  • AUTH_MISSING: Run gemini and authenticate, or set GEMINI_API_KEY

  • .gitignore blocking files: Gemini respects .gitignore by default. If you intentionally want ignored files included, toggle fileFiltering.respectGitIgnore via /settings in the Gemini CLI (note: this changes Gemini's behavior globally)

  • PATH_NOT_ALLOWED: All @path references must resolve inside the configured project root (process.cwd() by default). Use validate_paths to pre-check paths.

  • QUOTA_EXCEEDED: Server retries with fallback models; if all tiers are exhausted, reduce scope (use quick_query) or wait for quota reset.

Contributing

We welcome contributions! Please read the Contributing Guide to get started.


License

BSD-3-Clause License

