
Gemini Researcher


A lightweight, stateless MCP (Model Context Protocol) server that lets developer agents (Claude Code, GitHub Copilot) hand off deep repository analysis to the Gemini CLI. The server is read-only, returns structured JSON (as text content), and is designed to reduce the calling agent's context and model usage.

Status: v1 complete. Core features are stable, but still early days. Feedback welcome!

If this saved you tokens, ⭐ please consider giving it a star! :)

The primary goals:

  • Reduce agent context usage by letting Gemini CLI read large codebases locally and do its own research

  • Reduce calling-agent model usage by offloading heavy analysis to Gemini

  • Keep the server stateless and read-only for safety

Why use this?

Instead of copying entire files into your agent's context (burning tokens and cluttering the conversation), this server lets Gemini CLI read files directly from your project. Your agent sends a research query, Gemini reads and synthesizes using its large context window, and returns structured results. You save tokens, your agent stays focused, and complex codebase analysis becomes practical.

Verified clients: Claude Code, Cursor, VS Code (GitHub Copilot)

NOTE

It should work with other clients as well, but I haven't personally tested them yet. Please open an issue if you try it elsewhere!


Overview

Gemini Researcher accepts research-style queries over the MCP protocol and spawns the Gemini CLI in headless mode to analyze local files referenced with @path. Results are returned as formatted JSON strings for agent clients.

Runtime safety contract

Canonical runtime semantics are maintained in docs/runtime-contract.md.

Gemini Researcher enforces this invocation contract for analysis requests:

gemini [ -m <model> ] --output-format json --approval-mode default [--admin-policy <path>] -p "<prompt>"
  • The server uses -p/--prompt for explicit non-interactive headless execution.

  • The server does not use -y/--yolo in server-generated argv.

  • Read-only behavior is enforced via bundled admin policy by default.

  • Admin-policy strict enforcement can be relaxed with GEMINI_RESEARCHER_ENFORCE_ADMIN_POLICY=0 (or false|no|off).
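
The invocation contract above can be sketched as a small argv builder. This is an illustrative sketch only; buildGeminiArgs and GeminiOptions are hypothetical names, not the server's actual code.

```typescript
// Hypothetical sketch of how the documented invocation contract could be
// assembled. Names here are illustrative, not the server's real API.
interface GeminiOptions {
  model?: string;       // optional -m <model>
  adminPolicy?: string; // optional --admin-policy <path>
}

function buildGeminiArgs(prompt: string, opts: GeminiOptions = {}): string[] {
  const args: string[] = [];
  if (opts.model) args.push("-m", opts.model);
  // Always non-interactive JSON output with the default approval mode;
  // never -y/--yolo, per the contract above.
  args.push("--output-format", "json", "--approval-mode", "default");
  if (opts.adminPolicy) args.push("--admin-policy", opts.adminPolicy);
  args.push("-p", prompt); // explicit headless prompt
  return args;
}
```

The explicit -p form is what keeps execution headless and non-interactive.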

Read-only policy behavior

  • Default mode is strict fail-closed enforcement.

  • The bundled policy denies known mutating tools (for example: write_file, replace, run_shell_command).

  • The policy is deny-list based. If Gemini introduces new mutating tool names in future releases, policy updates may be required.

  • Extensions remain enabled by design. This is convenient, but means policy enforcement should remain enabled in production.

Auth and health semantics

When you run health_check with includeDiagnostics: true, diagnostics include auth state and enforcement status.

| authStatus      | Meaning                                                    | health_check impact |
| --------------- | ---------------------------------------------------------- | ------------------- |
| configured      | Auth confirmed (API key, Vertex, or successful CLI probe)  | Eligible for ok     |
| unauthenticated | Auth is definitively missing/invalid                       | degraded            |
| unknown         | Auth could not be confirmed due to ambiguous probe failure | degraded            |

health_check.status is:

  • ok only when Gemini is available, auth is configured, and strict read-only enforcement is satisfied (or intentionally relaxed by env toggle).

  • degraded for all setup/safety/auth uncertainty paths.
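
The rules above amount to a small decision function. A minimal sketch, assuming hypothetical names (healthStatus, AuthStatus) rather than the server's real internals:

```typescript
// Illustrative decision logic for health_check.status. All names here are
// hypothetical; the real server's internals may differ.
type AuthStatus = "configured" | "unauthenticated" | "unknown";

function healthStatus(
  geminiAvailable: boolean,
  auth: AuthStatus,
  enforcementOk: boolean,     // strict read-only enforcement satisfied
  enforcementRelaxed: boolean // intentionally relaxed via env toggle
): "ok" | "degraded" {
  const safetyOk = enforcementOk || enforcementRelaxed;
  return geminiAvailable && auth === "configured" && safetyOk
    ? "ok"
    : "degraded";
}
```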

Prerequisites

  • Node.js 18+ installed

  • Gemini CLI installed: npm install -g @google/gemini-cli

  • Gemini CLI authenticated (recommended: gemini → Login with Google) or set GEMINI_API_KEY

Quick checks:

node --version
gemini --version

Quickstart

Step 1: Validate environment

Run the setup wizard to verify Gemini CLI is installed and authenticated:

npx gemini-researcher init

Step 2: Configure your MCP client

This standard config works in most tools:

{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": [
        "gemini-researcher"
      ]
    }
  }
}

Add to your VS Code MCP settings (create .vscode/mcp.json if needed):

{
  "servers": {
    "gemini-researcher": {
      "command": "npx",
      "args": [
        "gemini-researcher"
      ]
    }
  }
}

For Claude Code:

Option 1: Command line (recommended)

Local (user-wide) scope

# Add the MCP server via CLI
claude mcp add --transport stdio gemini-researcher -- npx gemini-researcher 

# Verify it was added
claude mcp list

Project scope

Navigate to your project directory, then run:

# Add the MCP server via CLI
claude mcp add --scope project --transport stdio gemini-researcher -- npx gemini-researcher

# Verify it was added
claude mcp list

Option 2: Manual configuration

Add to .mcp.json in your project root (project scope):

{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": [
        "gemini-researcher"
      ]
    }
  }
}

Or add to ~/.claude/settings.json for local scope.

After adding the server, restart Claude Code and use /mcp to verify the connection.

Go to Cursor Settings -> Tools & MCP -> Add a Custom MCP Server. Add the following configuration:

{
  "mcpServers": {
    "gemini-researcher": {
      "type": "stdio",
      "command": "npx",
      "args": [
        "gemini-researcher"
      ]
    }
  }
}
NOTE

By default, the server uses the directory where your IDE opened the workspace (or where your terminal is running) as the project root. To analyze a different directory, set PROJECT_ROOT:

Example

{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": [
        "gemini-researcher"
      ],
      "env": {
        "PROJECT_ROOT": "/path/to/your/project"
      }
    }
  }
}

Step 3: Restart your MCP client

Step 4: Test it

Ask your agent: "Use gemini-researcher to analyze the project."

Tools

All tools return structured JSON (as MCP text content). Large responses are chunked (~10KB per chunk) and cached for 1 hour.
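
The chunking behavior can be sketched roughly like this (illustrative only; chunkResponse is a hypothetical name, and the real server's chunk size and cache scheme may differ in detail):

```typescript
// Illustrative ~10KB chunking of a large response string. Hypothetical
// helper; the real server's implementation may differ.
const CHUNK_SIZE = 10 * 1024;

function chunkResponse(text: string, chunkSize = CHUNK_SIZE): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  // Always return at least one chunk, even for an empty response.
  return chunks.length > 0 ? chunks : [""];
}
```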

| Tool              | Purpose                         | When to use                                                      |
| ----------------- | ------------------------------- | ---------------------------------------------------------------- |
| quick_query       | Fast analysis with flash model  | Quick questions about specific files or small code sections      |
| deep_research     | In-depth analysis with pro model| Complex multi-file analysis, architecture reviews, security audits |
| analyze_directory | Map directory structure         | Understanding unfamiliar codebases, generating project overviews |
| validate_paths    | Pre-check file paths            | Verify files exist before running expensive queries              |
| health_check      | Diagnostics                     | Troubleshooting server/Gemini CLI issues                         |
| fetch_chunk       | Get chunked responses           | Retrieve remaining parts of large responses                      |

Example workflows

Understanding a security vulnerability:

Agent: Use deep_research to analyze authentication flow across @src/auth and @src/middleware, focusing on security

Quick code explanation:

Agent: Use quick_query to explain the login flow in @src/auth.ts, be concise

Mapping an unfamiliar codebase:

Agent: Use analyze_directory on src/ with depth 3 to understand the project structure

quick_query

{
  "prompt": "Explain @src/auth.ts login flow",
  "focus": "security",
  "responseStyle": "concise"
}

deep_research

{
  "prompt": "Analyze authentication across @src/auth and @src/middleware",
  "focus": "architecture",
  "citationMode": "paths_only"
}

analyze_directory

{
  "path": "src",
  "depth": 3,
  "maxFiles": 200
}

validate_paths

{
  "paths": ["src/auth.ts", "README.md"]
}

health_check

{
  "includeDiagnostics": true
}

fetch_chunk

{
  "cacheKey": "cache_abc123",
  "chunkIndex": 2
}
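
A client can page through a chunked result roughly as follows. This is a client-side sketch: callTool stands in for whatever your MCP client library exposes, and the field names (cacheKey, totalChunks, content) are assumptions based on the examples above.

```typescript
// Sketch of paging through a chunked result. callTool is a placeholder for
// your MCP client's tool-invocation method; the field names are assumptions.
interface ChunkedResult {
  content: string;
  cacheKey: string;
  totalChunks: number;
}

type ToolCaller = (
  name: string,
  args: Record<string, unknown>
) => Promise<ChunkedResult>;

async function fetchAllChunks(
  callTool: ToolCaller,
  first: ChunkedResult
): Promise<string> {
  let text = first.content;
  // Fetch the remaining chunks and concatenate them in order.
  for (let i = 1; i < first.totalChunks; i++) {
    const next = await callTool("fetch_chunk", {
      cacheKey: first.cacheKey,
      chunkIndex: i,
    });
    text += next.content;
  }
  return text;
}
```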

Docker

A pre-built multi-platform Docker image is available on Docker Hub:

# Pull the image (works on Intel/AMD and Apple Silicon)
docker pull capybearista/gemini-researcher:latest

# Run the server (mount your project and provide API key)
docker run -i --rm \
  -e GEMINI_API_KEY="your-api-key" \
  -v /path/to/your/project:/workspace \
  capybearista/gemini-researcher:latest

For MCP client configuration with Docker:

{
  "mcpServers": {
    "gemini-researcher": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "GEMINI_API_KEY",
        "-v", "/path/to/your/project:/workspace",
        "capybearista/gemini-researcher:latest"
      ],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}
NOTE
  • The -i flag is required for stdio transport

  • The container mounts your project to /workspace (the project root)

  • Replace /path/to/your/project with your actual project path

  • Replace your-api-key with your actual Gemini API key (this is required for Docker usage)

Troubleshooting (common issues)

  • GEMINI_CLI_NOT_FOUND: Install Gemini CLI: npm install -g @google/gemini-cli

  • AUTH_MISSING: Run gemini, and authenticate or set GEMINI_API_KEY

  • AUTH_UNKNOWN: Auth could not be confirmed (often network/CLI probe failure). Verify gemini works interactively, then retry.

  • ADMIN_POLICY_MISSING: Reinstall package or verify policies/read-only-enforcement.toml exists in installed package.

  • ADMIN_POLICY_UNSUPPORTED: Upgrade Gemini CLI to v0.36.0+ (gemini --help should include --admin-policy).

  • GEMINI_RESEARCHER_ENFORCE_ADMIN_POLICY=0: Disables strict startup hard-fail policy enforcement. This weakens fail-closed guarantees.

  • .gitignore blocking files: Gemini respects .gitignore by default; toggle fileFiltering.respectGitIgnore in gemini /settings if you intentionally want ignored files included (note: this changes Gemini behavior globally)

  • PATH_NOT_ALLOWED: All @path references must resolve inside the configured project root (process.cwd() by default). Use validate_paths to pre-check paths.

  • QUOTA_EXCEEDED: Server retries with fallback models; if all tiers are exhausted, reduce scope (use quick_query) or wait for quota reset.
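
The PATH_NOT_ALLOWED rule boils down to a containment check like the following sketch (isInsideRoot is a hypothetical helper, not the server's actual implementation):

```typescript
import path from "node:path";

// Illustrative containment check mirroring the PATH_NOT_ALLOWED rule: a
// referenced path must resolve inside the project root.
function isInsideRoot(root: string, candidate: string): boolean {
  const resolvedRoot = path.resolve(root);
  const resolved = path.resolve(resolvedRoot, candidate);
  const rel = path.relative(resolvedRoot, resolved);
  // Inside the root iff the relative path doesn't escape upward.
  return rel === "" || (!rel.startsWith("..") && !path.isAbsolute(rel));
}
```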

Contributing

Read the Contributing Guide to get started.

License

BSD-3-Clause License

