OpenRouter Agents MCP Server

Research agents draw on models from multiple providers:
- Google Gemini (`gemini-2.0-flash-001`) for information gathering and analysis.
- OpenAI (`gpt-4o-search-preview`, `gpt-4o-mini-search-preview`) for search-driven research and analysis.
- Perplexity (`sonar-deep-research`, `sonar-pro`, `sonar-reasoning-pro`, `sonar-reasoning`) for advanced research tasks with specialized search capabilities.
[MAJOR UPDATE – August 12, 2025] An intelligent MCP server that orchestrates GPT‑5 / Gemini / Claude agents to research in parallel, indexing as it goes (PGlite + vectors), then synthesizes consensus with strict, URL‑backed citations.
Killer features
- Plan → parallelize → synthesize workflow with bounded parallelism
- Dynamic model catalog; supports Anthropic Sonnet‑4 and OpenAI GPT‑5 family
- Built‑in semantic KB (PGlite + pgvector) with backup, export/import, health, and reindex tools
- Lightweight web helpers: quick search and page fetch for context
- Robust streaming (SSE), per‑connection auth, clean logs
What’s new (v1.2)
- Local hybrid indexer (BM25 + optional vector rerank) with MCP tools: `index_texts`, `index_url`, `search_index`.
- Auto‑indexing during research: every saved report and fetched page can be indexed on the fly.
- Prompt/resource registration (MCP): `planning_prompt`, `synthesis_prompt`, and `mcp_spec_links`.
- Compact prompts option: minimizes tokens while enforcing explicit URL citations and confidence scoring.
- Planning model fallbacks and simplified routing per strategy.
Quick start
- Prereqs
  - Node 18+ (20 LTS recommended), npm, Git, OpenRouter API key
- Install dependencies
- Configure (`.env`)
- Run
  - STDIO (for Cursor/VS Code MCP): `npm run stdio`
  - HTTP/SSE (local daemon): `npm start`
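A typical setup might look like the following, assuming the standard npm scripts shown in the command map (the repository URL is a placeholder and the `.env` key name follows the OpenRouter prerequisite above):

```shell
# Clone and install (replace <repo-url> with this repository's URL)
git clone <repo-url> openrouter-agents
cd openrouter-agents
npm install

# Configure: the server reads your OpenRouter key from .env
echo 'OPENROUTER_API_KEY=sk-or-...' > .env

# Run in STDIO mode (for Cursor/VS Code MCP) ...
npm run stdio
# ... or as a local HTTP/SSE daemon
npm start
```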
Tools (high‑value)
- Research: `conduct_research`, `research_follow_up`
- Knowledge base: `get_past_research`, `list_research_history`, `get_report_content`
- DB ops: `backup_db` (tar.gz), `export_reports`, `import_reports`, `db_health`, `reindex_vectors`
- Models: `list_models`
- Web: `search_web`, `fetch_url`
- Indexer (new): `index_texts`, `index_url`, `search_index`, `index_status`
Notes
- Data lives locally under `PGLITE_DATA_DIR` (default `./researchAgentDB`). Backups are tarballs in `./backups`.
- Use `list_models` to discover current provider capabilities and ids.
Architecture at a glance
See `docs/diagram-architecture.mmd` (Mermaid). Render it to SVG with the Mermaid CLI if installed:
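One way to render it, via npx (the output path is illustrative):

```shell
npx -y @mermaid-js/mermaid-cli -i docs/diagram-architecture.mmd -o docs/diagram-architecture.svg
```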
How it differs from typical “agent chains”:
- Not just hardcoded handoffs; the plan is computed, then parallel agents search, then a synthesis step reasons over consensus, contradictions, and gaps.
- The system indexes what it reads during research, so subsequent queries get faster/smarter.
- Guardrails shape attention: explicit URL citations, [Unverified] labelling, and confidence scoring.
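The plan → parallelize → synthesize loop with bounded parallelism can be sketched in plain Node. This is an illustrative outline only; the function names and the hardcoded sub-query plan are not this server's actual internals:

```javascript
// Run at most `limit` tasks concurrently, preserving result order.
async function runBounded(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0; // shared cursor; safe because JS is single-threaded
  async function worker() {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}

async function research(query) {
  // "Plan": derive sub-queries (a real planner would compute these with an LLM).
  const subQueries = [`${query} overview`, `${query} recent changes`, `${query} criticisms`];
  // "Parallelize": bounded to 2 concurrent research agents.
  const findings = await runBounded(
    subQueries.map(q => async () => `findings for: ${q}`),
    2
  );
  // "Synthesize": a real step would reason over consensus, contradictions, and gaps.
  return findings.join('\n');
}
```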
Minimal‑token prompt strategy
- Compact mode strips preambles to essential constraints; everything else is inferred.
- Enforced rules: explicit URL citations, no guessing IDs/URLs, confidence labels.
- Short tool specs: use concise param names and rely on server defaults.
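For instance, a compact research call uses the short parameter names from the command map; wrapped in an MCP `tools/call` request it might look like this (the request envelope follows the MCP JSON-RPC convention and is shown for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "conduct_research",
    "arguments": { "q": "MCP status July 2025", "cost": "low", "aud": "intermediate", "fmt": "report", "src": true }
  }
}
```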
Common user journeys
- “Give me an executive briefing on MCP status as of July 2025.”
- Server plans sub‑queries, fetches authoritative sources, synthesizes with citations.
- Indexed outputs make related follow‑ups faster.
- “Find vision‑capable models and route images gracefully.”
- The `/models` catalog is discovered and filtered, a router template is generated, and the router falls back to text models.
- “Compare orchestration patterns for bounded parallelism.”
- Pulls OTel/Airflow/Temporal docs, produces a MECE synthesis and code pointers.
Cursor IDE usage
- Add this server in Cursor MCP settings pointing to `node src/server/mcpServer.js --stdio`.
- Use the new prompts (`planning_prompt`, `synthesis_prompt`) directly in Cursor to scaffold tasks.
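In Cursor this usually means an entry in `.cursor/mcp.json`; a sketch, assuming the repo is checked out locally (the server name and key value are placeholders):

```json
{
  "mcpServers": {
    "openrouter-agents": {
      "command": "node",
      "args": ["src/server/mcpServer.js", "--stdio"],
      "env": { "OPENROUTER_API_KEY": "sk-or-..." }
    }
  }
}
```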
FAQ (quick glance)
- How does it avoid hallucinations?
- Strict citation rules, [Unverified] labels, retrieval of past work, on‑the‑fly indexing.
- Can I disable features?
- Yes, individual features can be toggled via environment flags (see Advanced Configuration).
- Does it support streaming?
- Yes, SSE for HTTP; stdio for MCP.
Command Map (quick reference)
- Start (stdio): `npm run stdio`
- Start (HTTP/SSE): `npm start`
- Generate examples: `npm run gen:examples`
- List models (MCP): `list_models { refresh:false }`
- Research (compact): `conduct_research { q:"<query>", cost:"low", aud:"intermediate", fmt:"report", src:true }`
- Get past research: `get_past_research { query:"<query>", limit:5 }`
- Index URL (if enabled): `index_url { url:"https://..." }`
- Search index (if enabled): `search_index { query:"<q>", limit:10 }`
A Model Context Protocol server that enables conversational LLMs to delegate complex research tasks to specialized AI agents powered by various OpenRouter models, coordinated by a Claude orchestrator.
- 🚀 New Beta Branch (03-29-2025)
- 🌟 Support This Project
- Prerequisites
- Features
- How It Works
- Installation (Node.js / Standard)
- Cline / VS Code MCP Integration (Recommended)
- Available Models
- Customization
- Alternative Installation: HTTP/SSE for Claude Desktop App
- Persistence & Data Storage
- Troubleshooting
- Advanced Configuration
- Testing Tools
- License