Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Deep Research MCP Server research the latest advancements in quantum computing with depth 3 and breadth 2".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
DISCLAIMER
This repo is an experiment in agent coding. 95% of the code is written by LLMs.
Open Deep Research MCP Server
An AI-powered research assistant that performs deep, iterative research on any topic. It combines search engines, web scraping, and AI to explore topics in depth and generate comprehensive reports. Available as a Model Context Protocol (MCP) tool or standalone CLI. Look at exampleout.md to see what a report might look like.
Quick Start
Clone and install:
git clone https://github.com/Ozamatash/deep-research
cd deep-research
npm install
Set up environment in .env.local:
# Copy the example environment file
cp .env.example .env.local
Build:
# Build the server
npm run build
Run the CLI version:
npm run start
Test the MCP server with Claude Desktop:
Follow the guide at the bottom of the server quickstart to add the server to Claude Desktop:
https://modelcontextprotocol.io/quickstart/server
For remote servers: Streamable HTTP
npm run start:http
The server runs on http://localhost:3000/mcp without session management.
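With the HTTP transport, clients talk to the server by POSTing JSON-RPC messages to the /mcp endpoint. A minimal sketch of such a call is below; the method `tools/call` is the standard MCP request for invoking a tool, but the tool name "deep-research" and its argument names are illustrative assumptions, not this server's confirmed schema.

```typescript
// Build a JSON-RPC tool-call message for the MCP Streamable HTTP endpoint.
// NOTE: the tool name and argument shape are assumptions for illustration.
function buildToolCall(query: string, depth: number, breadth: number) {
  return {
    jsonrpc: "2.0" as const,
    id: 1,
    method: "tools/call",
    params: {
      name: "deep-research",
      arguments: { query, depth, breadth },
    },
  };
}

async function callServer(query: string) {
  // Streamable HTTP is a plain POST of the JSON-RPC message.
  return fetch("http://localhost:3000/mcp", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Accept: "application/json, text/event-stream",
    },
    body: JSON.stringify(buildToolCall(query, 2, 2)),
  });
}
```

Since the server runs without session management, each request stands alone; no session header needs to be carried between calls.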
Related MCP server: Octagon Deep Research MCP
Features
Performs deep, iterative research by generating targeted search queries
Controls research scope with depth (how deep) and breadth (how wide) parameters
Evaluates source reliability with detailed scoring (0-1) and reasoning
Prioritizes high-reliability sources (≥0.7) and verifies less reliable information
Generates follow-up questions to better understand research needs
Produces detailed markdown reports with findings, sources, and reliability assessments
Available as a Model Context Protocol (MCP) tool for AI agents
For now, the MCP version doesn't ask follow-up questions
Natural-language source preferences (avoid listicles, forums, affiliate reviews, specific domains)
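The depth/breadth recursion and the 0.7 reliability threshold described above can be sketched as follows. This is a simplified illustration of the control flow, not the repo's actual implementation; the function and type names (`deepResearch`, `SearchFn`, `ScoreFn`) are placeholders.

```typescript
// Illustrative sketch of the depth/breadth research loop.
// Breadth bounds how many findings each level keeps; depth bounds recursion.
interface Learning { text: string; reliability: number; }

type SearchFn = (query: string) => string[]; // returns raw findings
type ScoreFn = (finding: string) => number;  // reliability score in [0, 1]

function deepResearch(
  query: string,
  depth: number,
  breadth: number,
  search: SearchFn,
  score: ScoreFn,
): Learning[] {
  if (depth <= 0) return [];
  const findings = search(query).slice(0, breadth);
  const learnings = findings.map((text) => ({ text, reliability: score(text) }));
  // High-reliability sources (>= 0.7) are accepted as-is; weaker findings
  // spawn follow-up queries one level deeper for verification.
  const followUps = learnings
    .filter((l) => l.reliability < 0.7)
    .flatMap((l) =>
      deepResearch(`verify: ${l.text}`, depth - 1, breadth, search, score),
    );
  return [...learnings, ...followUps];
}
```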
Model Selection (OpenAI, Anthropic, Google, xAI)
Pick a provider and model per run.
CLI: you will be prompted for provider and model. Example:
openai + gpt-5.2.
MCP/HTTP: pass model, e.g. openai:gpt-5.2 (also accepts openai/gpt-5.2).
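Parsing the two accepted model-string forms is straightforward; a sketch is below. The function name `parseModel` is an assumption for illustration, not the repo's actual API.

```typescript
// Parse "provider:model" or "provider/model" into its two parts.
// Accepts both separators mentioned above, e.g. "openai:gpt-5.2" and "openai/gpt-5.2".
function parseModel(spec: string): { provider: string; model: string } {
  const sep = spec.includes(":") ? ":" : "/";
  const idx = spec.indexOf(sep);
  if (idx === -1) {
    throw new Error(`expected provider:model or provider/model, got "${spec}"`);
  }
  return { provider: spec.slice(0, idx), model: spec.slice(idx + 1) };
}
```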
Set the corresponding API key in .env.local:
OPENAI_API_KEY=...
ANTHROPIC_API_KEY=...
GOOGLE_API_KEY=...
XAI_API_KEY=...
Optionally, set default models per provider:
OPENAI_MODEL=gpt-5.2
ANTHROPIC_MODEL=claude-opus-4-5
GOOGLE_MODEL=gemini-3-pro-preview
XAI_MODEL=grok-4-1-fast-reasoning
If you use a non-default OpenAI endpoint:
OPENAI_ENDPOINT=https://api.openai.com/v1
How It Works
flowchart TB
subgraph Input
Q[User Query]
B[Breadth Parameter]
D[Depth Parameter]
FQ[Feedback Questions]
end
subgraph Research[Deep Research]
direction TB
SQ[Generate SERP Queries]
SR[Search]
RE[Source Reliability Evaluation]
PR[Process Results]
end
subgraph Results[Research Output]
direction TB
L((Learnings with
Reliability Scores))
SM((Source Metadata))
ND((Next Directions:
Prior Goals,
New Questions))
end
%% Main Flow
Q & FQ --> CQ[Combined Query]
CQ & B & D --> SQ
SQ --> SR
SR --> RE
RE --> PR
%% Results Flow
PR --> L
PR --> SM
PR --> ND
%% Depth Decision and Recursion
L & ND --> DP{depth > 0?}
DP -->|Yes| SQ
%% Final Output
DP -->|No| MR[Markdown Report]
%% Styling
classDef input fill:#7bed9f,stroke:#2ed573,color:black
classDef process fill:#70a1ff,stroke:#1e90ff,color:black
classDef output fill:#ff4757,stroke:#ff6b81,color:black
classDef results fill:#a8e6cf,stroke:#3b7a57,color:black,width:150px,height:150px
class Q,B,D,FQ input
class SQ,SR,RE,PR process
class MR output
class L,SM,ND results
Advanced Setup
Using Local Firecrawl (Free Option)
Instead of using the Firecrawl API, you can run a local instance. You can use the official repo or my fork, which uses SearXNG as the search backend to avoid needing a search API key:
Set up local Firecrawl:
git clone https://github.com/Ozamatash/localfirecrawl
cd localfirecrawl
# Follow the setup in the localfirecrawl README
Update .env.local:
FIRECRAWL_BASE_URL="http://localhost:3002"
Optional: Observability
Add observability to track research flows, queries, and results using Langfuse:
# Add to .env.local
LANGFUSE_PUBLIC_KEY="your_langfuse_public_key"
LANGFUSE_SECRET_KEY="your_langfuse_secret_key"
The app works normally without observability if no Langfuse keys are provided.
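The "works normally without keys" behavior amounts to an env-based toggle: tracing is enabled only when both keys are set. A minimal sketch of that check is below; the function name is an assumption, and the real repo's wiring may differ.

```typescript
// Observability is opt-in: enabled only when both Langfuse keys are present.
function observabilityEnabled(env: Record<string, string | undefined>): boolean {
  return Boolean(env.LANGFUSE_PUBLIC_KEY && env.LANGFUSE_SECRET_KEY);
}

// In the app, something like:
// if (observabilityEnabled(process.env)) {
//   // initialize the Langfuse client with the two keys
// }
```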
License
MIT License