Integrates Bing, Brave, and DuckDuckGo to provide search results and content for web research and data extraction tasks.
1. Click on "Install Server".
2. Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
3. In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@mcp-open-webresearch Perform deep research on the latest breakthroughs in fusion energy".
4. That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
# mcp-open-webresearch

Proxy-aware Model Context Protocol (MCP) server for web searching and content extraction. Designed to be robust and compatible with various network environments, including those using SOCKS and HTTP proxies.
## Features

- Dynamic Engine Discovery: Engines are loaded dynamically from the `src/infrastructure/search/` directory. Adding a new engine requires only a new folder and file, without modifying core logic.
- Multi-Engine Search: Aggregates results from Bing, DuckDuckGo, and Brave.
- Deep Research (`search_deep`): Recursive research agent that performs multi-round searching, citation extraction, and answer synthesis.
- Ephemeral Downloads: In-memory storage for Deep Search reports using a 100 MB bounded LRU cache with 10-minute auto-expiration.
- Centralized Throttling: Rate-limit management (search and pagination cooldowns) across prioritized engines.
- Smart Fetch: Configurable fetching utility (`impit`) with two operational profiles:
  - Browser Mode: Includes modern browser headers (User-Agent, Client Hints) for compatibility with sites requiring browser-standard requests.
  - Standard Mode: Uses a minimal HTTP client profile for environments where browser-like identification is not required.
- Result Sampling: Optional LLM-based filtering to assess result relevance.
- Content Extraction: Webpage visiting and markdown extraction tool (`visit_webpage`) using a headless browser.
- Proxy Support: Full support for SOCKS5, HTTPS, and HTTP proxies.
- Configuration: Configurable via environment variables and CLI arguments.
- Deployment: Docker images available for production and testing.
## Credits

This project includes work from the following contributors:

- Manav Kundra: Initial implementation of the server.
- Aasee: Added multiple search engines and Docker support.
- mzxrai: Core logic for the `visit_page` tool.
## Installation & Quick Start

### Docker (Recommended)
Latest Stable Release:
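As a sketch, pulling and running a published image usually looks like this (the image path below is a placeholder, not the project's confirmed registry name):

```bash
# Placeholder image path -- substitute the project's actual published image.
docker pull ghcr.io/OWNER/mcp-open-webresearch:latest
docker run -i --rm ghcr.io/OWNER/mcp-open-webresearch:latest
```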
Test/Debug Image:
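Similarly, a test/debug tag would typically be pulled by name (again, a placeholder path and tag):

```bash
# Placeholder image path and tag for the test/debug build.
docker pull ghcr.io/OWNER/mcp-open-webresearch:test
```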
### Local Installation
To run the server locally (e.g., in Claude Desktop or Cline):
Replace `/absolute/path/to/project` with your actual project path.
Configuration:
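A minimal stdio configuration sketch, assuming the server key and a `dist/index.js` entry point (both are assumptions; adjust to the actual build output):

```json
{
  "mcpServers": {
    "mcp-open-webresearch": {
      "command": "node",
      "args": ["/absolute/path/to/project/dist/index.js"]
    }
  }
}
```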
### Remote Server (Streamable HTTP)

Endpoint: `http://localhost:3000/mcp`

Configuration:
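For clients that support streamable HTTP servers, a configuration sketch might look like the following (the `"type"` value and server key are assumptions and vary by client):

```json
{
  "mcpServers": {
    "mcp-open-webresearch": {
      "type": "streamableHttp",
      "url": "http://localhost:3000/mcp"
    }
  }
}
```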
## Client Configuration & Timeouts
Deep Search processes can take several minutes to complete. Some MCP clients (like Cline and RooCode) have a default timeout of 60 seconds, which will cause the operation to fail.
You MUST configure a higher timeout in your client settings.
### Cline (`cline_mcp_settings.json`)
Add the "timeout" parameter (in seconds). Recommended: 1800 (30 minutes).
### RooCode (`mcp_settings.json`)

RooCode also respects the `timeout` parameter.
### Antigravity / Windsurf (`mcp_config.json`)

Antigravity and Windsurf handle long-running tools natively, but if your client lets you configure a timeout, it is best practice to do so.
## Developer Guide: Adding New Engines

To add a new search engine:

1. Create Directory: `src/infrastructure/search/{engine_name}/`
2. Implement Logic: Create `{engine_name}.ts` with the fetching/parsing logic.
3. Export Interface: Create `index.ts` exporting the `SearchEngine` interface:

```typescript
import type { SearchEngine } from "../../../types/search.js";
import { searchMyEngine } from "./my_engine.js";
import { isThrottled } from "../../throttle.js"; // Optional

export const engine: SearchEngine = {
  name: "my_engine",
  search: searchMyEngine,
  isRateLimited: () => isThrottled("my_engine"),
};
```

4. Restart: The server will automatically discover and load the new engine.
## Build and Run
### Locally
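Assuming conventional npm scripts (the exact script names are not confirmed by this README):

```bash
npm install     # install dependencies
npm run build   # compile TypeScript
npm start       # run the compiled server
```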
### Docker
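A local image build-and-run sketch (the image tag is chosen for illustration):

```bash
docker build -t mcp-open-webresearch .
docker run -i --rm mcp-open-webresearch
```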
## Testing
### Unit & E2E Tests
Uses Vitest for testing. Includes dynamic contract tests for all discovered engines.
### Compliance Tests
Verifies the "Smart Fetch" behavior (User-Agent header usage) against a local mock server.
### Infrastructure Validation
Validates Docker image builds and basic functionality.
## Available Scripts

| Command | Description |
| --- | --- |
|  | Compiles TypeScript to the build output. |
|  | Recompiles on file changes. |
|  | Launches the MCP inspector UI. |
|  | Runs the compiled server. |
|  | Runs local tests. |
|  | Runs tests in a Docker container. |
|  | Validates Docker images. |
|  | Generates self-signed certificates for testing. |
## Configuration

Configuration is managed via environment variables or CLI arguments.

| Variable | Default | Description |
| --- | --- | --- |
| `PORT` | `3000` | Server port. |
| `PUBLIC_URL` |  | Public URL for download links. |
|  |  | Enable CORS. |
|  |  | Allowed CORS origin. |
|  |  | Default engines list. |
|  |  | Enable proxy support. |
|  | - | HTTP Proxy URL. |
|  | - | HTTPS Proxy URL. |
|  | - | SOCKS5 Proxy URL (Highest Priority). |
| `SAMPLING` |  | Enable result sampling. |
| `SKIP_IDE_SAMPLING` |  | Prefer external API over IDE. |
|  | - | External LLM API base URL. |
| `LLM_API_KEY` | - | External LLM API key. |
|  | - | External LLM model name. |
|  |  | Timeout for external LLM calls. |
|  |  | Max research iterations. |
|  |  | Results per engine per round. |
|  |  | Threshold to stop research early. |
|  |  | Max URLs to visit for citations. |
|  |  | Download expiration time (minutes). |
|  |  | Log debug output to stdout. |
|  |  | Log debug output to file. |
### CLI Arguments

CLI arguments override environment variables.

| Argument | Description |
| --- | --- |
|  | Port to listen on. |
|  | Enable debug logging (stdout). |
|  | Enable debug logging (file). |
|  | Enable CORS. |
|  | Proxy URL (http, https, socks5). |
|  | Comma-separated list of engines. |
|  | Enable sampling. |
|  | Disable sampling. |
## Search Pipeline & Scoring
The server uses a multi-stage pipeline to aggregate and refine search results:
### 1. Multi-Engine Retrieval
Concurrent requests are dispatched to all configured engines (Bing, Brave, DuckDuckGo). Raw results are collected into a single pool.
### 2. Consensus Scoring & Deduplication
Results are grouped by their canonical URL (protocol/www-agnostic hash).
- Deduplication: Multiple entries for the same URL are merged.
- Scoring: A `consensusScore` is calculated for each unique URL (see the sketch below):
  - Inverted Rank Sum: Sum of inverted ranks ($1/rank$) across engines. Higher placement results in a higher score.
  - Engine Boost: Multiplies the sum by the number of unique engines that identified the URL. This prioritizes multi-provider agreement.
- Sorting: The final list is sorted by the calculated `consensusScore` in descending order.
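The scoring rule stated compactly in code. This is a minimal TypeScript sketch of the formula described above; the `RankedResult` shape and function name are illustrative, not the project's actual API:

```typescript
// One entry per (engine, URL) placement.
interface RankedResult {
  url: string;    // canonical URL (protocol/www-agnostic)
  engine: string; // engine that returned it
  rank: number;   // 1-based position in that engine's results
}

// consensusScore = (sum of 1/rank across placements) * (# of unique engines)
function consensusScore(entries: RankedResult[]): number {
  const rankSum = entries.reduce((sum, e) => sum + 1 / e.rank, 0);
  const engineCount = new Set(entries.map((e) => e.engine)).size;
  return rankSum * engineCount;
}

// Example: a URL ranked #1 on Bing and #2 on Brave scores (1 + 0.5) * 2 = 3.
```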
### 3. LLM Sampling (Optional)

If `SAMPLING=true`, the top-ranked results are sent to an LLM to evaluate semantic relevance to the query.
- Filtering: Sampling acts as a binary filter. It removes results identified as irrelevant (spam, off-topic).
- Final Set: The original consensus scores are preserved. Only the composition of the list changes.
## LLM Sampling Strategy
When sampling is enabled, the server follows a tiered resolution logic to select which LLM to use:
| `SKIP_IDE_SAMPLING` | IDE Available | API Configured | Resolution |
| --- | --- | --- | --- |
| ❌ | ✅ | ✅ | IDE Sampling |
| ❌ | ✅ | ❌ | IDE Sampling |
| ❌ | ❌ | ✅ | External API |
| ✅ | ✅ OR ❌ | ✅ | External API |
| ✅ OR ❌ | ❌ | ❌ | No Sampling |
You can use a model without an API key; the `LLM_API_KEY` value is optional.

Deep Search Compatibility: The `search_deep` tool strictly requires LLM capability (either via IDE or API). If neither is available, the tool will still appear in the MCP tool list but will throw an error upon execution.
## Tools Documentation
### `search_deep`
Recursive research agent for deep investigation. Searches multiple sources, extracts citations, and synthesizes a comprehensive answer.
Requires LLM Sampling capability.
Input:
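An illustrative call, with the parameter name `query` assumed rather than confirmed:

```json
{ "query": "latest breakthroughs in fusion energy" }
```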
Output: A structured Markdown report including a reference list. If `PUBLIC_URL` is configured, a download URL at the top of the output allows retrieving the report as a file.
### `search_web`
Performs a search across configured engines.
Input:
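For example (again, `query` is an assumed parameter name):

```json
{ "query": "proxy-aware MCP server" }
```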
### `visit_webpage`
Visits a URL and returns markdown content.
Input:
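For example (`url` is an assumed parameter name):

```json
{ "url": "https://example.com/article" }
```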
### `set_engines`
Updates default search engines.
Input:
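For example (the parameter name and engine identifiers are assumptions):

```json
{ "engines": ["bing", "duckduckgo", "brave"] }
```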
### `get_engines`
Returns configured search engines.
### `set_sampling`
Enables or disables result sampling.
Input:
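For example (`enabled` is an assumed parameter name):

```json
{ "enabled": true }
```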
### `get_sampling`
Returns current sampling status.
## 📥 Ephemeral Downloads

Deep Search results are served via an in-memory buffer cache.

- Storage: Reports are stored as `Buffer` objects in the C++ heap to avoid V8 string memory limits.
- Expiration: Each individual entry expires exactly 10 minutes after creation. Access operations (`get`) do not extend the time-to-live (TTL).
- Memory Safety: The cache is bounded by a 100 MB ceiling. When the limit is reached, a Least Recently Used (LRU) eviction policy removes the oldest entries (see the sketch below).
- URL Configuration: Link generation depends on the `PUBLIC_URL` variable to ensure accessible download endpoints in proxied environments.
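A minimal TypeScript sketch of these cache semantics, assuming Node's global `Buffer`; names and layout are illustrative, not the project's implementation:

```typescript
const MAX_BYTES = 100 * 1024 * 1024; // 100 MB ceiling
const TTL_MS = 10 * 60 * 1000;       // 10-minute expiration

interface Entry { data: Buffer; expiresAt: number; }

// Map preserves insertion order, which we use as the eviction order.
const cache = new Map<string, Entry>();
let totalBytes = 0;

export function put(id: string, data: Buffer): void {
  // Evict oldest entries until the new report fits under the ceiling.
  while (totalBytes + data.length > MAX_BYTES && cache.size > 0) {
    const oldest = cache.keys().next().value as string;
    totalBytes -= cache.get(oldest)!.data.length;
    cache.delete(oldest);
  }
  cache.set(id, { data, expiresAt: Date.now() + TTL_MS });
  totalBytes += data.length;
}

export function get(id: string): Buffer | undefined {
  const entry = cache.get(id);
  if (!entry) return undefined;
  if (Date.now() > entry.expiresAt) { // expired: drop and miss
    totalBytes -= entry.data.length;
    cache.delete(id);
    return undefined;
  }
  // Per the docs above, get() does not refresh the TTL.
  return entry.data;
}
```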
## Roadmap

- Deep Search: Recursive research and synthesis engine.
- Keyless GitHub Adapter: Implement adapter for GitHub content access.
## License

Apache License 2.0. See `LICENSE`.