mcp-research
A standalone MCP (Model Context Protocol) server providing web research tools. Three battle-tested tools for AI assistants: search the web, fetch & convert pages to markdown, and run compound multi-source research — all via the MCP stdio protocol.
Tools
| Tool | Description |
|------|-------------|
| `web_search` | 3-tier search cascade: Brave API → DuckDuckGo → HTML scraper |
| `fetch_url` | Fetch any URL → clean markdown, with SSRF protection and 24h cache |
| `research` | Compound pipeline: query rewrite → search → parallel fetch → summarize → synthesize |
All tools are read-only — they fetch and transform public web content, never modify anything.
Install
```bash
pip install mcp-research
```

Or run directly with uvx (zero-install):

```bash
uvx mcp-research
```

Configuration
All configuration is via environment variables — no config files needed.
| Variable | Default | Description |
|----------|---------|-------------|
| `BRAVE_API_KEY` | (empty) | Brave Search API key. Falls back to DuckDuckGo if unset. |
| `OLLAMA_URL` | | Ollama endpoint for summarization/synthesis. Set empty to disable. |
| | | Model to use for summarization and synthesis. |
| | | URL fetch cache directory. |
| | | Cache TTL in hours. |
| | | Search log directory (NDJSON). |
| | | Default max search results. |
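Since all configuration comes from environment variables, startup can reduce to a single lookup pass. A minimal sketch of that pattern follows; only `BRAVE_API_KEY` and `OLLAMA_URL` are names confirmed by the config examples below, and the fallback Ollama URL is an illustrative assumption, not the package's documented default:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    brave_api_key: str  # empty string disables the Brave tier
    ollama_url: str     # empty string disables summarization/synthesis

def load_config() -> Config:
    # Hypothetical loader: the localhost default below is an assumption.
    return Config(
        brave_api_key=os.environ.get("BRAVE_API_KEY", ""),
        ollama_url=os.environ.get("OLLAMA_URL", "http://localhost:11434"),
    )
```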
Usage with Claude Code
Add to your Claude Code MCP config (`~/.claude/settings.json` or project `.mcp.json`):

```json
{
  "mcpServers": {
    "research": {
      "command": "uvx",
      "args": ["mcp-research"],
      "env": {
        "BRAVE_API_KEY": "BSA...",
        "OLLAMA_URL": "http://localhost:11434"
      }
    }
  }
}
```

Usage with Claude Desktop
Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "research": {
      "command": "uvx",
      "args": ["mcp-research"],
      "env": {
        "BRAVE_API_KEY": "BSA..."
      }
    }
  }
}
```

Tool Details
web_search
```python
web_search(query, max_results=5, summarize=False, auto_fetch_top=False)
```

Searches the web using a 3-tier cascade for maximum reliability:

1. Brave Search API — fast, high quality (requires `BRAVE_API_KEY`)
2. DuckDuckGo library — no API key needed, retries on rate limit
3. DuckDuckGo HTML scraper — last-resort fallback
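The cascade above amounts to a fall-through loop over tier functions: try each in order, move on when one fails or comes back empty. This is a sketch of that technique with injected tiers, not the package's actual internals:

```python
from typing import Callable

SearchTier = Callable[[str, int], list]

def cascade_search(query: str, max_results: int, tiers: list) -> list:
    """Try each search tier in order; fall through on error or no results."""
    for tier in tiers:
        try:
            results = tier(query, max_results)
        except Exception:
            continue  # e.g. missing API key, rate limit, network failure
        if results:
            return results[:max_results]
    return []  # every tier failed or returned nothing
```

In the real server the list would hold the Brave, DuckDuckGo-library, and HTML-scraper callables in that order.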
Options:
- `summarize`: Use Ollama to summarize results (requires running Ollama)
- `auto_fetch_top`: Also fetch and return the full content of the top result
fetch_url
```python
fetch_url(url, summarize=False, max_chars=50000)
```

Fetches a URL and converts it to clean markdown:

- SSRF protection: Blocks localhost, private IPs, non-HTTP schemes
- Smart retry: Exponential backoff on 429/5xx, per-hop redirect validation
- 24h cache: SHA-256 keyed, configurable TTL
- Content support: HTML → markdown, JSON → code block, binary → rejected
- Smart truncation: Breaks at heading/paragraph boundaries, not mid-text
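An SSRF guard of the kind described can be built entirely from the standard library: validate the scheme, resolve the host, and reject any address in a private, loopback, link-local, or reserved range. This sketches the general technique, not the package's actual check:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_url_allowed(url: str) -> bool:
    """Reject non-HTTP(S) schemes and hosts resolving to non-public IPs."""
    parts = urlparse(url)
    if parts.scheme not in ("http", "https") or not parts.hostname:
        return False
    try:
        # getaddrinfo handles both IP literals and DNS names.
        infos = socket.getaddrinfo(parts.hostname, None)
    except socket.gaierror:
        return False  # unresolvable host
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

For per-hop redirect validation, the same check would be re-run against each redirect target before following it.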
research
```python
research(query, depth="standard", context="")
```

Compound research pipeline:

1. Query rewrite — Ollama optimizes your question into search keywords
2. Web search — finds relevant pages (with zero-result retry expansion)
3. Parallel fetch — fetches top N pages concurrently
4. Summarize — Ollama summarizes each page
5. Synthesize — Ollama produces a final cited answer
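The five steps read naturally as an async sequence whose fetch stage runs concurrently. In this illustrative sketch every stage is an injected async callable (placeholders, not the package's real functions), and a failed fetch is dropped rather than aborting the run:

```python
import asyncio

async def research_pipeline(query, *, rewrite, search, fetch,
                            summarize, synthesize, top_n=5):
    """Hypothetical sketch of rewrite -> search -> fetch -> summarize -> synthesize."""
    query = await rewrite(query)               # 1. query rewrite
    results = await search(query)              # 2. web search
    pages = await asyncio.gather(              # 3. parallel fetch of top N
        *(fetch(r["url"]) for r in results[:top_n]),
        return_exceptions=True,                #    one bad page can't sink the run
    )
    summaries = [await summarize(p)            # 4. per-page summaries
                 for p in pages if not isinstance(p, Exception)]
    return await synthesize(query, summaries)  # 5. final cited answer
```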
Depth levels:
| Depth | Pages | Synthesis |
|-------|-------|-----------|
| | 2 | No |
| `standard` | 5 | Yes |
| | 10 | Yes |
All steps gracefully degrade without Ollama — you still get search results and raw page content.
Development
```bash
git clone https://github.com/MABAAM/Maibaamcrawler.git
cd Maibaamcrawler
pip install -e .
python -m mcp_research
```

License
MIT