This server enables in-depth research on topics using local LLMs via Ollama, integrated with web search capabilities.
- Research Process: Perform deep research by generating search queries, gathering web information, summarizing results, and iteratively improving the summary through multiple cycles.
- Customization: Configure research parameters including number of iterations (maxLoops), LLM model choice, and search API (Tavily or Perplexity).
- Monitoring: Track the status of ongoing research processes and integrate with LangSmith for detailed tracing and debugging (an example environment setup is sketched after this list).
- Results & Integration: Receive final research summaries in markdown format with cited sources, stored as persistent MCP resources accessible via `research://{topic}` URIs for reuse in conversations.
- Deployment: Supports standard Node.js/Python installation or Docker deployment.
- Based on LangChain Ollama Deep Researcher, providing workflow orchestration for multi-step research tasks.
- Referenced as part of the research workflow implementation, though listed as requiring additional validation and re-integration.
- Enables research capabilities using any local LLM hosted by Ollama, supporting models like deepseek-r1 and llama3.2.
- Retrieves web search results using the Perplexity API for research queries as part of the iterative research process.
Ollama Deep Researcher DXT Extension
Overview
Ollama Deep Researcher is a Desktop Extension (DXT) that enables advanced topic research using web search and LLM synthesis, powered by a local MCP server. It supports configurable research parameters, status tracking, and resource access, and is designed for seamless integration with the DXT ecosystem.
- Research any topic using web search APIs and LLMs (Ollama, DeepSeek, etc.)
- Configure max research loops, LLM model, and search API
- Track status of ongoing research
- Access research results as resources via MCP protocol
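Once a research run completes, a client can read the stored summary back with a standard MCP `resources/read` request. A minimal sketch (the topic value, and how it is encoded into the URI, are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "resources/read",
  "params": {
    "uri": "research://quantum-error-correction"
  }
}
```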
Features
- Implements the MCP protocol over stdio for local, secure operation
- Defensive programming: error handling, timeouts, and validation
- Logging and debugging via stderr
- Compatible with DXT host environments
Directory Structure
Installation & Setup
- Clone the repository and install dependencies.
- Install Python dependencies for the assistant.
- Set the required environment variable for your web search API:
  - For Tavily: `TAVILY_API_KEY`
  - For Perplexity: `PERPLEXITY_API_KEY`
- Build the TypeScript server (if needed).
- Run the extension locally for testing.
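The command snippets for these steps are not reproduced above; the following is a rough sketch assuming a standard npm/pip layout (the repository URL, script names, and file paths are illustrative, not confirmed from the repository):

```bash
# Clone the repository and install Node.js dependencies (URL is a placeholder)
git clone https://github.com/<your-org>/ollama-deep-researcher-dxt.git
cd ollama-deep-researcher-dxt
npm install

# Install Python dependencies for the research assistant (file name may differ)
pip install -r requirements.txt

# Set the API key for your chosen search provider
export TAVILY_API_KEY="your-tavily-key"            # for Tavily
export PERPLEXITY_API_KEY="your-perplexity-key"    # for Perplexity

# Build the TypeScript server and run it locally for testing
npm run build
node dist/index.js
```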
Usage
- Research a topic: use the `research` tool with `{ "topic": "Your subject" }`.
- Get research status: use the `get_status` tool.
- Configure research parameters: use the `configure` tool with any of `maxLoops`, `llmModel`, `searchApi`.
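In practice the DXT host constructs these tool calls for you, but the underlying MCP requests look roughly like the sketch below (argument values such as the loop count and model name are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "configure",
    "arguments": { "maxLoops": 3, "llmModel": "deepseek-r1", "searchApi": "tavily" }
  }
}

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "research",
    "arguments": { "topic": "Your subject" }
  }
}
```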
Manifest
See `manifest.json` for the full DXT manifest, including tool schemas and resource templates. It follows the DXT MANIFEST.md specification.
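For orientation, a tool entry follows the usual MCP tool-schema shape; the snippet below is an illustrative sketch rather than a copy of the actual manifest.json:

```json
{
  "name": "research",
  "description": "Run iterative web research on a topic and return a markdown summary with cited sources",
  "inputSchema": {
    "type": "object",
    "properties": {
      "topic": { "type": "string", "description": "Subject to research" }
    },
    "required": ["topic"]
  }
}
```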
Logging & Debugging
- All server logs and errors are output to `stderr` for debugging.
- Research subprocesses are killed after 5 minutes to prevent hangs.
- Invalid requests and configuration errors return clear, structured error messages.
Security & Best Practices
- All tool schemas are validated before execution.
- API keys are required for web search APIs and are never logged.
- MCP protocol is used over stdio for local, secure communication.
Testing & Validation
- Validate the extension by loading it in a DXT-compatible host.
- Ensure all tool calls return valid, structured JSON responses (see the example shape after this list).
- Check that the manifest loads and the extension registers as a DXT.
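A valid, structured response here means the standard MCP tool-result shape; a sketch of what a successful `research` call might return (the text content is illustrative):

```json
{
  "content": [
    {
      "type": "text",
      "text": "## Research Summary\n\n...markdown summary with cited sources..."
    }
  ],
  "isError": false
}
```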
Troubleshooting
- Missing API key: ensure `TAVILY_API_KEY` or `PERPLEXITY_API_KEY` is set in your environment.
- Python errors: check Python dependencies and logs in `stderr`.
- Timeouts: research subprocesses are limited to 5 minutes.
References
© 2025 Your Name or Organization. Licensed under MIT.
Hybrid server: this server can function both locally and remotely, depending on the configuration or use case.
This is a Model Context Protocol (MCP) server adaptation of LangChain Ollama Deep Researcher. It provides the deep research capabilities as MCP tools that can be used within the Model Context Protocol ecosystem, allowing AI assistants to perform in-depth research on topics locally via Ollama.
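For a typical MCP client, registration looks roughly like the snippet below (a sketch assuming a Claude Desktop-style `claude_desktop_config.json`; the server name, command, paths, and environment values are illustrative):

```json
{
  "mcpServers": {
    "ollama-deep-researcher": {
      "command": "node",
      "args": ["/path/to/mcp-ollama-deep-researcher/dist/index.js"],
      "env": {
        "TAVILY_API_KEY": "your-tavily-key"
      }
    }
  }
}
```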
- Core Functionality
- Prerequisites
- Installation
- Client Configuration
- Tracing and Monitoring
- MCP Resources
- Available Tools
- Prompting
- The Ollama Research Workflow
- Example Prompt and Output Transcript
- Claude Final Output
Related Resources
Related MCP Servers
- An interactive chat interface that combines Ollama's LLM capabilities with PostgreSQL database access through the Model Context Protocol (MCP). Ask questions about your data in natural language and get AI-powered responses backed by real SQL queries. (TypeScript)
- MCP Ollama server integrates Ollama models with MCP clients, allowing users to list models, get detailed information, and interact with them through questions. (Python, MIT License)
- A generic Model Context Protocol framework for building AI-powered applications that provides standardized ways to create MCP servers and clients for integrating LLMs, with support for Ollama and Supabase. (TypeScript)
- An MCP server that queries multiple Ollama models and combines their responses, providing diverse AI perspectives on a single question for more comprehensive answers. (TypeScript, MIT License)