MCP Server: Ollama Deep Researcher
This is a Model Context Protocol (MCP) server adaptation of LangChain Ollama Deep Researcher. It exposes deep research capabilities as MCP tools that can be used within the Model Context Protocol ecosystem, allowing AI assistants to perform in-depth research on topics using local LLMs via Ollama.
Core Functionality
The server provides research capabilities through MCP tools and resources, using any LLM hosted by Ollama.
Research Process
Given a topic, it will:
- Generate a web search query
- Gather web search results via Tavily or Perplexity API
- Summarize the search results
- Reflect on the summary to examine knowledge gaps
- Generate new search queries to address the gaps
- Iteratively improve the summary through multiple research cycles
- Provide a final markdown summary with all sources used
Prerequisites
- Node.js (for running the MCP server)
- Download and install from https://nodejs.org/
- Ensure Node.js is added to your system PATH
- Python 3.10 or higher
- Compute (CPU/GPU) capable of running your selected Ollama model
- At least 8GB of RAM for running larger language models
- Required API keys:
- Tavily API key (get one at https://tavily.com)
- Perplexity API key (get one at https://perplexity.ai)
- LangSmith API key (get one at https://smith.langchain.com) for tracing and monitoring
Make sure you can run Node.js and npm from your terminal/command prompt. You can verify your installations with:
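```bash
node --version
npm --version
```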
If these commands fail, you may need to:
- Restart your terminal/computer after installation
- Add Node.js to your system PATH:
- Windows: Edit system environment variables → Environment Variables → Path → Add Node.js installation directory
- macOS/Linux: Usually handled by the installer
Installation
Option 1: Standard Installation
- Download and install Ollama for your platform
- Clone this repository and install dependencies:
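For example (the clone URL below is a placeholder; use this repository's actual URL):

```bash
git clone https://github.com/your-org/mcp-server-ollama-deep-researcher.git
cd mcp-server-ollama-deep-researcher
npm install
```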
- Install Python dependencies:
First, install uv (recommended for better performance and dependency resolution):
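```bash
# uv can also be installed via pipx or its standalone installer
pip install uv
```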
Then install project dependencies using pyproject.toml:
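```bash
uv pip install -e .
```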
Note: This will install the project in editable mode with all dependencies specified in pyproject.toml. If you prefer pip:
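```bash
pip install -e .
```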
- Build the TypeScript code:
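```bash
npm run build
```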
- Pull a local LLM from Ollama:
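```bash
# any Ollama-hosted model works, e.g. deepseek-r1:8b or llama3.2
ollama pull deepseek-r1:8b
```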
Option 2: Docker Installation
You can also run the MCP server using Docker, which simplifies the setup process.
- Download and install Docker for your platform
- Clone this repository:
- Create a `.env` file with your API keys (you can copy from `.env.example`):
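The variable names below are illustrative; use the exact names from `.env.example`:

```bash
TAVILY_API_KEY=your-tavily-api-key
PERPLEXITY_API_KEY=your-perplexity-api-key
LANGSMITH_API_KEY=your-langsmith-api-key
```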
- Make the helper script executable:
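```bash
# macOS/Linux only; the Windows run-docker.bat needs no chmod
chmod +x run-docker.sh
```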
- Build and run the Docker container:
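```bash
./run-docker.sh start      # or run-docker.bat start on Windows
```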
- Ensure Ollama is running on your host machine:
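```bash
ollama list     # should list your local models if Ollama is running
# if it isn't running, start it with:
ollama serve
```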
The helper scripts provide several commands:
For macOS/Linux (using run-docker.sh):
- `./run-docker.sh start` - Build and start the Docker container
- `./run-docker.sh stop` - Stop the Docker container
- `./run-docker.sh restart` - Restart the Docker container
- `./run-docker.sh logs` - Show logs from the Docker container
- `./run-docker.sh status` - Check the status of the Docker container
- `./run-docker.sh help` - Show help message
For Windows (using run-docker.bat):
- `run-docker.bat start` - Build and start the Docker container
- `run-docker.bat stop` - Stop the Docker container
- `run-docker.bat restart` - Restart the Docker container
- `run-docker.bat logs` - Show logs from the Docker container
- `run-docker.bat status` - Check the status of the Docker container
- `run-docker.bat help` - Show help message
Note: The Docker container is configured to connect to Ollama running on your host machine. If you want to run Ollama in a container as well, uncomment the Ollama service in the docker-compose.yml file.
Client Configuration
Add the server to your MCP client configuration:
For Claude Desktop App:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
For Cline (VS Code Extension):
- Windows: `%APPDATA%\Code\User\globalStorage\saoudrizwan.claude-dev\settings\cline_mcp_settings.json`
- macOS: `~/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json`
- Linux: `~/.config/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json`
Option 1: Standard Installation Configuration
Note: Replace paths with absolute paths for your system:
- Windows: Use `C:\\Users\\username\\path\\to\\mcp-server-ollama-deep-researcher`
- macOS/Linux: Use `/Users/username/path/to/mcp-server-ollama-deep-researcher`
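A typical entry looks roughly like the following; the server name, the path to the built index.js, and the environment variable names are illustrative, so adjust them to match your checkout and `.env.example`:

```json
{
  "mcpServers": {
    "ollama-deep-researcher": {
      "command": "node",
      "args": ["/Users/username/path/to/mcp-server-ollama-deep-researcher/build/index.js"],
      "env": {
        "TAVILY_API_KEY": "your-tavily-api-key",
        "PERPLEXITY_API_KEY": "your-perplexity-api-key",
        "LANGSMITH_API_KEY": "your-langsmith-api-key"
      }
    }
  }
}
```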
For macOS/Linux, you may also want to add:
Option 2: Docker Installation Configuration
If you're using the Docker container, you can configure the MCP client to connect to the running container:
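Since the server runs inside the container, the client entry typically launches it through `docker exec`. A sketch, assuming the container name used by the helper scripts and a `build/index.js` entry point (adjust both to your setup):

```json
{
  "mcpServers": {
    "ollama-deep-researcher": {
      "command": "docker",
      "args": ["exec", "-i", "ollama-deep-researcher-mcp", "node", "build/index.js"]
    }
  }
}
```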
This configuration assumes the Docker container is running. The environment variables are already set in the Docker container, so you don't need to specify them in the MCP client configuration.
Tracing and Monitoring
The server integrates with LangSmith for comprehensive tracing and monitoring of the research process:
- Operation Tracing:
- All LLM interactions are traced
- Web search operations are monitored
- Research workflow steps are tracked
- Performance Monitoring:
- Response times for each operation
- Success/failure rates
- Resource utilization
- Debugging and Optimization:
- Detailed traces for troubleshooting
- Performance bottleneck identification
- Query optimization insights
Access all traces at https://smith.langchain.com under your configured project name.
MCP Resources
Research results are automatically stored as MCP resources, enabling:
- Persistent Access
- Results accessible via `research://{topic}` URIs
- Automatic storage of completed research
- JSON-formatted content with metadata
- Resource Panel Integration
- Research appears in MCP client's resource panel
- Easy access to past research topics
- Timestamp and description for each result
- Context Management
- Efficient reuse of research in conversations
- Reduced token usage through resource references
- Selective inclusion of research context
Available Tools
Configure
Configure research parameters:
- maxLoops: Number of research iterations (1-5)
- llmModel: Ollama model to use (e.g., "deepseek-r1:1.5b", "llama3.2")
- searchApi: Search API to use ("perplexity" or "tavily")
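For example, a configure call might pass arguments like these (values are illustrative):

```json
{
  "maxLoops": 3,
  "llmModel": "deepseek-r1:8b",
  "searchApi": "tavily"
}
```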
Research
Research any topic using web search and LLM synthesis.
Get status
Get the current status of ongoing research.
Prompting
Using the Default Search API, Model, and Max Iterations (loops)
Prompt Example: "research AI-First Applications"
Change Default Config and Start Research
Syntax: configure with <searchapi> and <model> then research <topic>
Prompt Example: "Configure with perplexity and deepseek-r1:8b then research AI-First Applications"
The Ollama Research Workflow
The research process is inspired by IterDRAG. This approach decomposes a query into sub-queries, retrieves documents for each one, answers the sub-query, and then builds on the answer by retrieving docs for the second sub-query.
The process works as follows:
- Given a user-provided topic, uses a local LLM (via Ollama) to generate a web search query
- Uses a search engine (Tavily or Perplexity) to find relevant sources
- Uses the LLM to summarize the findings from the web search in relation to the user-provided research topic
- Uses the LLM to reflect on the summary and identify knowledge gaps
- Generates a new search query to address the knowledge gaps
- Repeats, iteratively updating the summary with new information from web search as it follows the research rabbit hole
- Runs for a configurable number of iterations
Outputs
The output is a markdown file containing the research summary, with citations to all sources used during the research process.
All sources gathered during research are preserved and can be referenced in the final output:
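A sketch of how sources might appear at the end of the summary (the exact formatting may differ):

```markdown
## Sources
1. [Example Source Title](https://example.com/article)
2. [Another Source](https://example.org/post)
```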
System Integration Overview
Troubleshooting
Here are solutions to common issues you might encounter:
Ollama Connection Issues
- Make sure Ollama is running: execute `ollama list` in your terminal
- Try running Ollama in terminal mode: close the app (System Tray/Menu Bar) and execute `ollama serve`
- Check if Ollama is accessible at `localhost:11434`, `0.0.0.0:11434`, or `127.0.0.1:11434`
API Key Issues
- Verify your API key is correctly set in the configuration file
- Verify your path arg points to the actual location of the index.js in this repo
- Ensure there are no extra spaces or quotes around the API key
- Check if your API key has sufficient credits/permissions
MCP Server Issues
- Use the MCP Inspector for debugging:
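For example (adjust the path to your built index.js):

```bash
npx @modelcontextprotocol/inspector node /path/to/mcp-server-ollama-deep-researcher/build/index.js
```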
Docker Issues
- If you're having issues with the Docker container:
- Check if the container is running: `docker ps`
- View container logs: `docker logs ollama-deep-researcher-mcp`
- Ensure your `.env` file contains valid API keys
- Verify Ollama is running on your host machine and accessible from the container
- If using `host.docker.internal` doesn't work, try using your host machine's IP address in the OLLAMA_BASE_URL environment variable
- For network issues between containers, ensure they're on the same Docker network
- If you're running Ollama in a container:
- Uncomment the Ollama service in docker-compose.yml
- Ensure the Ollama container has enough resources allocated
- Pull the model in the Ollama container: `docker exec -it ollama ollama pull deepseek-r1:8b`
Build Issues
- If `npm run build` fails with "'node' is not recognized":
- Ensure Node.js is properly installed
- Add Node.js to your system PATH:
- Windows: Edit system environment variables → Environment Variables → Path → Add Node.js installation directory
- macOS/Linux: Usually handled by the installer
- Restart your terminal/computer
- Try running `node --version` to verify the installation
Python Issues
Windows:
- Ensure Python is in your PATH
- Try using `python` instead of `python3`
- Check if pip is installed: `python -m pip --version`
macOS/Linux:
- Use `python3` instead of `python`
- Check if pip is installed: `python3 -m pip --version`
- You may need to install pip: `sudo apt install python3-pip` (Ubuntu/Debian) or `brew install python3` (macOS)
Error Handling
The server provides clear error messages for:
- Missing or invalid API keys
- Configuration issues
- Search API problems
- LLM processing errors
Enhancements Needed
- Tighter re-integration and validation of LangGraph for additional interesting use cases.
Architecture
For detailed information about the server's architecture and implementation, see .context/index.md.
Glama.ai Badge
<a href="https://glama.ai/mcp/servers/r25SSxqOci"> <img width="380" height="200" src="https://glama.ai/mcp/servers/r25SSxqOci/badge" /> </a>
Example Prompt and Output Transcript
Prompt
Configuration Output
Ollama Researcher Output
Claude Final Output