This server enables local, privacy-focused web search and research using RAG (Retrieval-Augmented Generation) techniques without requiring API keys, providing LLMs with current web context.
Core Capabilities:
- Multi-engine web search: Query across 9+ search backends, including DuckDuckGo, Google, Bing, Brave, Wikipedia, Yahoo, Yandex, Mojeek, and Grokipedia
- Semantic similarity ranking: Uses Google's MediaPipe Text Embedder to rank search results by relevance to your query
- Deep research tools: Perform comprehensive investigations with `deep_research` (multi-engine), `deep_research_google` (Google-focused), and `deep_research_ddgs` (privacy-first)
- Quick searches: Use `rag_search_ddgs` and `rag_search_google` for fast, focused single queries when immediate answers are needed
- Content extraction: Fetches and converts web content from top-ranked URLs into markdown for LLM consumption
- Privacy-first operation: Runs entirely locally with no external API keys required
- Customizable parameters: Adjust `num_results`, `top_k`, and backend selection to control search scope and depth
- Agent Skills integration: Teaches Claude how to use the tools effectively for intelligent query formulation, privacy-aware searching, and multi-perspective analysis
- Fresh information access: Enables LLMs to retrieve current web information beyond their training data, including recent news and updates
- Broad MCP client support: Works with Claude Desktop, Cursor, Goose, and other MCP clients that support tool calling
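The content-extraction step can be sketched as a small HTML-to-Markdown pass. This is a toy stand-in for illustration only; the server's actual converter, class, and function names here (`MarkdownExtractor`, `html_to_markdown`) are assumptions, and a real implementation would use a full conversion library:

```python
from html.parser import HTMLParser

class MarkdownExtractor(HTMLParser):
    """Toy HTML-to-Markdown converter illustrating the content-extraction idea.
    Handles only h1-h3, p, and a tags."""

    def __init__(self):
        super().__init__()
        self.pieces = []
        self.heading = ""   # pending '#' prefix for the next text chunk
        self.href = None    # set while inside an <a> tag

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.heading = "#" * int(tag[1]) + " "
        elif tag == "a":
            self.href = dict(attrs).get("href")

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3", "p"):
            self.pieces.append("\n")
        elif tag == "a":
            self.href = None

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self.href:
            text = f"[{text}]({self.href})"  # rewrite links as markdown
        self.pieces.append(self.heading + text + " ")
        self.heading = ""

def html_to_markdown(html: str) -> str:
    parser = MarkdownExtractor()
    parser.feed(html)
    return "".join(parser.pieces).strip()

page = '<h1>Gemma 3</h1><p>Read the <a href="https://example.com/post">announcement</a> today.</p>'
print(html_to_markdown(page))
```

The extracted markdown is what ultimately gets injected into the LLM's context, which is why the conversion keeps headings and links rather than raw HTML.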
Enables web search functionality using Google to retrieve up-to-date information that can be incorporated into Claude's responses
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@mcp-local-rag search for the latest developments in quantum computing"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
mcp-local-rag
"primitive" RAG-like web search model context protocol (MCP) server that runs locally. ✨ no APIs ✨
A RAG-based web search and deep research Model Context Protocol (MCP) server that runs entirely locally. It offers multi-engine research across 9+ search backends with semantic similarity ranking and requires no API keys.
Features
Multi-Engine Deep Research
The server supports comprehensive multi-engine research capabilities that go beyond simple single-query searches:
- 9+ Search Backends: DuckDuckGo, Google, Bing, Brave, Wikipedia, Yahoo, Yandex, Mojeek, Grokipedia
- Multi-Topic Research: Search multiple related queries simultaneously
- Semantic Ranking: RAG-like similarity scoring surfaces the most relevant results
- Privacy Options: Choose privacy-focused engines (DuckDuckGo, Brave) or comprehensive ones (Google)
- No API Keys Required: All processing runs locally with embedded models
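The semantic-ranking idea can be illustrated with a minimal sketch. The toy bag-of-words `embed` below stands in for the MediaPipe Text Embedder the server actually uses, and `rank_results` is a hypothetical helper name; only the cosine-similarity ranking pattern is the point:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" standing in for MediaPipe's Text Embedder
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_results(query: str, results: list[dict], top_k: int = 3) -> list[dict]:
    # Score every result snippet against the query and keep the top_k best
    q = embed(query)
    scored = sorted(results, key=lambda r: cosine(q, embed(r["snippet"])), reverse=True)
    return scored[:top_k]

results = [
    {"url": "a", "snippet": "quantum computing hardware advances"},
    {"url": "b", "snippet": "best pizza recipes for home cooks"},
    {"url": "c", "snippet": "quantum computing error correction research"},
]
top = rank_results("quantum computing research", results, top_k=2)
print([r["url"] for r in top])  # the two quantum snippets outrank the pizza one
```

Ranking before extraction matters because only the top `top_k` URLs get fetched and converted, so a better similarity score directly improves the context the LLM receives.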
Deep Research Tools
`deep_research` - Comprehensive multi-engine research
- Search across multiple engines simultaneously
- Ideal for complex topics requiring diverse perspectives
- Customizable backends and result limits

`deep_research_google` - Google-focused deep dive
- Leverage Google's comprehensive index
- Best for technical/scientific queries

`deep_research_ddgs` - Privacy-first deep research
- Use DuckDuckGo for private, extensive research
- Great for general topics without tracking

`rag_search_ddgs` & `rag_search_google` - Quick single searches
- Fast, focused searches when you need quick answers
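Over MCP's JSON-RPC transport, a client invokes these tools via the standard `tools/call` method. A request might look like the following sketch; `num_results` and `top_k` are the parameters this README names, while the `queries` argument name is an assumption:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "deep_research",
    "arguments": {
      "queries": ["solid-state batteries", "sodium-ion batteries"],
      "num_results": 10,
      "top_k": 5
    }
  }
}
```

In practice your MCP client builds this request for you; it is shown only to make the tool/parameter relationship concrete.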
Installation
Locate your MCP config path here or check your MCP client settings.
Run Directly via uvx
This is the easiest and quickest method. You need to install uv for this to work. Add this to your MCP server configuration:
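A typical entry looks like the sketch below. The package source shown is an assumption; check the repository's own instructions for the exact `args`:

```json
{
  "mcpServers": {
    "mcp-local-rag": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/nkapila6/mcp-local-rag",
        "mcp-local-rag"
      ]
    }
  }
}
```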
Using Docker (recommended)
Ensure you have Docker installed. Add this to your MCP server configuration:
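A typical Docker-based entry is sketched below. The image name is an assumption; use the image published by the repository:

```json
{
  "mcpServers": {
    "mcp-local-rag": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "--init", "ghcr.io/nkapila6/mcp-local-rag:latest"]
    }
  }
}
```

Running `-i` (interactive) is required because MCP servers communicate over stdin/stdout by default.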
Agent Skills
This repository includes Agent Skills that teach Claude how to effectively use the mcp-local-rag tools for intelligent web searches and deep research. Skills are folders of instructions that Claude loads dynamically to improve performance on specialized tasks.
Available Skills
local-rag-search - Teaches Claude best practices for:
- Smart tool selection: Choosing between quick searches or comprehensive deep research
- Multi-engine research: Using multiple search backends for diverse perspectives
- Effective query formulation: Writing natural-language queries that yield better results
- Parameter tuning: Adjusting `num_results`, `top_k`, and backend selection for different use cases
- Privacy-aware searching: Defaulting to privacy-focused engines while allowing comprehensive searches when needed
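An Agent Skill is a folder whose `SKILL.md` frontmatter tells Claude when to load the instructions. A minimal sketch is shown below; the field values are illustrative, not the repository's actual file:

```yaml
---
name: local-rag-search
description: >
  Best practices for web search with mcp-local-rag: choose between quick
  search and deep research, tune num_results and top_k, and default to
  privacy-focused engines.
---
```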
Deep Research Use Cases
The skill enables comprehensive topic research using multiple search terms and engines. It's particularly useful for technical deep dives that leverage Google's documentation coverage, multi-perspective analysis that compares information across different search engines, privacy-focused research using DuckDuckGo or Brave, and factual verification by cross-referencing Wikipedia and other authoritative sources.
Using the Skills
In Claude Desktop:
1. Go to Settings → Skills
2. Click Add Skill → Add from folder
3. Select `skills/local-rag-search/`
In conversations: Once loaded, simply ask Claude to search for information and it will automatically apply the skill's best practices. Try queries like:
- "Do deep research on recent quantum computing developments"
- "Search multiple sources for sustainable energy solutions"
- "Find comprehensive technical documentation about Kubernetes optimization"
Learn more about Agent Skills at the Anthropic Skills Repository.
See the skills/README.md for detailed usage instructions and skill development guidelines.
Security audits
MseeP performs security audits on every MCP server; you can view this server's audit report here.
MCP Clients
The MCP server should work with any MCP client that supports tool calling. It has been tested with the clients below.
- Claude Desktop
- Cursor
- Goose
- Others? You try!
Examples on Claude Desktop
When an LLM (like Claude) is asked a question requiring recent web information, it will trigger mcp-local-rag.
When asked to fetch, look up, or search the web, the model prompts you to allow the MCP server for the chat.
In this example, I asked it about Google's latest Gemma models, released the day before; this is new information that Claude is not aware of.
Related MCP server: OneSearch MCP Server
Result
mcp-local-rag performs a live web search, extracts context, and sends it back to the model, giving it fresh knowledge:
Buy Me A Coffee
If the software I've built has been helpful to you, please consider buying me a coffee. I'd really appreciate it! 😄
Contributing
Have ideas or want to improve this project? Issues and pull requests are welcome!
License
This project is licensed under the MIT License.