Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type
@followed by the MCP server name and your instructions, e.g., "@OmniDocs MCPsearch for the latest useActionState syntax in React 19"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
# OmniDocs MCP
OmniDocs MCP is an intelligent Model Context Protocol (MCP) server that empowers AI agents to instantly search, index, summarize, and inject live framework documentation directly into their context window.
Stop hallucinating code for outdated framework versions. Let OmniDocs fetch the exact documentation your AI needs, the moment it needs it.
```text
[AI Agent] Calling get_library_docs("react", "useActionState usage")
[OmniDocs] 🔍 Library 'react' not indexed. Crawling react.dev...
[OmniDocs] 🧠 Chunking 150 pages and computing embeddings (ONNX)...
[OmniDocs] ⚡ Returning top 5 semantic chunks (Dense + BM25)
[AI Agent] Receives 1,500 highly-relevant tokens. Writes perfect code.
```

## 🤔 Why OmniDocs? (The Comparison)
| Approach | The Problem | The OmniDocs Solution |
| --- | --- | --- |
| Context Stuffing (full URLs) | Destroys token limits (50k+ tokens/page); high latency; high API costs. | Semantically retrieves only the relevant 512-token chunks to save context. |
| Web Search Tools (Tavily/Exa) | Returns SEO fluff, outdated blog posts, and Stack Overflow threads. | Exclusively targets official, canonical framework documentation. |
| Cloud RAG / Vector APIs | Requires expensive API subscriptions and sends queries to third parties. | 100% local embedding (ONNX + ChromaDB). Zero API keys, completely free. |
| LLM Internal Knowledge | Hallucinates deprecated APIs (e.g., React 17 vs. 19, or the Next.js App Router). | Guarantees up-to-date syntax directly from the live documentation. |
## ✨ Core Features
- **Deep HTML Crawling**: Employs an Indexer & Sniper architecture to map deep documentation sites via XML sitemaps or pure HTML crawling, returning dense tables of contents for agents to navigate.
- **Local RAG & Semantic Search**: Embeds documentation locally using ONNX (via `fastembed`) and chunks it semantically. Exposes a natural-language query interface so agents receive precise, high-density excerpts instead of massive full pages.
- **Local Manifest Auto-Discovery**: Point OmniDocs at any `package.json` or `requirements.txt`. It queries the NPM/PyPI registries to auto-discover library documentation URLs and registers them in its tracking file.
- **Persistent Disk Caching**: Prevents redundant scraping and wasted LLM tokens by storing fetched Markdown via `diskcache`, with configurable per-library TTLs (time-to-live); see the caching sketch below.
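The caching layer's exact keys and defaults aren't spelled out in this README, but the per-library `ttl_hours` values map naturally onto `diskcache` expirations. A minimal sketch, assuming `diskcache` is installed and using an illustrative helper name (not the actual `cache.py` API):

```python
from diskcache import Cache

# Illustrative only: the real cache.py may use different cache keys and defaults.
cache = Cache("./.omnidocs_cache")

def get_page_markdown(url: str, ttl_hours: int, fetch_fn) -> str:
    """Return cached Markdown for a URL, re-fetching once its TTL has expired."""
    cached = cache.get(url)
    if cached is not None:
        return cached
    markdown = fetch_fn(url)                            # e.g. the fetcher downloading + converting the page
    cache.set(url, markdown, expire=ttl_hours * 3600)   # diskcache expiry is given in seconds
    return markdown
```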
🏗 Architecture & How it Works
OmniDocs operates as a middleware server between an AI Agent and official documentation websites. Instead of the AI browsing the web blindly, it uses OmniDocs to precisely retrieve, parse, chunk, embed, and cache documentation locally.
### Core Modules
- **Server CLI** (`server.py`): The main entry point. Exposes `get_library_docs`, which agents use to ask natural-language questions.
- **Fetcher** (`fetcher.py`): Handles outbound HTTP requests, crawling sitemaps and plain HTML. Uses BeautifulSoup to strip away navbars and footers, and `markdownify` to convert pages to clean Markdown.
- **Chunker** (`chunker.py`): Splits large Markdown pages into smaller, semantically coherent 512-token chunks, keeping Markdown headers intact so context isn't lost (see the sketch after this list).
- **Vector Store** (`vector_store.py`): Embeds chunks locally using the `fastembed` ONNX model and stores them persistently in ChromaDB. Uses hybrid retrieval (dense vector search + BM25 keyword re-ranking) for maximum accuracy on exact API names.
- **Cache Layer** (`cache.py`): Uses `diskcache` to store raw downloaded Markdown on the local disk and avoid redundant network requests.
- **Auto-Discovery** (`discovery.py`): Parses local `package.json` or `requirements.txt` files to auto-register libraries.
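To make the chunking step concrete, here is a minimal header-aware splitter in the spirit of `chunker.py`. It is a sketch, not the actual implementation: it counts whitespace tokens instead of real model tokens and does not split oversized sections.

```python
import re

def chunk_markdown(markdown: str, max_tokens: int = 512) -> list[str]:
    """Split a Markdown page at headings, then pack whole sections into ~512-token chunks."""
    # Split before every heading line so each heading stays attached to its body.
    sections = re.split(r"(?m)^(?=#{1,6} )", markdown)
    chunks: list[str] = []
    current: list[str] = []
    count = 0
    for section in sections:
        tokens = len(section.split())        # crude stand-in for a real tokenizer
        if current and count + tokens > max_tokens:
            chunks.append("".join(current))
            current, count = [], 0
        current.append(section)
        count += tokens
    if current:
        chunks.append("".join(current))
    return chunks
```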
## 🔄 Retrieval Workflow
When an AI encounters a library it doesn't know, it just issues a natural language query, and the following flow occurs:
```mermaid
sequenceDiagram
    participant AI as AI Agent
    participant MCP as OmniDocs
    participant VectorDB as ChromaDB
    participant Web as fetcher.py
    AI->>MCP: Call get_library_docs("react", "useActionState usage")
    MCP->>VectorDB: Check if 'react' is indexed
    alt Not Indexed
        MCP->>Web: Crawl entire doc site & convert to Markdown
        Web-->>MCP: Return Markdown pages
        MCP->>MCP: Chunk pages & compute local embeddings
        MCP->>VectorDB: Store chunks & vectors
    end
    MCP->>VectorDB: Perform hybrid search (Dense + BM25) for query
    VectorDB-->>MCP: Top 5 semantic chunks
    MCP-->>AI: Return pure, precise Markdown context
```
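The final two steps of the diagram (hybrid search and top-5 chunk return) can be pictured with the sketch below. It assumes `fastembed`, `chromadb`, and `rank_bm25`, and that the library has already been indexed; the real `vector_store.py` may weight or combine the two signals differently.

```python
import chromadb
from fastembed import TextEmbedding
from rank_bm25 import BM25Okapi

embedder = TextEmbedding()                                   # lightweight ONNX model, CPU-only
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("react")        # assumes 'react' was indexed earlier

def hybrid_search(query: str, recall_k: int = 20, top_k: int = 5) -> list[str]:
    # 1) Dense recall: embed the query and pull a wide candidate set from ChromaDB.
    query_vec = list(embedder.embed([query]))[0].tolist()
    recall = collection.query(query_embeddings=[query_vec], n_results=recall_k)
    candidates = recall["documents"][0]

    # 2) BM25 re-ranking over the candidates, which rewards exact API names like useActionState.
    bm25 = BM25Okapi([doc.lower().split() for doc in candidates])
    scores = bm25.get_scores(query.lower().split())
    ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in ranked[:top_k]]
```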
## 🚀 Quick Start

### Prerequisites
- **Python 3.10+** (required for FastMCP and ChromaDB)
- **OS**: Windows, macOS, or Linux
- **Hardware**: Runs entirely on CPU; no GPU required (FastEmbed uses lightweight ONNX models).
Clone & Install
```bash
git clone https://github.com/your-username/omnidocs-mcp.git
cd omnidocs-mcp
pip install -r requirements.txt
```

### Seed your Libraries

OmniDocs stores your tracked libraries in `libraries.yaml`. You can auto-fill this by running the server tool `auto_import_from_manifest` against your project (illustrated below), or manually insert overrides to customize tracking:

```yaml
libraries:
  react:
    docs_url: https://react.dev/learn
    ttl_hours: 24
  fastapi:
    docs_url: https://fastapi.tiangolo.com
    ttl_hours: 48
  tailwindcss:
    docs_url: https://tailwindcss.com/docs
    ttl_hours: 72
```
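For the NPM side, auto-discovery boils down to reading the manifest and asking the registry for each package's homepage. A rough illustration (hypothetical helper, not the actual `discovery.py` code; assumes the `requests` package is available):

```python
import json
import requests

def discover_npm_docs(package_json_path: str) -> dict[str, str]:
    """Map each dependency in package.json to the homepage URL reported by the NPM registry."""
    with open(package_json_path, encoding="utf-8") as f:
        manifest = json.load(f)
    deps = {**manifest.get("dependencies", {}), **manifest.get("devDependencies", {})}
    docs = {}
    for name in deps:
        meta = requests.get(f"https://registry.npmjs.org/{name}", timeout=10).json()
        if meta.get("homepage"):
            docs[name] = meta["homepage"]    # e.g. react -> https://react.dev/
    return docs
```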
## 🔌 Connecting to AI Agents (Antigravity, Roo/Cline, Claude Desktop)
To connect OmniDocs to your MCP-compatible client, add this configuration block to your client's MCP settings file (e.g., `%APPDATA%\Code\User\globalStorage\rooveterinaryinc.roo-cline\settings\cline_mcp_settings.json`):
```json
{
  "mcpServers": {
    "omnidocs-mcp": {
      "command": "C:/absolute/path/to/omnidocs-mcp/venv/Scripts/python.exe",
      "args": [
        "C:/absolute/path/to/omnidocs-mcp/server.py"
      ],
      "env": {}
    }
  }
}
```

**Note**: Ensure you point `command` to the Python executable inside the virtual environment where you installed the `requirements.txt` dependencies.
## 🛠 Available Tools
Once connected, your AI gains the following native tools:
- `get_library_docs(library, query)`: The primary tool. Performs a semantic vector search with BM25 hybrid ranking over the library's documentation to answer specific questions, automatically crawling the docs if they are not yet indexed.
- `get_changelog(library)`: Fetches recent release notes so the AI knows about breaking changes.
- `auto_import_from_manifest(manifest_path)`: Analyzes your `package.json` to self-populate the OmniDocs library registry.
- `list_tracked_libraries()`: Shows what the server is currently tracking.
- `refresh_all_docs()`: Hard-busts the cache and pulls live web data.
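For orientation, exposing tools like these with FastMCP usually looks like the sketch below. The signatures mirror the list above, but the bodies are placeholders, not the real `server.py` (and the import assumes the standalone `fastmcp` package; the official SDK's `mcp.server.fastmcp.FastMCP` works the same way):

```python
from fastmcp import FastMCP

mcp = FastMCP("omnidocs-mcp")

@mcp.tool()
def get_library_docs(library: str, query: str) -> str:
    """Hybrid-search a library's docs, crawling and indexing them first if needed."""
    ...  # placeholder: the real implementation wires fetcher, chunker, and vector store together

@mcp.tool()
def list_tracked_libraries() -> list[str]:
    """Return the libraries currently registered in libraries.yaml."""
    ...  # placeholder

if __name__ == "__main__":
    mcp.run()  # stdio transport, which matches the client configuration shown above
```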