• Search
  • Knowledge & Memory
  • Developer Tools
Language: TypeScript
License: MIT
Security: A (no known vulnerabilities)
License: A (permissive, MIT)
Quality: A (confirmed to work)

An MCP server implementation that provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context

  1. Tools
  2. Prompts
  3. Resources
  4. Server Configuration
  5. README.md

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

Tools

Functions exposed to the LLM to take actions

  • search_documentation: Search through stored documentation using natural language queries. Use this tool to find relevant information across all stored documentation sources. Returns matching excerpts with context, ranked by relevance. Useful for finding specific information, code examples, or related documentation.
  • list_sources: List all documentation sources currently stored in the system. Returns a comprehensive list of all indexed documentation including source URLs, titles, and last update times. Use this to understand what documentation is available for searching or to verify if specific sources have been indexed.
  • extract_urls: Extract and analyze all URLs from a given web page. This tool crawls the specified webpage, identifies all hyperlinks, and optionally adds them to the processing queue. Useful for discovering related documentation pages, API references, or building a documentation graph. Handles various URL formats and validates links before extraction.
  • remove_documentation: Remove specific documentation sources from the system by their URLs. Use this tool to clean up outdated documentation, remove incorrect sources, or manage the documentation collection. The removal is permanent and will affect future search results. Supports removing multiple URLs in a single operation.
  • list_queue: List all URLs currently waiting in the documentation processing queue. Shows pending documentation sources that will be processed when run_queue is called. Use this to monitor queue status, verify URLs were added correctly, or check processing backlog. Returns URLs in the order they will be processed.
  • run_queue: Process and index all URLs currently in the documentation queue. Each URL is processed sequentially, with proper error handling and retry logic. Progress updates are provided as processing occurs. Use this after adding new URLs to ensure all documentation is indexed and searchable. Long-running operations will process until the queue is empty or an unrecoverable error occurs.
  • clear_queue: Remove all pending URLs from the documentation processing queue. Use this to reset the queue when you want to start fresh, remove unwanted URLs, or cancel pending processing. This operation is immediate and permanent - URLs will need to be re-added if you want to process them later. Returns the number of URLs that were cleared from the queue.
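To make the list above concrete, here is a minimal sketch of a client that launches this server over stdio and enumerates its tools, using the MCP TypeScript SDK. The package name and environment variables come from the configuration sections below; the client name "docs-client" is made up here, and the exact SDK call names may differ between SDK versions.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the published server via npx and talk to it over stdio.
  // Pass the parent environment plus the three required variables explicitly
  // (see Server Configuration below); the empty-string fallbacks are placeholders.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "@hannesrudolph/mcp-ragdocs"],
    env: {
      ...process.env,
      OPENAI_API_KEY: process.env.OPENAI_API_KEY ?? "",
      QDRANT_URL: process.env.QDRANT_URL ?? "",
      QDRANT_API_KEY: process.env.QDRANT_API_KEY ?? "",
    } as Record<string, string>,
  });

  const client = new Client({ name: "docs-client", version: "0.1.0" }, { capabilities: {} });
  await client.connect(transport);

  // Should print the tool names listed above.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));
}

main().catch(console.error);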

Server Configuration

Describes the environment variables required to run the server.

  • QDRANT_URL (required): URL of your Qdrant vector database instance
  • OPENAI_API_KEY (required): Your OpenAI API key for embeddings generation
  • QDRANT_API_KEY (required): API key for authenticating with Qdrant
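All three variables are required and none has a default. As an illustration only (not the server's actual startup code), the requirement amounts to a check along these lines:

// Illustrative only: every variable below must be set before the server can run.
for (const name of ["QDRANT_URL", "OPENAI_API_KEY", "QDRANT_API_KEY"]) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}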
README.md

RAG Documentation MCP Server

An MCP server implementation that provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context.

Features

  • Vector-based documentation search and retrieval
  • Support for multiple documentation sources
  • Semantic search capabilities
  • Automated documentation processing
  • Real-time context augmentation for LLMs

Tools

search_documentation

Search through stored documentation using natural language queries. Returns matching excerpts with context, ranked by relevance.

Inputs:

  • query (string): The text to search for in the documentation. Can be a natural language query, specific terms, or code snippets.
  • limit (number, optional): Maximum number of results to return (1-20, default: 5). Higher limits provide more comprehensive results but may take longer to process.
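A hedged sketch of calling this tool from your own code with the MCP TypeScript SDK: the helper name searchDocs is made up here, and client is assumed to be a Client already connected to this server, as in the connection sketch earlier on this page.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper around the search_documentation tool.
// `limit` mirrors the documented 1-20 range (default 5).
export async function searchDocs(client: Client, query: string, limit = 5) {
  return client.callTool({
    name: "search_documentation",
    arguments: { query, limit },
  });
}

// Example: searchDocs(client, "how do I configure request logging?", 10);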

list_sources

List all documentation sources currently stored in the system. Returns a comprehensive list of all indexed documentation including source URLs, titles, and last update times. Use this to understand what documentation is available for searching or to verify if specific sources have been indexed.

extract_urls

Extract and analyze all URLs from a given web page. This tool crawls the specified webpage, identifies all hyperlinks, and optionally adds them to the processing queue.

Inputs:

  • url (string): The complete URL of the webpage to analyze (must include protocol, e.g., https://). The page must be publicly accessible.
  • add_to_queue (boolean, optional): If true, automatically add extracted URLs to the processing queue for later indexing. Use with caution on large sites to avoid excessive queuing.
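The same pattern applies here; extractUrls is a hypothetical wrapper and client is again assumed to be a connected Client instance.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper around the extract_urls tool.
// Set addToQueue to true to queue every discovered link for later indexing.
export async function extractUrls(client: Client, url: string, addToQueue = false) {
  return client.callTool({
    name: "extract_urls",
    arguments: { url, add_to_queue: addToQueue },
  });
}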

remove_documentation

Remove specific documentation sources from the system by their URLs. The removal is permanent and will affect future search results.

Inputs:

  • urls (string[]): Array of URLs to remove from the database. Each URL must exactly match the URL used when the documentation was added.
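A corresponding sketch for removal; removeDocs is hypothetical and client is assumed to be connected as before. Note that each URL must match the stored one exactly.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper around the remove_documentation tool.
// URLs must exactly match those used when the documentation was added.
export async function removeDocs(client: Client, urls: string[]) {
  return client.callTool({
    name: "remove_documentation",
    arguments: { urls },
  });
}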

list_queue

List all URLs currently waiting in the documentation processing queue. Shows pending documentation sources that will be processed when run_queue is called. Use this to monitor queue status, verify URLs were added correctly, or check processing backlog.

run_queue

Process and index all URLs currently in the documentation queue. Each URL is processed sequentially, with proper error handling and retry logic. Progress updates are provided as processing occurs. Long-running operations will process until the queue is empty or an unrecoverable error occurs.

clear_queue

Remove all pending URLs from the documentation processing queue. Use this to reset the queue when you want to start fresh, remove unwanted URLs, or cancel pending processing. This operation is immediate and permanent - URLs will need to be re-added if you want to process them later.
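The queue tools are usually combined into a discover, inspect, process loop. The sketch below strings them together with the tool names documented above; indexSite is a hypothetical helper and client is assumed to be a connected Client instance.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Sketch of a typical queue workflow for indexing a documentation site.
export async function indexSite(client: Client, startUrl: string) {
  // 1. Discover links on the start page and add them to the queue.
  await client.callTool({
    name: "extract_urls",
    arguments: { url: startUrl, add_to_queue: true },
  });

  // 2. Inspect what is pending before kicking off processing.
  const pending = await client.callTool({ name: "list_queue", arguments: {} });
  console.log(pending);

  // 3. Process the queue; this can run for a while on large sites.
  await client.callTool({ name: "run_queue", arguments: {} });

  // To discard pending URLs instead of processing them (permanent):
  // await client.callTool({ name: "clear_queue", arguments: {} });
}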

Usage

The RAG Documentation tool is designed for:

  • Enhancing AI responses with relevant documentation
  • Building documentation-aware AI assistants
  • Creating context-aware tooling for developers
  • Implementing semantic documentation search
  • Augmenting existing knowledge bases

Configuration

Usage with Claude Desktop

Add this to your claude_desktop_config.json:

{ "mcpServers": { "rag-docs": { "command": "npx", "args": [ "-y", "@hannesrudolph/mcp-ragdocs" ], "env": { "OPENAI_API_KEY": "", "QDRANT_URL": "", "QDRANT_API_KEY": "" } } } }

You'll need to provide values for the following environment variables:

  • OPENAI_API_KEY: Your OpenAI API key for embeddings generation
  • QDRANT_URL: URL of your Qdrant vector database instance
  • QDRANT_API_KEY: API key for authenticating with Qdrant

License

This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.

Acknowledgments

This project is a fork of qpd-v/mcp-ragdocs, originally developed by qpd-v. The original project provided the foundation for this implementation.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues with dependencies of the server.
  • Extract server characteristics such as tools, resources, prompts, and required parameters.

Our directory badge helps users quickly assess whether the MCP server is safe to use, what capabilities it provides, and how to install it.

Copy the following code to your README.md file:

Alternative MCP servers

  • Security: A · License: F · Quality: A
    Helps AI read GitHub repository structure and important files. Want to quickly understand what a repo is about? Prompt it with "read https://github.com/adhikasp/mcp-git-ingest and determine how the code technically works".
  • Security: - · License: F · Quality: -
    Provides intelligent summarization capabilities through a clean, extensible architecture. Mainly built to address the problems AI agents face on big repositories, where large files can eat up the context window.
  • Security: A · License: A (MIT) · Quality: A
    A Model Context Protocol server for document format conversion using pandoc. This server provides tools to transform content between different document formats while preserving formatting and structure.
  • Security: A · License: A (Apache-2.0) · Quality: A
    Implementation of an MCP server for the RAG Web Browser Actor. This Actor serves as a web browser for large language models (LLMs) and RAG pipelines, similar to a web search in ChatGPT.