Why this server?
Provides structured access to markdown documentation, which aligns with the 'docs' part of the query. It supports npm packages, Go modules, and PyPI packages.
Why this server?
An MCP-based documentation server for various development frameworks that provides multi-threaded document crawling, local document loading, keyword search, and document detail retrieval, fitting the 'docs' and 'local' parts of the query.
Why this server?
Enables Claude to search and access documentation from popular libraries like LangChain, LlamaIndex, and OpenAI directly within conversations; this aligns with searching for 'docs'.
Why this server?
Extracts and transforms webpage content into clean, LLM-optimized Markdown. Returns the article title, main content, excerpt, byline, and site name, making it a good building block for a RAG pipeline.
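The core idea behind this kind of server can be sketched in a few lines: strip a webpage down to LLM-friendly text. A real implementation would use a readability-style extractor to recover the byline, excerpt, and site name; this stdlib-only sketch (all names are illustrative, not the server's API) just pulls the `<title>` and visible text.

```python
# Minimal sketch: reduce HTML to a Markdown-ish plain-text form.
# Hypothetical helper, not the server's actual API.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.skip_depth = 0      # depth inside <script>/<style>
        self.title = ""
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
        elif tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.in_title:
            self.title += data
        elif not self.skip_depth and data.strip():
            self.chunks.append(data.strip())

def page_to_markdown(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    body = "\n\n".join(parser.chunks)
    return f"# {parser.title}\n\n{body}" if parser.title else body

html = ("<html><head><title>Hello</title><style>p{color:red}</style></head>"
        "<body><p>World</p></body></html>")
print(page_to_markdown(html))  # → "# Hello" then "World"
```

The output of a step like this is what would be chunked and embedded downstream in a RAG pipeline.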
Why this server?
A Model Context Protocol server that fetches up-to-date, version-specific documentation and code examples from libraries directly into LLM prompts.
Why this server?
A Model Context Protocol (MCP) server that helps large language models index, search, and analyze code repositories with minimal setup, which is helpful for building a RAG pipeline.
Why this server?
SourceSage is an MCP (Model Context Protocol) server that efficiently memorizes key aspects of a codebase—logic, style, and standards—while allowing dynamic updates and fast retrieval.
Why this server?
A simple temporary MCP server for RAGFlow that bridges the gap until an official version is released; relevant here because it presupposes an existing RAG pipeline.
Why this server?
A server that provides data retrieval capabilities powered by the Chroma embedding database, enabling AI models to create collections over generated data and user inputs, and retrieve that data using vector search, full-text search, and metadata filtering. Well suited to implementing local RAG.
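To make the retrieval part of this last entry concrete, here is a toy sketch of collection-based vector search with metadata filtering. The real server is backed by Chroma's embedding database; this stand-in (all names hypothetical) uses bag-of-words counts and cosine similarity in place of learned embeddings, purely to illustrate the query flow.

```python
# Toy illustration of local-RAG retrieval: vector search + metadata filter.
# Not the server's API; Chroma would do the embedding and indexing for real.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": word-count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class Collection:
    """Stores documents with metadata; queries rank by vector similarity
    after optionally filtering on metadata fields."""
    def __init__(self):
        self.docs = []  # list of (text, vector, metadata)

    def add(self, text, metadata=None):
        self.docs.append((text, embed(text), metadata or {}))

    def query(self, text, n_results=2, where=None):
        qv = embed(text)
        hits = [
            (cosine(qv, vec), doc)
            for doc, vec, meta in self.docs
            if not where or all(meta.get(k) == v for k, v in where.items())
        ]
        hits.sort(key=lambda h: h[0], reverse=True)
        return [doc for _, doc in hits[:n_results]]

col = Collection()
col.add("install the package with pip", {"topic": "setup"})
col.add("query the vector database", {"topic": "search"})
print(col.query("how do I install", n_results=1))
# → ["install the package with pip"]
```

The same three-step shape (create a collection, add documents with metadata, query with a filter) is what a local RAG pipeline built on this server would exercise, with real embeddings doing the ranking.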