Why this server?
This tool crawls websites, generates Markdown documentation, and makes that documentation searchable via a Model Context Protocol (MCP) server, which could be used for RAG over your project documentation.
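As a rough illustration of how an MCP client could query a documentation server like this for RAG, here is a minimal sketch using the MCP Python SDK over stdio. The launch command (`docs-mcp-server`) and the tool name and arguments (`search_docs`, `query`) are assumptions for illustration, not this server's actual interface.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Hypothetical launch command for a docs-crawling MCP server.
    params = StdioServerParameters(command="docs-mcp-server", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover what the server actually exposes before calling anything.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Hypothetical search tool; the results would feed an LLM prompt.
            result = await session.call_tool(
                "search_docs",
                arguments={"query": "how do I configure authentication?"},
            )
            print(result.content)


asyncio.run(main())
```

The retrieved passages would then be inserted into the model's prompt, which is the retrieval half of RAG.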
Why this server?
Ingests content from various sources, including websites and documents, converts it to Markdown, and makes it searchable via MCP, which makes it suitable for RAG.
Why this server?
Transforms static book collections into interactive knowledge repositories, enabling RAG over book contents. This could be useful if the project's source material is a book.
Why this server?
An embeddings database for semantic search, LLM orchestration, and language model workflows. All functionality can be served via its API, and the API supports MCP, enabling RAG.
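The core mechanic behind such an embeddings database is vector similarity search. The sketch below uses a toy hashing-based embedding purely so the example runs without an external model; a real deployment would swap `embed` for the database's actual embedding backend.

```python
import hashlib
import math


def embed(text: str, dims: int = 256) -> list[float]:
    """Toy embedding: hash tokens into a fixed-size, L2-normalised vector.
    Stands in for a real embedding model."""
    vec = [0.0] * dims
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def search(query: str, docs: list[str], top_k: int = 3) -> list[tuple[float, str]]:
    """Rank documents by cosine similarity to the query (vectors are unit length)."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, embed(doc))), doc) for doc in docs]
    return sorted(scored, reverse=True)[:top_k]


docs = [
    "How to configure the database connection",
    "Release notes for version 2.0",
    "Tutorial: indexing Markdown files for search",
]
for score, doc in search("index markdown for search", docs):
    print(f"{score:.3f}  {doc}")
```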
Why this server?
Obsidian vault connector for Claude Desktop - enables reading and writing Markdown notes using the Model Context Protocol (MCP), which could enable RAG.
Why this server?
A Model Context Protocol server that provides AI assistants with structured access to your Logseq knowledge graph, enabling retrieval, search, analysis, and creation of content within your personal knowledge base, making it suitable for RAG.
Why this server?
A server that implements Retrieval-Augmented Generation using GroundX and OpenAI, enabling semantic search and document retrieval with Modern Context Processing for enhanced context handling.
Why this server?
Enables AI models to perform file system operations (reading, creating, and listing files) on a local file system through a standardized Model Context Protocol interface, which can be used to access project files for RAG.
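To show where a file system MCP server fits into RAG, here is a minimal, generic sketch of the indexing side: walking a directory, reading files, and splitting them into overlapping chunks ready for embedding. The directory layout and chunk sizes are illustrative assumptions, not anything this server prescribes.

```python
from pathlib import Path


def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks sized for an embedding model."""
    chunks, start = [], 0
    step = size - overlap
    while start < len(text):
        chunks.append(text[start:start + size])
        start += step
    return chunks


def collect_chunks(root: str, pattern: str = "**/*.md") -> list[dict]:
    """Read local files (the kind of access a file system MCP server exposes)
    and turn them into chunk records for an embedding index."""
    records = []
    for path in Path(root).glob(pattern):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for i, chunk in enumerate(chunk_text(text)):
            records.append({"source": str(path), "chunk": i, "text": chunk})
    return records


if __name__ == "__main__":
    for record in collect_chunks("docs")[:3]:
        print(record["source"], record["chunk"], record["text"][:60])
```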
Why this server?
A Model Context Protocol server that provides read-only access to PostgreSQL databases, enabling LLMs to inspect database schemas and execute read-only queries, providing a way to access project data for RAG.
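A read-only database server like this typically validates incoming SQL before executing it. The sketch below shows that pattern generically, using sqlite3 as a stand-in driver so the example is self-contained; the real server targets PostgreSQL and would also open its transactions in READ ONLY mode.

```python
import re
import sqlite3  # stand-in for a PostgreSQL driver in this sketch

READ_ONLY = re.compile(r"^\s*(select|with)\b", re.IGNORECASE)


def run_read_only_query(conn, sql: str, params: tuple = ()) -> list[tuple]:
    """Execute a query only if it looks read-only; reject anything else."""
    if not READ_ONLY.match(sql):
        raise ValueError("Only read-only SELECT/WITH queries are allowed")
    return conn.execute(sql, params).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE projects (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO projects VALUES (1, 'docs-site')")
    print(run_read_only_query(conn, "SELECT * FROM projects"))
```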
Why this server?
Connects Bear Notes to AI assistants using semantic search and RAG (Retrieval-Augmented Generation), allowing AI systems to access and understand your personal knowledge base.
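Once notes are retrieved, the generation half of RAG is mostly prompt assembly. Here is a minimal, model-agnostic sketch; the note fields (`title`, `excerpt`) are an assumed shape for illustration, not this server's schema.

```python
def build_rag_prompt(question: str, notes: list[dict]) -> str:
    """Assemble retrieved notes into a grounded prompt for a chat model."""
    context = "\n\n".join(
        f"[{i + 1}] {note['title']}\n{note['excerpt']}" for i, note in enumerate(notes)
    )
    return (
        "Answer the question using only the notes below. "
        "Cite note numbers in your answer.\n\n"
        f"Notes:\n{context}\n\nQuestion: {question}"
    )


notes = [
    {"title": "Meeting 2024-03-01", "excerpt": "Decided to ship the beta in April."},
    {"title": "Roadmap", "excerpt": "The beta is followed by a public launch in June."},
]
print(build_rag_prompt("When does the beta ship?", notes))
```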