Why this server?
Integrates LLMs with RAG data sources using Sionic AI's Storm Platform, essential for connecting to various document repositories.
Why this server?
Facilitates integration of PrivateGPT, enabling chat functionalities and secure management of knowledge sources for local RAG.
Why this server?
Enables semantic search and RAG over Apple Notes, allowing AI assistants to reference your notes during conversations.
Why this server?
Provides RAG capabilities for semantic document search using Qdrant vector database and Ollama/OpenAI embeddings.
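The general pattern behind this kind of server can be illustrated with a short sketch: embed documents, store the vectors in Qdrant, then retrieve by embedding the query. The collection name, embedding model, and sample documents below are placeholders, not the server's actual code.

```python
# Sketch of semantic search over Qdrant with OpenAI embeddings (illustrative only).
from openai import OpenAI
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

openai_client = OpenAI()                     # reads OPENAI_API_KEY from the environment
qdrant = QdrantClient(url="http://localhost:6333")

def embed(texts):
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

docs = ["Qdrant stores vectors.", "RAG retrieves context before generation."]
vectors = embed(docs)

# Create a collection sized to the embedding dimension and upsert the documents.
qdrant.recreate_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=len(vectors[0]), distance=Distance.COSINE),
)
qdrant.upsert(
    collection_name="docs",
    points=[PointStruct(id=i, vector=v, payload={"text": t})
            for i, (v, t) in enumerate(zip(vectors, docs))],
)

# Retrieve the closest documents to a natural-language query.
hits = qdrant.search(collection_name="docs", query_vector=embed(["What is RAG?"])[0], limit=3)
for hit in hits:
    print(hit.score, hit.payload["text"])
```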
Why this server?
Provides an API to query Large Language Models using context from local files, supporting various models and file types.
Why this server?
Provides semantic memory and persistent storage for Claude, leveraging ChromaDB and sentence transformers.
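A minimal sketch of persistent semantic memory with ChromaDB is shown below; the storage path, collection name, and stored "memories" are assumptions for illustration, not the server's actual schema. ChromaDB's default embedding function already uses a sentence-transformers model (all-MiniLM-L6-v2), so no separate embedding step is needed.

```python
# Sketch of persistent semantic memory with ChromaDB (illustrative only).
import chromadb

client = chromadb.PersistentClient(path="./claude_memory")   # persists to disk across sessions
memory = client.get_or_create_collection("memories")

# Store facts; ChromaDB embeds them with its default sentence-transformers model.
memory.add(
    ids=["m1", "m2"],
    documents=[
        "The user prefers concise answers.",
        "The current project uses Qdrant for document search.",
    ],
)

# Recall the most relevant memory for a new question.
recall = memory.query(query_texts=["How should answers be phrased?"], n_results=1)
print(recall["documents"][0][0])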
Why this server?
Provides advanced retrieval, Private Deep Research, Anything-to-Markdown file extraction, and text chunking through the Vectorize platform.
Why this server?
A Python-based MCP server that crawls websites to extract and save content as markdown files, with features for mapping website structure and links.
Why this server?
An MCP tool that analyzes web content and adds it to your knowledge base, storing content as Markdown files for easy viewing in tools like Obsidian.
Why this server?
Enables LLMs to interact directly with documents stored on disk through agentic RAG and hybrid search in LanceDB.
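The hybrid-search half of that description can be sketched with LanceDB's Python API, combining vector similarity with a full-text (BM25) index. The table name, embedding model, and documents below are assumptions for illustration, not this server's actual setup.

```python
# Sketch of hybrid (vector + full-text) search in LanceDB (illustrative only).
import lancedb
from lancedb.embeddings import get_registry
from lancedb.pydantic import LanceModel, Vector

embedder = get_registry().get("sentence-transformers").create(name="all-MiniLM-L6-v2")

class Doc(LanceModel):
    text: str = embedder.SourceField()                  # embedded automatically on insert
    vector: Vector(embedder.ndims()) = embedder.VectorField()

db = lancedb.connect("./lancedb")
table = db.create_table("docs", schema=Doc, mode="overwrite")
table.add([
    {"text": "LanceDB supports hybrid search."},
    {"text": "Agentic RAG lets the model decide when to retrieve."},
])

table.create_fts_index("text")                          # keyword index for the BM25 half
results = table.search("when does the model retrieve?", query_type="hybrid").limit(3).to_list()
for row in results:
    print(row["text"])
```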