Why this server?
Enables LLMs to perform semantic search and document management using ChromaDB, supporting natural-language queries with configurable similarity metrics for retrieval-augmented generation (RAG) applications.
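As a rough sketch of the flow this describes, here is a minimal semantic search with the chromadb Python client; the collection name, documents, and query are placeholders, not this server's actual internals.

```python
import chromadb

# Persistent local store; the path is a placeholder.
client = chromadb.PersistentClient(path="./chroma_db")

# ChromaDB supports "l2", "ip", and "cosine" similarity via collection metadata.
collection = client.get_or_create_collection(
    name="docs",
    metadata={"hnsw:space": "cosine"},
)

collection.add(
    ids=["doc1", "doc2"],
    documents=[
        "ChromaDB stores embeddings for semantic retrieval.",
        "RAG pipelines fetch relevant context before generation.",
    ],
)

# Natural-language query; ChromaDB embeds it with its default embedding
# function and returns the nearest documents.
results = collection.query(query_texts=["how does retrieval work?"], n_results=2)
print(results["documents"])
```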
Why this server?
A high-performance, persistent memory system for the Model Context Protocol (MCP) providing vector search capabilities and efficient knowledge storage using libSQL as the backing store.
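A rough sketch of how vector memories might be stored and queried on libSQL, assuming libSQL's native vector extension (F32_BLOB, vector32, vector_distance_cos) and the libsql-client package; the database URL, auth token, table, and embedding values are all placeholder assumptions.

```python
import libsql_client

# Placeholder URL; vector functions require a libSQL build with the
# vector extension (e.g. a Turso/libSQL server database).
client = libsql_client.create_client_sync(
    "libsql://example-db.turso.io", auth_token="YOUR_TOKEN"
)

client.execute(
    "CREATE TABLE IF NOT EXISTS memories "
    "(id INTEGER PRIMARY KEY, content TEXT, embedding F32_BLOB(4))"
)
client.execute(
    "INSERT INTO memories (content, embedding) "
    "VALUES ('first note', vector32('[0.1, 0.2, 0.3, 0.4]'))"
)

# Rank stored memories by cosine distance to a query embedding.
result = client.execute(
    "SELECT content, vector_distance_cos(embedding, vector32('[0.1, 0.2, 0.3, 0.4]')) "
    "FROM memories ORDER BY 2 LIMIT 5"
)
for row in result.rows:
    print(row[0], row[1])

client.close()
```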
Why this server?
Provides Pinecone integration with vector search capabilities.
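A minimal sketch of a Pinecone vector search using the official pinecone Python SDK; the index name is a placeholder, and the toy vectors assume a matching index dimension.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # placeholder index name

# Upsert a couple of vectors with metadata.
index.upsert(vectors=[
    {"id": "a", "values": [0.1, 0.2, 0.3], "metadata": {"topic": "intro"}},
    {"id": "b", "values": [0.3, 0.2, 0.1], "metadata": {"topic": "setup"}},
])

# Query the nearest neighbors of a query embedding.
results = index.query(vector=[0.1, 0.2, 0.25], top_k=2, include_metadata=True)
for match in results.matches:
    print(match.id, match.score, match.metadata)
```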
Why this server?
An MCP server designed to be portable, local, and convenient, supporting semantic and graph-based retrieval over a txtai "all-in-one" embeddings database. Any txtai embeddings database in tar.gz form can be loaded.
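A small sketch of loading a txtai embeddings archive and querying it, in the spirit of what this server wraps; "index.tar.gz" is a placeholder path.

```python
from txtai.embeddings import Embeddings

embeddings = Embeddings()
embeddings.load("index.tar.gz")  # txtai reads compressed archives directly

# Semantic search returns (id, score) tuples by default.
for uid, score in embeddings.search("natural language query", 3):
    print(uid, score)
```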
Why this server?
Provides RAG capabilities for semantic document search using Qdrant vector database and Ollama/OpenAI embeddings, allowing users to add, search, list, and delete documentation with metadata support.
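A rough sketch of the add/search flow described above, assuming the qdrant-client and ollama Python packages; the collection name, embedding model, vector size, and payload fields are placeholder assumptions.

```python
import ollama
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")
client.recreate_collection(
    collection_name="documentation",
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
)

def embed(text: str) -> list[float]:
    # nomic-embed-text produces 768-dimensional embeddings.
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

# Add a document with metadata.
client.upsert(
    collection_name="documentation",
    points=[PointStruct(id=1, vector=embed("How to configure the server."),
                        payload={"source": "docs/setup.md"})],
)

# Semantic search over stored documents.
hits = client.search(
    collection_name="documentation",
    query_vector=embed("server configuration"),
    limit=3,
)
for hit in hits:
    print(hit.score, hit.payload)
```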
Why this server?
A Model Context Protocol (MCP) server that enables LLMs to interact directly with the documents they have on disk through agentic RAG and hybrid search in LanceDB. Ask LLMs questions about the dataset as a whole or about specific documents.
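A minimal sketch of hybrid (vector plus full-text) search in LanceDB, illustrating the capability this server builds on; the table, schema, and embedding model are placeholder assumptions, not the server's actual configuration.

```python
import lancedb
from lancedb.embeddings import get_registry
from lancedb.pydantic import LanceModel, Vector

# Register an embedding function so LanceDB embeds text automatically.
model = get_registry().get("sentence-transformers").create(name="BAAI/bge-small-en-v1.5")

class Doc(LanceModel):
    text: str = model.SourceField()
    vector: Vector(model.ndims()) = model.VectorField()

db = lancedb.connect("./lancedb")
table = db.create_table("docs", schema=Doc, mode="overwrite")
table.add([
    {"text": "LanceDB supports hybrid search."},
    {"text": "Agentic RAG lets LLMs query documents on disk."},
])

# Hybrid search combines a full-text index with the vector index.
table.create_fts_index("text", replace=True)
results = table.search("how does hybrid search work?", query_type="hybrid").limit(3).to_list()
for r in results:
    print(r["text"])
```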