Why this server?
This server provides RAG capabilities for semantic document search using the Qdrant vector database and Ollama/OpenAI embeddings.
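A minimal sketch of the flow such a server wraps, assuming a local Ollama instance serving an embedding model and a Qdrant collection named `docs` (both names are illustrative, not taken from this server's code):

```python
import requests
from qdrant_client import QdrantClient

# Embed the query with a local Ollama model (model name is illustrative).
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "How do I configure retries?"},
)
query_vector = resp.json()["embedding"]

# Search the Qdrant collection for the nearest document chunks.
client = QdrantClient(url="http://localhost:6333")
hits = client.search(collection_name="docs", query_vector=query_vector, limit=5)
for hit in hits:
    print(hit.score, hit.payload)
```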
Why this server?
This server enables AI agents to perform Retrieval-Augmented Generation by querying a FAISS vector database containing Sui Move language documents.
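As a rough sketch of what querying such a FAISS index looks like (the index here is populated with random vectors purely to show the search call; in the real server it would hold embedded Sui Move documentation):

```python
import numpy as np
import faiss

dim = 384  # embedding dimensionality (illustrative)
index = faiss.IndexFlatL2(dim)

# Stand-in for an index built from embedded Sui Move docs.
index.add(np.random.rand(1000, dim).astype("float32"))

query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)  # top-5 nearest chunks
print(ids[0], distances[0])
```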
Why this server?
This is a Model Context Protocol server providing vector database capabilities through Chroma, enabling semantic document search, metadata filtering, and document management with persistent storage.
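Underneath, the Chroma usage looks roughly like this (a sketch assuming the `chromadb` Python client with on-disk persistence; the path, collection, and documents are illustrative):

```python
import chromadb

# Persistent storage: data survives restarts under the given path.
client = chromadb.PersistentClient(path="./chroma-data")
collection = client.get_or_create_collection("docs")

# Document management: add documents with metadata; Chroma embeds them
# with its default embedding function unless another one is configured.
collection.add(
    ids=["doc-1", "doc-2"],
    documents=["Qdrant is a vector database.", "Chroma stores embeddings."],
    metadatas=[{"source": "notes"}, {"source": "wiki"}],
)

# Semantic search with a metadata filter.
results = collection.query(
    query_texts=["what stores vectors?"],
    n_results=2,
    where={"source": "wiki"},
)
print(results["documents"])
```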
Why this server?
This server provides data retrieval capabilities powered by the Chroma embedding database, enabling AI models to create collections over generated data and user inputs, and to retrieve that data using vector search, full-text search, and metadata filtering.
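The three retrieval modes map onto distinct Chroma query parameters; a sketch (collection name, document, and filters are illustrative):

```python
import chromadb

client = chromadb.Client()
collection = client.get_or_create_collection("agent-memory")
collection.add(
    ids=["m1"],
    documents=["User prefers dark mode"],
    metadatas=[{"kind": "preference"}],
)

# Vector search: nearest neighbours of the embedded query text.
collection.query(query_texts=["UI settings"], n_results=3)

# Metadata filtering: restrict candidates before similarity ranking.
collection.query(query_texts=["UI settings"], n_results=3, where={"kind": "preference"})

# Full-text search: substring match on the raw document text.
collection.get(where_document={"$contains": "dark mode"})
```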
Why this server?
This is an example of how to create an MCP server for Qdrant, a vector search engine.
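In outline, such a server might look like this with the official Python MCP SDK (a sketch; the tool name, signature, and collection are illustrative rather than taken from the actual example, which a real server would likely extend by embedding text queries before searching):

```python
from mcp.server.fastmcp import FastMCP
from qdrant_client import QdrantClient

mcp = FastMCP("qdrant-search")
qdrant = QdrantClient(url="http://localhost:6333")

@mcp.tool()
def find(query_vector: list[float], limit: int = 5) -> list[dict]:
    """Return the payloads of the nearest points in the 'docs' collection."""
    hits = qdrant.search(collection_name="docs", query_vector=query_vector, limit=limit)
    return [{"score": h.score, "payload": h.payload} for h in hits]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```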
Why this server?
A Model Context Protocol (MCP) server for semantic search and memory mining based on PubTator3, providing convenient access through the MCP interface.
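PubTator3 is NCBI's biomedical literature annotation service; a sketch of the kind of REST call such a server wraps (the endpoint path and response fields are assumptions based on NCBI's published API, not taken from this server's code):

```python
import requests

# Assumed PubTator3 search endpoint; verify against NCBI's API docs.
url = "https://www.ncbi.nlm.nih.gov/research/pubtator3-api/search/"
resp = requests.get(url, params={"text": "BRCA1 breast cancer"})
for result in resp.json().get("results", []):
    print(result.get("pmid"), result.get("title"))
```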
Why this server?
This server enables semantic search and RAG (Retrieval Augmented Generation) over your Apple Notes.
Why this server?
An MCP server that provides tools to retrieve and process documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context.
Why this server?
A Node.js implementation for vector search using LanceDB and Ollama's embedding model.
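The server itself is Node.js, but the retrieval flow it implements looks roughly like this, sketched in Python for consistency with the examples above (table name, sample data, and Ollama model are illustrative):

```python
import lancedb
import requests

def embed(text: str) -> list[float]:
    # Ollama's local embeddings endpoint (model name is illustrative).
    r = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
    )
    return r.json()["embedding"]

db = lancedb.connect("./lancedb-data")
table = db.create_table(
    "docs",
    data=[{"vector": embed("LanceDB is an embedded vector database."),
           "text": "LanceDB is an embedded vector database."}],
    mode="overwrite",
)
hits = table.search(embed("what stores vectors?")).limit(3).to_list()
print([h["text"] for h in hits])
```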
Why this server?
Enables semantic search across multiple Qdrant vector database collections, supporting multi-query capability and providing semantically relevant document retrieval with configurable result counts.
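Multi-collection, multi-query search reduces to nested lookups over the single-collection search primitive; a sketch with `qdrant-client` (collection names, vectors, and the result count are illustrative):

```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

# Illustrative inputs: several pre-embedded queries, several collections,
# and a per-call limit — the "configurable result counts" above.
query_vectors = {"q1": [0.1] * 384, "q2": [0.2] * 384}
collections = ["docs", "tickets"]
limit = 3

results = {}
for name, vector in query_vectors.items():
    for collection in collections:
        hits = client.search(collection_name=collection, query_vector=vector, limit=limit)
        results[(name, collection)] = [(h.score, h.payload) for h in hits]
```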