Why this server?
An MCP server for the RAG Web Browser Actor, which serves as a web browser for large language models (LLMs) and RAG pipelines.
Why this server?
A Model Context Protocol server that enables semantic search and retrieval of documentation using a vector database (Qdrant), ideal for RAG applications.
Why this server?
Provides RAG capabilities for semantic document search using Qdrant vector database and Ollama/OpenAI embeddings.
Why this server?
Enables fetching relevant content and embeddings from Supavec via the Model Context Protocol, allowing AI assistants like Claude to access vector search capabilities for RAG.
Why this server?
An open protocol server that implements Anthropic's Model Context Protocol to enable seamless integration between LLM applications and RAG data sources using Sionic AI's Storm Platform.
Why this server?
Enables semantic search and RAG (Retrieval Augmented Generation) over your Apple Notes.
Why this server?
An MCP server that enables AI models to retrieve information from Ragie's knowledge base through a simple 'retrieve' tool.
Why this server?
A "primitive" RAG-like web search Model Context Protocol server that runs locally. ✨ no APIs ✨
Why this server?
A Model Context Protocol (MCP) server that helps large language models index, search, and analyze code repositories with minimal setup.
Why this server?
An MCP server implementation that provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context. Uses Ollama or OpenAI to generate embeddings.