Why this server?
Provides RAG capabilities for semantic document search using the Qdrant vector database and Ollama/OpenAI embeddings, allowing users to add, search, list, and delete documentation with metadata support. It directly addresses the user's need for a RAG system.
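A minimal sketch of the add/search flow such a server exposes, assuming a local Qdrant instance, the qdrant-client and ollama Python packages, and the nomic-embed-text embedding model; the collection name "docs" and payload shape are illustrative assumptions, not details from this listing:

```python
# Sketch: semantic add/search against Qdrant using Ollama embeddings.
import ollama
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")

def embed(text: str) -> list[float]:
    # nomic-embed-text produces 768-dimensional vectors
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

if not client.collection_exists("docs"):
    client.create_collection(
        collection_name="docs",
        vectors_config=VectorParams(size=768, distance=Distance.COSINE),
    )

# "Add" with metadata: the document text and its source live in the payload
client.upsert(
    collection_name="docs",
    points=[PointStruct(
        id=1,
        vector=embed("Qdrant stores embedding vectors."),
        payload={"source": "manual", "text": "Qdrant stores embedding vectors."},
    )],
)

# "Search": embed the query and fetch the nearest payloads
hits = client.search(collection_name="docs",
                     query_vector=embed("where are vectors stored?"), limit=3)
for hit in hits:
    print(hit.score, hit.payload["text"])
```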
Why this server?
A lightweight bridge that wraps OpenAI's built-in tools (like web search and code interpreter) as Model Context Protocol servers, enabling their use with Claude and other MCP-compatible models. Relevant for RAG if OpenAI's retrieval is exposed as one of those tools.
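A rough sketch of what such a bridge does, assuming the official mcp Python SDK's FastMCP helper and the OpenAI Responses API with its hosted web_search_preview tool type; the server name and model are illustrative assumptions:

```python
# Sketch: expose an OpenAI built-in tool (web search) as an MCP tool.
# Assumes the `mcp` and `openai` packages and OPENAI_API_KEY in the environment.
from mcp.server.fastmcp import FastMCP
from openai import OpenAI

mcp = FastMCP("openai-tools-bridge")  # hypothetical server name
client = OpenAI()

@mcp.tool()
def web_search(query: str) -> str:
    """Run OpenAI's hosted web search and return the answer text."""
    resp = client.responses.create(
        model="gpt-4o-mini",
        tools=[{"type": "web_search_preview"}],
        input=query,
    )
    return resp.output_text

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; an MCP client such as Claude connects here
```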
Why this server?
A bridge that integrates Ollama's local LLM capabilities into MCP-powered applications, allowing users to manage and run AI models locally with full API coverage. This enables RAG if the underlying models support it.
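For illustration, the kind of model-management and generation calls such a bridge surfaces, assuming the ollama Python client, a local Ollama daemon, and a pulled llama3 model (the model name is an assumption):

```python
# Sketch: local model operations an Ollama bridge typically wraps.
import ollama

# List locally available models (field name may be "name" in older client versions)
for model in ollama.list()["models"]:
    print(model["model"])

# Pull a model if it is missing
ollama.pull("llama3")

# Run a chat completion against the local model
reply = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize this document in one line."}],
)
print(reply["message"]["content"])
```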
Why this server?
An open-standard server implementation that enables AI assistants to directly access APIs and services through the Model Context Protocol, built on Cloudflare Workers for scalability. Could be adapted to front OpenAI's API.
Why this server?
A Model Context Protocol server that enables LLMs to interact with Elasticsearch clusters, allowing them to manage indices and execute search queries using natural language. Elasticsearch can be used as a datastore for RAG.
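As an illustration of the underlying calls, a minimal index/search round-trip with the official elasticsearch Python client (8.x); the index name and document shape are assumptions:

```python
# Sketch: index a document and run a full-text query against Elasticsearch.
# Assumes the `elasticsearch` package and a cluster on localhost:9200.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index a document into a hypothetical "docs" index
es.index(index="docs", id="1",
         document={"title": "RAG notes", "text": "Elasticsearch as a retrieval backend."})
es.indices.refresh(index="docs")

# Full-text search; an MCP server would build this query from natural language
results = es.search(index="docs", query={"match": {"text": "retrieval backend"}})
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```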
Why this server?
A Model Context Protocol server that provides Claude and other LLMs with read-only access to Hugging Face Hub APIs, enabling interaction with models, datasets, spaces, papers, and collections through natural language. Useful if your RAG components are hosted on HF.
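A small sketch of the read-only Hub lookups such a server maps to, using the huggingface_hub client; the search terms are just examples:

```python
# Sketch: read-only Hugging Face Hub queries of the kind the server wraps.
# Assumes the `huggingface_hub` package; no token is needed for public data.
from huggingface_hub import HfApi

api = HfApi()

# Find embedding models, e.g. to pick a RAG encoder
for model in api.list_models(search="sentence embedding", limit=3):
    print(model.id)

# Browse datasets that could seed a document store
for ds in api.list_datasets(search="wikipedia", limit=3):
    print(ds.id)
```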
Why this server?
A Model Context Protocol server providing unified access to multiple search engines and AI tools, combining search, AI responses, content processing, and enhancement features behind a single interface. Allows searching documents hosted in the cloud.
Why this server?
A Model Context Protocol server providing vector database capabilities through Chroma, enabling semantic document search, metadata filtering, and document management with persistent storage, making it a natural retrieval layer for a RAG system.
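To ground this, a minimal Chroma flow covering persistent storage, metadata filtering, and semantic query; the storage path, collection name, and metadata keys are illustrative assumptions:

```python
# Sketch: document management and filtered semantic search with Chroma.
# Assumes the `chromadb` package; PersistentClient keeps data on disk.
import chromadb

client = chromadb.PersistentClient(path="./chroma_store")
docs = client.get_or_create_collection("docs")

# Add documents with metadata (Chroma embeds them with its default model)
docs.add(
    ids=["1", "2"],
    documents=["Chroma persists embeddings.", "Qdrant is an alternative store."],
    metadatas=[{"source": "manual"}, {"source": "blog"}],
)

# Semantic search restricted by a metadata filter
hits = docs.query(query_texts=["persistent vector storage"],
                  n_results=2, where={"source": "manual"})
print(hits["documents"])
```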