Why this server?
This server automatically monitors Supabase database changes, generates OpenAI embeddings, and keeps a vector search index synchronized, which is directly relevant to automatic indexing and RAG.
Why this server?
Provides RAG capabilities for semantic document search using the Qdrant vector database and Ollama/OpenAI embeddings, allowing users to add, search, list, and delete documentation with metadata support, directly addressing RAG.
Why this server?
This server crawls and indexes Zerops documentation, making it available as a searchable context source for Cursor IDE, effectively performing automatic indexing for RAG.
Why this server?
Analyzes codebases using Repomix and LLMs to provide structured code reviews, implicitly performing indexing for review purposes. While not explicitly RAG, it prepares code for analysis.
Why this server?
SourceSage efficiently memorizes key aspects of a codebase, allowing dynamic updates and fast retrieval, effectively indexing and retrieving relevant code information for RAG.
Why this server?
A Model Context Protocol server that enables LLMs to read, search, and analyze code files with advanced caching and real-time file watching capabilities, contributing to automatic indexing.
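One common way to combine caching with file watching, and a plausible reading of what this blurb describes, is a read-through cache invalidated by each file's modification time. This is a hedged sketch of that general mechanism, not the server's actual implementation; the `CachedReader` name and structure are assumptions.

```python
import os
import tempfile

class CachedReader:
    """Read-through file cache invalidated by modification time."""

    def __init__(self) -> None:
        # path -> (mtime at last read, cached text)
        self.cache: dict[str, tuple[float, str]] = {}
        self.hits = 0

    def read(self, path: str) -> str:
        mtime = os.path.getmtime(path)
        entry = self.cache.get(path)
        if entry and entry[0] == mtime:
            self.hits += 1           # file unchanged: serve cached text
            return entry[1]
        with open(path) as f:        # new or modified: re-read and cache
            text = f.read()
        self.cache[path] = (mtime, text)
        return text

# Demo against a temporary file.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("print('hello')")
    path = f.name

reader = CachedReader()
reader.read(path)  # cold read, populates the cache
reader.read(path)  # unchanged file, served from cache
print(reader.hits)  # → 1
os.unlink(path)
```

A real-time watcher (e.g. inotify-based) would push invalidations instead of checking `mtime` on every read, but the cache-keyed-by-mtime idea is the same.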
Why this server?
A Node.js implementation of vector search using LanceDB and Ollama's embedding model, providing a basis for RAG over code.
Why this server?
Allows LLM tools like Claude Desktop and Cursor AI to access and summarize code files through a Model Context Protocol server, providing structured access to codebase content without manual copying - important for RAG.
Why this server?
A TypeScript Model Context Protocol (MCP) server that allows LLMs to programmatically construct mind maps to explore an idea space, with automatic indexing to support structured search.
Why this server?
Enables semantic search and RAG (Retrieval Augmented Generation) over your Apple Notes, allowing you to use Apple Notes as a RAG data source.