Why this server?
Enables LLMs to search, retrieve, and manage documents through Rememberizer's knowledge management API, giving the model access to a large external context.
Integrates with Google Drive to list, read, and search files of various types, making it well suited to pulling a large context from a shared folder.
Provides retrieval-augmented generation (RAG) for semantic document search, backed by the Qdrant vector database and Ollama or OpenAI embeddings. Users can add, search, list, and delete documents with metadata support, which is useful for managing and searching a large context.
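Servers like this rely on embedding-based semantic search: documents and queries are mapped to vectors, and retrieval ranks documents by cosine similarity. A minimal, dependency-free sketch of that pattern is below; the toy vectors, document names, and `search` helper are illustrative assumptions, not this server's actual API (a real deployment would obtain embeddings from Ollama or OpenAI and store them in Qdrant).

```python
import math

# Toy "embeddings": in a real server these vectors would come from an
# embedding model (Ollama/OpenAI) and live in a Qdrant collection.
docs = {
    "qdrant guide": [0.9, 0.1, 0.0],
    "ollama setup": [0.1, 0.9, 0.0],
    "unrelated note": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_vec, k=1):
    """Return the k document names most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(search([1.0, 0.0, 0.0]))  # -> ['qdrant guide']
```

Vector databases such as Qdrant implement the same ranking step, but with approximate nearest-neighbor indexes so it scales far beyond a handful of documents.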
Provides access to organizational SharePoint documents through the Microsoft Graph API, enabling AI assistants to search and retrieve SharePoint content.
A comprehensive memory management system for the Cursor IDE that allows AI assistants to remember, recall, and manage information across conversations, which is useful when writing large documents.
A Model Context Protocol server providing vector database capabilities through Chroma, including semantic document search, metadata filtering, and document management with persistent storage, enabling efficient retrieval from a large context.
Allows LLM tools such as Claude Desktop and Cursor AI to access and summarize code files through a Model Context Protocol server, giving them structured access to codebase content without manual copying; this is a useful source of context when writing large documents.
A Model Context Protocol server that enables LLMs to read, search, and analyze code files with advanced caching and real-time file watching capabilities.
A Model Context Protocol server that enables LLMs to fetch and process web content in multiple formats (HTML, JSON, Markdown, text) with automatic format detection, which can be helpful when gathering context from online sources.