Why this server?
Provides RAG capabilities for semantic document search using Chroma vector database and Ollama/OpenAI embeddings, allowing users to add, search, list, and delete documentation with metadata support.
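As an illustration of how a client might drive such a document-RAG server, here is a minimal sketch using the official @modelcontextprotocol/sdk; the package name `mcp-ragdocs` and the tool names `add_documentation`/`search_documentation` are assumptions for illustration, not this server's confirmed API:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server over stdio; command and package name are placeholders.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "mcp-ragdocs"], // hypothetical package name
});

const client = new Client({ name: "docs-demo", version: "1.0.0" });
await client.connect(transport);

// Add a document with metadata (tool name and argument shape assumed).
await client.callTool({
  name: "add_documentation",
  arguments: {
    content: "Chroma is an open-source embedding database.",
    metadata: { source: "notes", topic: "vector-db" },
  },
});

// Semantic search over the stored documents.
const result = await client.callTool({
  name: "search_documentation",
  arguments: { query: "open-source vector database", limit: 3 },
});
console.log(result.content);

await client.close();
```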
Why this server?
A Node.js implementation for vector search using LanceDB and Ollama's embedding model.
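To make the mechanism concrete, here is a minimal Node.js sketch of the same LanceDB-plus-Ollama pattern; the `nomic-embed-text` model, table name, and row schema are assumptions rather than this server's actual code:

```typescript
import * as lancedb from "@lancedb/lancedb";

// Embed a string with a locally running Ollama instance
// (assumes `ollama pull nomic-embed-text` has been run).
async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  const { embedding } = await res.json();
  return embedding;
}

const db = await lancedb.connect("./vectors");
const docs = [
  "LanceDB is an embedded vector database.",
  "Ollama runs language models locally.",
];

// Store each document alongside its embedding.
const table = await db.createTable(
  "docs",
  await Promise.all(docs.map(async (text) => ({ text, vector: await embed(text) }))),
);

// Nearest-neighbour search on the query embedding.
const hits = await table.search(await embed("local vector store")).limit(2).toArray();
console.log(hits.map((h) => h.text));
```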
Why this server?
Scalable, high-performance knowledge graph memory system with semantic search, temporal awareness, and advanced relation management.
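A hedged sketch of the kind of data model such a system implies: entities carry embeddings for semantic search and relations carry validity intervals for temporal awareness. These shapes are illustrative assumptions, not the server's documented schema:

```typescript
interface Entity {
  name: string;
  entityType: string;     // e.g. "person", "project"
  observations: string[]; // free-text facts attached to the entity
  embedding?: number[];   // enables semantic (vector) search
  createdAt: string;      // ISO timestamp for temporal awareness
}

interface Relation {
  from: string;           // source entity name
  to: string;             // target entity name
  relationType: string;   // e.g. "works_on"
  validFrom: string;      // when the relation became true
  validTo?: string;       // unset while still current
}

// A temporal query might keep only relations valid at a point in time;
// ISO timestamps compare correctly as strings.
function relationsAt(relations: Relation[], at: string): Relation[] {
  return relations.filter(
    (r) => r.validFrom <= at && (r.validTo === undefined || at < r.validTo),
  );
}
```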
Why this server?
A universal Model Context Protocol implementation that serves as a semantic layer between LLMs and 3D creative software, exposing various Digital Content Creation tools through a single standardized API.
Why this server?
Enables semantic search, image search, and cross-modal search functionalities through integration with Jina AI's neural search capabilities.
Why this server?
Provides RAG capabilities for semantic document search using Qdrant vector database and Ollama/OpenAI embeddings, allowing users to add, search, list, and delete documentation with metadata support.
Why this server?
An MCP server that provides persistent memory for Claude, offering a tiered memory architecture with semantic search, memory consolidation, and integration with the Claude desktop application.
Why this server?
Bridges Large Language Models with Language Server Protocol interfaces, allowing LLMs to access LSP's hover information, completions, diagnostics, and code actions for improved code suggestions.
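For a sense of what such a bridge relays on the LLM's behalf, here is the raw JSON-RPC framing of an LSP `textDocument/hover` request as defined by the Language Server Protocol; the file URI and position are placeholders:

```typescript
// Positions are zero-based per the LSP specification.
const hoverRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "textDocument/hover",
  params: {
    textDocument: { uri: "file:///home/user/project/src/main.ts" },
    position: { line: 41, character: 8 },
  },
};

// Over stdio, LSP messages are framed with a Content-Length header:
const body = JSON.stringify(hoverRequest);
const frame = `Content-Length: ${Buffer.byteLength(body)}\r\n\r\n${body}`;
// languageServer.stdin.write(frame); // e.g. a spawned `typescript-language-server --stdio`
```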
Why this server?
Shares code context with LLMs via MCP or the clipboard.
Why this server?
A basic implementation of persistent memory using a local knowledge graph. This lets Claude remember information about the user across chats.
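A minimal sketch of what local knowledge-graph persistence can look like, assuming a JSON-lines file and a simple entity/relation tagging scheme; the file name and record schema are illustrative, not this server's confirmed format:

```typescript
import { appendFileSync, readFileSync, existsSync } from "node:fs";

// One JSON object per line; entities and relations share the file and are
// distinguished by a type tag.
const MEMORY_FILE = "./memory.jsonl";

type MemoryRecord =
  | { type: "entity"; name: string; entityType: string; observations: string[] }
  | { type: "relation"; from: string; to: string; relationType: string };

function remember(record: MemoryRecord): void {
  appendFileSync(MEMORY_FILE, JSON.stringify(record) + "\n");
}

function recall(): MemoryRecord[] {
  if (!existsSync(MEMORY_FILE)) return [];
  return readFileSync(MEMORY_FILE, "utf8")
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

// Stored once, available in every later chat:
remember({
  type: "entity",
  name: "Alice",
  entityType: "person",
  observations: ["prefers TypeScript"],
});
remember({ type: "relation", from: "Alice", to: "acme-app", relationType: "works_on" });
console.log(recall());
```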