Why this server?
An improved implementation of persistent memory using a local knowledge graph with a customizable --memory-path. This lets Claude remember information about the user across chats.
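As a rough illustration of how a client might launch such a memory server over stdio and pass a custom --memory-path, here is a minimal Python sketch using the MCP Python SDK. The package name "mcp-knowledge-graph" and the tool name "create_entities" are assumptions for illustration, not details taken from this listing; check the server's own documentation for its actual command and tool names.

```python
# Hedged sketch: start a (hypothetical) knowledge-graph memory server over
# stdio with a custom --memory-path, then call one of its tools.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Point the server at a custom on-disk location for its knowledge graph.
    # "mcp-knowledge-graph" is an assumed package name.
    server = StdioServerParameters(
        command="npx",
        args=["-y", "mcp-knowledge-graph", "--memory-path", "./memory.jsonl"],
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Inspect what the server actually exposes before calling anything.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # "create_entities" is a hypothetical tool name used for illustration.
            await session.call_tool(
                "create_entities",
                arguments={
                    "entities": [
                        {
                            "name": "Alice",
                            "entityType": "person",
                            "observations": ["prefers dark mode"],
                        }
                    ]
                },
            )


asyncio.run(main())
```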
Why this server?
Provides persistent memory for chat applications, using a local knowledge graph to remember user information across interactions.
Why this server?
Adds a persistent memory system built on a local knowledge graph with lesson management, so it remembers information across chats and learns from past errors.
Why this server?
An improved implementation of persistent memory using a local knowledge graph to remember user information across chats.
Why this server?
Provides knowledge graph representation with semantic search backed by Qdrant, using OpenAI embeddings for semantic similarity, with HTTPS support and file-based graph persistence.
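The general storage pattern behind a server like this can be sketched as: embed text with OpenAI, then store and query the vectors in Qdrant. The collection name, embedding model, and payload fields below are illustrative assumptions, not this server's actual schema.

```python
# Hedged sketch of embedding graph nodes and searching them semantically.
from openai import OpenAI
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

openai_client = OpenAI()                       # reads OPENAI_API_KEY from the environment
qdrant = QdrantClient(url="http://localhost:6333")


def embed(text: str) -> list[float]:
    """Return an OpenAI embedding vector for the given text."""
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding


# Create a collection sized for the chosen embedding model (1536 dimensions).
if not qdrant.collection_exists("knowledge_graph"):
    qdrant.create_collection(
        collection_name="knowledge_graph",
        vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
    )

# Index one graph node, then run a semantic-similarity query against it.
qdrant.upsert(
    collection_name="knowledge_graph",
    points=[
        PointStruct(
            id=1,
            vector=embed("Alice prefers dark mode"),
            payload={"entity": "Alice"},
        )
    ],
)
hits = qdrant.search(
    collection_name="knowledge_graph",
    query_vector=embed("What UI theme does Alice like?"),
    limit=3,
)
print([(hit.payload, hit.score) for hit in hits])
```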
Why this server?
A high-performance MCP server that uses libSQL for persistent memory and vector search, enabling efficient entity management and semantic knowledge storage.
Why this server?
An advanced memory server for neural memory-based sequence learning and prediction, improving code generation and understanding through state maintenance and manifold optimization, inspired by a Google Research framework.
Why this server?
Enables neural memory sequence learning with a memory-augmented model for improved code understanding and generation, featuring state management, novelty detection, and model persistence.
Why this server?
An MCP server that reviews UI edit requests by comparing before-and-after screenshots, providing visual feedback on whether the changes satisfy the user's requirements.
Why this server?
A server implementation that allows AI assistants to read, create, and manipulate notes in Obsidian vaults through the Model Context Protocol.