Provides long-term memory for AI assistants, persisting preferences, decisions, and context, with semantic search and relationship tracking between memories.
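The combination of stored memories, semantic lookup, and inter-memory links can be sketched as below. This is an illustrative toy only: it uses bag-of-words cosine similarity in place of a real embedding model, and the class and method names are assumptions, not the server's actual API.

```python
from collections import Counter
from math import sqrt

class MemoryStore:
    """Toy in-memory store: semantic search plus links between memories.

    A real server would persist to disk and use an embedding model;
    this sketch approximates semantic search with word-overlap cosine.
    """

    def __init__(self):
        self.memories = {}   # id -> memory text
        self.links = {}      # id -> set of related memory ids
        self._next_id = 0

    def add(self, text, related_to=()):
        mid = self._next_id
        self._next_id += 1
        self.memories[mid] = text
        self.links[mid] = set(related_to)
        for other in related_to:   # relationships are bidirectional
            self.links[other].add(mid)
        return mid

    @staticmethod
    def _vector(text):
        return Counter(text.lower().split())

    @staticmethod
    def _cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm = (sqrt(sum(v * v for v in a.values()))
                * sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def search(self, query, top_k=3):
        # Rank memories by similarity to the query, best first.
        q = self._vector(query)
        scored = sorted(((self._cosine(q, self._vector(t)), mid)
                         for mid, t in self.memories.items()),
                        reverse=True)
        return [mid for score, mid in scored[:top_k] if score > 0]
```

A search for "dark mode editor" would then surface a stored memory like "User prefers dark mode in the editor", and following `links` from that memory recovers related decisions.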
A Model Context Protocol server that wraps multiple backend servers for token-efficient tool discovery via lazy loading. It enables AI models to browse available servers and fetch specific tool schemas on-demand, significantly reducing initial context overhead.
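The lazy-loading pattern described above can be sketched as a three-tier proxy: list servers (cheapest), list a server's tool names, and only then fetch a full schema. The backend catalog, tool names, and method names here are hypothetical, standing in for real backend MCP servers queried over their transports.

```python
# Hypothetical backend catalog; a real proxy would query each wrapped
# MCP server at runtime instead of holding schemas in a dict.
BACKENDS = {
    "memory": {
        "store_memory": {"type": "object",
                         "properties": {"text": {"type": "string"}}},
        "search_memory": {"type": "object",
                          "properties": {"query": {"type": "string"}}},
    },
    "filesystem": {
        "read_file": {"type": "object",
                      "properties": {"path": {"type": "string"}}},
    },
}

class LazyToolProxy:
    """Expose backend tools in tiers so the model only pays context
    cost for the schemas it actually asks for."""

    def __init__(self, backends):
        self._backends = backends
        self._schema_cache = {}

    def list_servers(self):
        # Tier 1: server names only -- a few tokens each.
        return sorted(self._backends)

    def list_tools(self, server):
        # Tier 2: tool names for one server, still no schemas.
        return sorted(self._backends[server])

    def get_tool_schema(self, server, tool):
        # Tier 3: full JSON schema, fetched and cached on demand.
        key = (server, tool)
        if key not in self._schema_cache:
            self._schema_cache[key] = self._backends[server][tool]
        return self._schema_cache[key]
```

Instead of injecting every tool schema from every backend into the initial context, the model starts from `list_servers()` and drills down only into the tools it intends to call, which is where the token savings come from.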