The MCP Memory Service provides semantic memory and persistent storage for Claude Desktop. Key capabilities include:
- Store information with optional metadata tags
- Retrieve memories using semantic search with similarity scores
- Search by tags to find stored memories
- Time-based recall using natural language expressions
- Find memories via exact content match
- Manage duplicates by detecting and removing them
- Database operations: optimization, health monitoring, statistics
- Memory management: delete specific memories or tagged sets
- Data protection: create automatic backups
- Debug tools for analyzing retrieval processes
- Cross-platform compatibility with hardware-aware optimization
MCP Memory Service
A universal MCP memory service that provides semantic memory search, persistent storage, and autonomous memory consolidation for AI assistants and development environments. This Model Context Protocol server works with Claude Desktop, VS Code, Cursor, Continue, WindSurf, LM Studio, Zed, and 13+ other AI applications. It features vector database storage with SQLite-vec for fast semantic search, plus a dream-inspired consolidation system that automatically organizes, compresses, and manages your AI conversation history over time, building a self-evolving knowledge base for enhanced AI productivity.
Help
- Talk to the Repo with TalkToGitHub!
- Dig deeper into the codebase with GitProbe!
📋 Table of Contents
🚀 Getting Started
- ⚡ Quick Start
- 🎯 Claude Code Commands (v2.2.0)
- 🚀 Remote MCP Memory Service (v4.0.0)
- 📦 Installation Methods
- ⚙️ Claude MCP Configuration
🌟 Features & Capabilities
🌐 Deployment & Multi-Client
📖 Documentation & Support
- 📝 Usage Guide
- ⚙️ Configuration Options
- 🖥️ Hardware Compatibility
- 🧪 Testing
- ❓ FAQ
- 🛠️ Troubleshooting
- 📚 Comprehensive Documentation
👨‍💻 Development & Community
🚀 Quick Start
Choose your preferred installation method to get started in under 5 minutes:
Option 1: Docker (Fastest - 2 minutes)
✅ Perfect for: Testing, production deployment, isolation
➡️ Complete Docker Setup
Option 2: Smithery (Simplest - 1 minute)
✅ Perfect for: Claude Desktop users, zero configuration
➡️ Smithery Details
Option 3: Python Installer (Most Flexible - 5 minutes)
✅ Perfect for: Developers, customization, multi-client setup
➡️ Full Installation Guide
🎯 NEW: Claude Code Commands (v2.2.0)
Get started in 2 minutes with direct memory commands!
✨ 5 conversational commands following CCPlugins pattern
🚀 Zero MCP server configuration required
🧠 Context-aware operations with automatic project detection
🎨 Professional interface with comprehensive guidance
➡️ Quick Start Guide | Full Integration Guide
🚀 NEW: Remote MCP Memory Service (v4.0.0)
Production-ready remote memory service with native MCP-over-HTTP protocol!
Remote Deployment
Deploy the memory service on any server for cross-device access:
Server Access Points:
- MCP Protocol: `http://your-server:8000/mcp` (for MCP clients)
- Dashboard: `http://your-server:8000/` (web interface)
- API Docs: `http://your-server:8000/api/docs` (interactive API)
Remote API Access
Connect any MCP client or tool to your remote memory service:
Key Benefits:
- ✅ Cross-Device Access: Connect from any device running Claude Code
- ✅ Native MCP Protocol: Standard JSON-RPC 2.0 implementation
- ✅ No Bridge Required: Direct HTTP/HTTPS connection
- ✅ Production Ready: Proven deployment at scale
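Since the endpoint speaks standard JSON-RPC 2.0 over HTTP (per the points above), a minimal client call can be sketched with the Python standard library alone. The server URL is a placeholder, and `tools/list` is the standard MCP method for enumerating available tools; the exact response shape depends on your server version:

```python
import json
import urllib.request

# JSON-RPC 2.0 request for the standard MCP "tools/list" method.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

def list_remote_tools(base_url: str) -> dict:
    """POST a JSON-RPC request to the remote /mcp endpoint and return the parsed reply."""
    req = urllib.request.Request(
        f"{base_url}/mcp",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running server):
# tools = list_remote_tools("http://your-server:8000")
```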
Features
🌟 Universal AI Client Compatibility
Works with 13+ AI applications and development environments via the standard Model Context Protocol (MCP):
Client | Status | Configuration | Notes |
---|---|---|---|
Claude Desktop | ✅ Full | claude_desktop_config.json | Official MCP support |
Claude Code | ✅ Full | .claude.json | Optionally use Claude Commands instead (guide) |
Cursor | ✅ Full | .cursor/mcp.json | AI-powered IDE with MCP support |
WindSurf | ✅ Full | MCP config file | Codeium's AI IDE with built-in server management |
LM Studio | ✅ Full | MCP configuration | Enhanced compatibility with debug output |
Cline | ✅ Full | VS Code MCP config | VS Code extension, formerly Claude Dev |
RooCode | ✅ Full | IDE config | Full MCP client implementation |
Zed | ✅ Full | Built-in config | Native MCP support |
VS Code | ✅ Full | .vscode/mcp.json | Via MCP extension |
Continue IDE | ✅ Full | Continue configuration | Extension with MCP support |
Standard MCP Libraries | ✅ Full | Various | Python `mcp`, JavaScript SDK |
Custom MCP Clients | ✅ Full | Implementation-specific | Full protocol compliance |
HTTP API | ✅ Full | REST endpoints | Direct API access on port 8000 |
Core Benefits:
- 🔄 Cross-Client Memory Sharing: Use memories across all your AI tools
- 🚀 Universal Setup: Single installation works everywhere
- 🔌 Standard Protocol: Full MCP compliance ensures compatibility
- 🌐 Remote Access: HTTP/HTTPS support for distributed teams
➡️ Multi-Client Setup Guide | IDE Compatibility Details
🧠 Intelligent Memory System
Autonomous Memory Consolidation
- Dream-inspired processing with multi-layered time horizons (daily → yearly)
- Creative association discovery finding non-obvious connections between memories
- Semantic clustering automatically organizing related memories
- Intelligent compression preserving key information while reducing storage
- Controlled forgetting with safe archival and recovery systems
- Performance optimized for processing 10k+ memories efficiently
⚡ ONNX Runtime Support (NEW!)
- PyTorch-free operation using ONNX Runtime for embeddings
- Reduced dependencies (~500MB less disk space without PyTorch)
- Faster startup with pre-optimized ONNX models
- Automatic fallback to SentenceTransformers when needed
- Compatible models with the same all-MiniLM-L6-v2 embeddings
- Enable with: `export MCP_MEMORY_USE_ONNX=true`
Advanced Memory Operations
- Semantic search using sentence transformers or ONNX embeddings
- Natural language time-based recall (e.g., "last week", "yesterday morning")
- Enhanced tag deletion system with flexible multi-tag support
- Tag-based memory retrieval system with OR/AND logic
- Exact match retrieval and duplicate detection
- Debug mode for similarity analysis and troubleshooting
Enhanced MCP Protocol Features (v4.1.0+)
- 📚 URI-based Resources: `memory://stats`, `memory://tags`, `memory://recent/{n}`, `memory://search/{query}`
- 📋 Guided Prompts: Interactive workflows (memory_review, memory_analysis, knowledge_export)
- 📊 Progress Tracking: Real-time notifications for long operations
- 🔄 Database Synchronization: Multi-node sync with Litestream integration
- 🎛️ Client Optimization: Auto-detection and optimization for Claude Desktop vs LM Studio
🚀 Deployment & Performance
Storage Backends
- 🪶 SQLite-vec (default): 10x faster startup, 75% less memory, zero network dependencies
- 📦 ChromaDB (legacy): Available for backward compatibility, deprecated in v6.0.0
Multi-Client Architecture
- Production FastAPI server with auto-generated SSL certificates
- mDNS Service Discovery for zero-configuration networking
- Server-Sent Events (SSE) with real-time updates
- API key authentication for secure deployments
- Cross-platform service installation (systemd, LaunchAgent, Windows Service)
Platform Support
- Cross-platform compatibility: Apple Silicon, Intel, Windows, Linux
- Hardware-aware optimizations: CUDA, MPS, DirectML, ROCm support
- Graceful fallbacks for limited hardware resources
- Container support with Docker images and Docker Compose configurations
Recent Highlights
🚀 Latest Features
- v5.0.2: ONNX Runtime support for PyTorch-free embeddings and SQLite-vec consolidation fixes
- v5.0.0: SQLite-vec is now the default backend - 10x faster startup, 75% less memory
- v4.5.0: Database synchronization for distributed memory access across multiple machines
- v4.1.0: Enhanced MCP resources, guided prompts, and progress tracking
- v3.0.0: Dream-inspired autonomous memory consolidation with exponential decay
- v2.2.0: Claude Code Commands for direct conversational memory operations
➡️ View Full Changelog for complete version history and detailed release notes
Installation Methods
For quick setup, see the ⚡ Quick Start section above.
🚀 Intelligent Installer (Recommended)
The new unified installer automatically detects your hardware and selects the optimal configuration:
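The installer is `install.py` at the repository root (the same script referenced in the troubleshooting tips); a typical invocation is:

```shell
# Run from the repository root; hardware detection and backend
# selection happen automatically.
python install.py
```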
🎯 Hardware-Specific Installation
For Intel Macs: For detailed setup instructions specific to Intel Macs, see our Intel Mac Setup Guide.
For Legacy Hardware (2013-2017 Intel Macs):
For Server/Headless Deployment:
For HTTP/SSE API Development:
For Migration from ChromaDB:
For Multi-Client Setup:
For Claude Code Commands:
🧠 What the Installer Does
- Hardware Detection: CPU, GPU, memory, and platform analysis
- Intelligent Backend Selection: SQLite-vec by default, with ChromaDB as legacy option
- Platform Optimization: macOS Intel fixes, Windows CUDA setup, Linux variations
- Dependency Management: Compatible PyTorch and ML library versions
- Auto-Configuration: Claude Desktop config and environment variables
- Migration Support: Seamless ChromaDB to SQLite-vec migration
📊 Storage Backend Selection
SQLite-vec (default): 10x faster startup, zero dependencies, recommended for all users
ChromaDB (deprecated): Legacy support only, will be removed in v6.0.0
➡️ Detailed Storage Backend Comparison
To explicitly select a backend during installation:
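The original command was not preserved in this copy; the flag name below is an assumption, so check the linked storage backend comparison for the exact option:

```shell
# Hypothetical flag name -- selects SQLite-vec explicitly at install time.
python install.py --storage-backend sqlite_vec
```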
Docker Installation
Docker Hub (Recommended)
The easiest way to run the Memory Service is using our pre-built Docker images:
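The exact image name was not preserved in this copy; assuming an image published under the project's name, a pull-and-run sketch looks like this (substitute the real image and volume path):

```shell
# Hypothetical image name -- replace with the project's published image.
docker pull doobidoo/mcp-memory-service:latest
docker run -d -p 8000:8000 \
  -v ./data:/app/data \
  doobidoo/mcp-memory-service:latest
```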
Docker Compose
We provide multiple Docker Compose configurations for different scenarios:
- `docker-compose.yml`: Standard configuration for MCP clients
- `docker-compose.standalone.yml`: Standalone mode for testing/development (prevents boot loops)
- `docker-compose.uv.yml`: Alternative configuration using the UV package manager
- `docker-compose.pythonpath.yml`: Configuration with explicit PYTHONPATH settings
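Whichever file you choose, the usual Compose workflow applies:

```shell
# Start with the default docker-compose.yml...
docker compose up -d
# ...or pick one of the alternative files explicitly:
docker compose -f docker-compose.standalone.yml up -d
```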
Building from Source
If you need to build the Docker image yourself:
uvx Installation
You can install and run the Memory Service using uvx for isolated execution:
Windows Installation (Special Case)
Windows users may encounter PyTorch installation issues due to platform-specific wheel availability. Use our Windows-specific installation script:
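The script is `scripts/install_windows.py` (also referenced in the troubleshooting section):

```shell
# Run from the repository root on Windows.
python scripts/install_windows.py
```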
This script handles:
- Detecting CUDA availability and version
- Installing the appropriate PyTorch version from the correct index URL
- Installing other dependencies without conflicting with PyTorch
- Verifying the installation
Installing via Smithery
To install Memory Service for Claude Desktop automatically via Smithery:
Detailed Installation Guide
For comprehensive installation instructions and troubleshooting, see the Installation Guide.
Configuration
Basic Client Configuration
Claude Desktop Configuration
Add to your `claude_desktop_config.json` file:
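The original config snippet was not preserved in this copy. A sketch of the standard Claude Desktop `mcpServers` entry is shown below; the command, directory path, and environment variable are illustrative assumptions to adapt to your installation:

```json
{
  "mcpServers": {
    "memory": {
      "command": "uv",
      "args": ["--directory", "/path/to/mcp-memory-service", "run", "memory"],
      "env": {
        "MCP_MEMORY_STORAGE_BACKEND": "sqlite_vec"
      }
    }
  }
}
```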
Windows-Specific Configuration
For Windows, use the wrapper script for PyTorch compatibility:
➡️ Multi-Client Setup Guide for Claude Desktop + VS Code + other MCP clients
Environment Variables
Core Configuration
HTTP API & Remote Access
Advanced Configuration
SSL/TLS Setup
For production deployments with HTTPS:
Local Development with mkcert:
Memory Consolidation
🌐 Multi-Client Deployment
NEW: Deploy MCP Memory Service for multiple clients sharing the same memory database!
🚀 Centralized Server Deployment (Recommended)
Perfect for distributed teams, multiple devices, or cloud deployment:
✅ Benefits:
- 🔄 Real-time sync across all clients via Server-Sent Events (SSE)
- 🌍 Cross-platform - works from any device with HTTP access
- 🔒 Secure with optional API key authentication
- 📈 Scalable - handles many concurrent clients
- ☁️ Cloud-ready - deploy on AWS, DigitalOcean, Docker, etc.
Access via:
- API Docs: `http://your-server:8000/api/docs`
- Web Dashboard: `http://your-server:8000/`
- REST API: All MCP operations available via HTTP
⚠️ Why NOT Cloud Storage (Dropbox/OneDrive/Google Drive)
Direct SQLite on cloud storage DOES NOT WORK for multi-client access:
❌ File locking conflicts - Cloud sync breaks SQLite's locking mechanism
❌ Data corruption - Incomplete syncs can corrupt the database
❌ Sync conflicts - Multiple clients create "conflicted copy" files
❌ Performance issues - Full database re-upload on every change
✅ Solution: Use centralized HTTP server deployment instead!
🔗 Local Multi-Client Coordination
For local development with multiple MCP clients (Claude Desktop + VS Code + Continue, etc.):
The MCP Memory Service features universal multi-client coordination for seamless concurrent access:
🚀 Integrated Setup (Recommended):
Key Benefits:
- ✅ Automatic Coordination: Intelligent detection of optimal access mode
- ✅ Universal Setup: Works with any MCP-compatible application
- ✅ Shared Memory: All clients access the same memory database
- ✅ No Lock Conflicts: WAL mode prevents database locking issues
- ✅ IDE-Agnostic: Switch between development tools while maintaining context
Supported Clients: Claude Desktop, Claude Code, VS Code, Continue IDE, Cursor, Cline, Zed, and more
📖 Complete Documentation
For detailed deployment guides, configuration options, and troubleshooting:
📚 Multi-Client Deployment Guide
Covers:
- Centralized HTTP/SSE Server setup and configuration
- Shared File Access for local networks (limited scenarios)
- Cloud Platform Deployment (AWS, DigitalOcean, Docker)
- Security & Authentication setup
- Performance Tuning for high-load environments
- Troubleshooting common multi-client issues
Usage Guide
For detailed instructions on how to interact with the memory service in Claude Desktop:
- Invocation Guide - Learn the specific keywords and phrases that trigger memory operations in Claude
- Installation Guide - Detailed setup instructions
- Demo Session Walkthrough - Real-world development session showcasing advanced features
The memory service is invoked through natural language commands in your conversations with Claude. For example:
- To store: "Please remember that my project deadline is May 15th."
- To retrieve: "Do you remember what I told you about my project deadline?"
Claude Code Commands Usage
With the optional Claude Code commands installed, you can also use direct command syntax:
- To delete: "Please forget what I told you about my address."
See the Invocation Guide for a complete list of commands and detailed usage examples.
Storage Backends
The MCP Memory Service supports multiple storage backends to suit different use cases:
SQLite-vec (Default - Recommended)
- Best for: All use cases - from personal to production deployments
- Features: Single-file database, 75% lower memory usage, zero network dependencies
- Memory usage: Minimal (~50MB for 1K memories)
- Setup: Automatically configured, works offline immediately
ChromaDB (Legacy - Deprecated)
⚠️ DEPRECATED: Will be removed in v6.0.0. Please migrate to SQLite-vec.
- Previous use cases: Large memory collections, advanced vector metrics
- Issues: Network dependencies, Hugging Face download failures, high resource usage
- Memory usage: Higher (~200MB for 1K memories)
- Migration: Run `python scripts/migrate_to_sqlite_vec.py` to migrate your data
Quick Setup for SQLite-vec
SQLite-vec with Optional PyTorch
The SQLite-vec backend now works with or without PyTorch installed:
- With PyTorch: Full functionality including embedding generation
- Without PyTorch: Basic functionality using pre-computed embeddings and ONNX runtime
- With Homebrew PyTorch: Integration with macOS Homebrew PyTorch installation
To install optional machine learning dependencies:
Homebrew PyTorch Integration
For macOS users who prefer to use Homebrew's PyTorch installation:
This integration offers several benefits:
- Uses Homebrew's isolated Python environment for PyTorch
- Avoids dependency conflicts with Claude Desktop
- Reduces memory usage in the main process
- Provides better stability in resource-constrained environments
For detailed documentation on the Homebrew PyTorch integration:
- Homebrew Integration Guide - Technical journey and solution architecture
Migration Between Backends
For detailed SQLite-vec setup, migration, and troubleshooting, see the SQLite-vec Backend Guide.
Memory Operations
The memory service provides the following operations through the MCP server:
Core Memory Operations
- `store_memory`: Store new information with optional tags
- `retrieve_memory`: Perform semantic search for relevant memories
- `recall_memory`: Retrieve memories using natural language time expressions
- `search_by_tag`: Find memories using specific tags
- `exact_match_retrieve`: Find memories with exact content match
- `debug_retrieve`: Retrieve memories with similarity scores
Database Management
- `create_backup`: Create database backup
- `get_stats`: Get memory statistics
- `optimize_db`: Optimize database performance
- `check_database_health`: Get database health metrics
- `check_embedding_model`: Verify model status
Memory Management
- `delete_memory`: Delete a specific memory by hash
- `delete_by_tag`: Enhanced: delete memories with specific tag(s); supports both single and multiple tags
- `delete_by_tags`: New: explicitly delete memories containing any of the specified tags (OR logic)
- `delete_by_all_tags`: New: delete memories containing all of the specified tags (AND logic)
- `cleanup_duplicates`: Remove duplicate entries
API Consistency Improvements
Issue 5 Resolution: Enhanced tag deletion functionality for consistent API design.
- Before: `search_by_tag` accepted arrays, while `delete_by_tag` accepted only single strings
- After: both operations now support flexible tag handling
Example Usage
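The original example block was not preserved in this copy. As an illustrative sketch, the flexible tag arguments described above could look like the following tool-call payloads; the operation names match the list above, while the wrapper shape is an assumption that depends on your MCP client:

```python
# Illustrative tool-call arguments for the tag operations described above.
# Only the operation names are drawn from this document; the payload
# wrapper is client-dependent.

# delete_by_tag now accepts a single tag or a list of tags:
single_tag_call = {"name": "delete_by_tag", "arguments": {"tag": "temporary"}}
multi_tag_call = {"name": "delete_by_tag", "arguments": {"tags": ["temporary", "draft"]}}

# OR logic: delete memories containing ANY of these tags.
any_tags_call = {"name": "delete_by_tags", "arguments": {"tags": ["obsolete", "scratch"]}}

# AND logic: delete memories containing ALL of these tags.
all_tags_call = {"name": "delete_by_all_tags", "arguments": {"tags": ["project-x", "archived"]}}
```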
🧠 Dream-Inspired Memory Consolidation
The memory consolidation system operates autonomously in the background, inspired by how human memory works during sleep cycles. It automatically organizes, compresses, and manages your memories across multiple time horizons.
Quick Start
Enable consolidation with a single environment variable:
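The original snippet was not preserved here; the variable name below is an assumption, so check your deployment's documentation for the exact name:

```shell
# Hypothetical variable name -- enables the autonomous consolidation system.
export MCP_MEMORY_CONSOLIDATION_ENABLED=true
```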
How It Works
- Daily consolidation (light processing): Updates memory relevance and basic organization
- Weekly consolidation: Discovers creative associations between memories
- Monthly consolidation: Performs semantic clustering and intelligent compression
- Quarterly/Yearly consolidation: Deep archival and long-term memory management
New MCP Tools Available
Once enabled, you get access to powerful new consolidation tools:
- `consolidate_memories`: Manually trigger consolidation for any time horizon
- `get_consolidation_health`: Monitor system health and performance
- `get_consolidation_stats`: View processing statistics and insights
- `schedule_consolidation`: Configure autonomous scheduling
- `get_memory_associations`: Explore discovered memory connections
- `get_memory_clusters`: Browse semantic memory clusters
- `get_consolidation_recommendations`: Get AI-powered memory management advice
Advanced Configuration
Fine-tune the consolidation system through environment variables:
Performance
- Designed to process 10k+ memories efficiently
- Automatic hardware optimization (CPU/GPU/MPS)
- Safe archival system - no data is ever permanently deleted
- Full recovery capabilities for all archived memories
🚀 Service Installation (NEW!)
Install MCP Memory Service as a native system service for automatic startup:
Cross-Platform Service Installer
The installer provides:
- ✅ Automatic OS detection (Windows, macOS, Linux)
- ✅ Native service integration (systemd, LaunchAgent, Windows Service)
- ✅ Automatic startup on boot/login
- ✅ Service management commands
- ✅ Secure API key generation
- ✅ Platform-specific optimizations
For detailed instructions, see the Service Installation Guide.
Hardware Compatibility
Platform | Architecture | Accelerator | Status | Notes |
---|---|---|---|---|
macOS | Apple Silicon (M1/M2/M3) | MPS | ✅ Fully supported | Best performance |
macOS | Apple Silicon under Rosetta 2 | CPU | ✅ Supported with fallbacks | Good performance |
macOS | Intel | CPU | ✅ Fully supported | Good with optimized settings |
Windows | x86_64 | CUDA | ✅ Fully supported | Best performance |
Windows | x86_64 | DirectML | ✅ Supported | Good performance |
Windows | x86_64 | CPU | ✅ Supported with fallbacks | Slower but works |
Linux | x86_64 | CUDA | ✅ Fully supported | Best performance |
Linux | x86_64 | ROCm | ✅ Supported | Good performance |
Linux | x86_64 | CPU | ✅ Supported with fallbacks | Slower but works |
Linux | ARM64 | CPU | ✅ Supported with fallbacks | Slower but works |
Any | Any | No PyTorch | ✅ Supported with SQLite-vec | Limited functionality, very lightweight |
Testing
FAQ
Can I use MCP Memory Service with multiple AI clients simultaneously?
Yes! The service features universal multi-client coordination for seamless concurrent access across Claude Desktop, VS Code, Continue, Cursor, and other MCP clients. See the Local Multi-Client Coordination section for details.
What's the difference between SQLite-vec and ChromaDB backends?
SQLite-vec (recommended): 10x faster startup, zero network dependencies, 75% less memory usage, single-file database
ChromaDB (deprecated): Legacy support only, requires network access for models, will be removed in v6.0.0
➡️ Detailed Backend Comparison
How do I migrate from ChromaDB to SQLite-vec?
Run the migration script to safely transfer your existing memories:
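The migration script is `scripts/migrate_to_sqlite_vec.py`, as noted in the Storage Backends section:

```shell
# Transfers existing memories, tags, and metadata to SQLite-vec.
python scripts/migrate_to_sqlite_vec.py
```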
The process preserves all memories, tags, and metadata while improving performance.
Can I deploy MCP Memory Service on a remote server?
Yes! The service supports production deployment with HTTP/HTTPS server, API authentication, SSL certificates, and Docker containers. Perfect for teams and cross-device access.
Why does my installation fail on Apple Silicon Macs?
Use the intelligent installer which handles Apple Silicon optimizations automatically:
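The installer is `install.py` at the repository root:

```shell
# Detects MPS support and selects compatible PyTorch versions on Apple Silicon.
python install.py
```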
It detects MPS support, configures fallbacks, and selects compatible PyTorch versions.
How much memory and storage does the service use?
SQLite-vec: ~50MB RAM for 1K memories, single database file
ChromaDB: ~200MB RAM for 1K memories, multiple files
Storage scales linearly: ~1MB per 1000 memories with SQLite-vec.
Is my data secure and private?
Yes! All data is stored locally by default. For remote deployments, the service supports API key authentication, HTTPS encryption, and runs in user-space (not as root) for security.
Troubleshooting
See the Installation Guide and Troubleshooting Guide for detailed troubleshooting steps.
Quick Troubleshooting Tips
- Windows PyTorch errors: use `python scripts/install_windows.py`
- macOS Intel dependency conflicts: use `python install.py --force-compatible-deps`
- Recursion errors: run `python scripts/fix_sitecustomize.py`
- Environment verification: run `python scripts/verify_environment_enhanced.py`
- Memory issues: set `MCP_MEMORY_BATCH_SIZE=4` and try a smaller model
- Apple Silicon: ensure Python 3.10+ built for ARM64 and set `PYTORCH_ENABLE_MPS_FALLBACK=1`
- Installation testing: run `python scripts/test_installation.py`
📚 Comprehensive Documentation
Installation & Setup
- Master Installation Guide - Complete installation guide with hardware-specific paths
- Storage Backend Guide ⭐ NEW - Comprehensive CLI options including multi-client setup
- Multi-Client Setup ⭐ NEW - Integrated setup for any MCP application
- Storage Backend Comparison - Detailed comparison and selection guide
- Migration Guide - ChromaDB to SQLite-vec migration instructions
Platform-Specific Guides
- Intel Mac Setup Guide - Comprehensive guide for Intel Mac users
- Legacy Mac Guide - Optimized for 2015 MacBook Pro and older Intel Macs
- Windows Setup - Windows-specific installation and troubleshooting
- Ubuntu Setup - Linux server installation guide
API & Integration
- HTTP/SSE API - New web interface documentation
- Claude Desktop Integration - Configuration examples
- Integrations - Third-party tools and extensions
Advanced Topics
- Multi-Client Architecture ⭐ NEW - Technical implementation details
- Homebrew PyTorch Integration - Using system PyTorch
- Docker Deployment - Container-based deployment
- Performance Optimization - Tuning for different hardware
Troubleshooting & Support
- General Troubleshooting - Common issues and solutions
- Hardware Compatibility - Compatibility matrix and known issues
Quick Commands
Project Structure
Development Guidelines
- Python 3.10+ with type hints
- Use dataclasses for models
- Triple-quoted docstrings for modules and functions
- Async/await pattern for all I/O operations
- Follow PEP 8 style guidelines
- Include tests for new features
Git Setup for Contributors
After cloning the repository, run the setup script to configure automated `uv.lock` conflict resolution. This enables automatic resolution of `uv.lock` merge conflicts by:
- Using the incoming version to resolve conflicts
- Automatically running `uv sync` to regenerate the lock file
- Ensuring consistent dependency resolution across all environments
The setup is required only once per clone and benefits all contributors by eliminating manual conflict resolution.
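The setup script itself is not shown in this copy. As a rough sketch of the mechanism, such automation is typically a custom git merge driver; the names and commands below are illustrative assumptions, not the project's actual script:

```shell
# Illustrative only -- register a merge driver that keeps the incoming
# uv.lock (%B) and regenerates it with `uv sync`.
git config merge.uvlock.name "uv.lock merge driver"
git config merge.uvlock.driver "cp %B %A && uv sync"
echo "uv.lock merge=uvlock" >> .gitattributes
```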
License
MIT License - See LICENSE file for details
Acknowledgments
- ChromaDB team for the vector database
- Sentence Transformers project for embedding models
- MCP project for the protocol specification
🎯 Why Sponsor MCP Memory Service?
🏆 In Production
- Deployed on Glama.ai
- Managing 300+ enterprise memories
- Processing queries in <1 second
Production Impact
- 319+ memories actively managed
- 828ms average query response time
- 100% cache hit ratio performance
- 20MB efficient vector storage
Developer Community
- Complete MCP protocol implementation
- Cross-platform compatibility
- React dashboard with real-time statistics
- Comprehensive documentation
Enterprise Features
- Semantic search with sentence-transformers
- Tag-based categorization system
- Automatic backup and optimization
- Health monitoring dashboard
Contact
Integrations
The MCP Memory Service can be extended with various tools and utilities. See Integrations for a list of available options, including:
- MCP Memory Dashboard - Web UI for browsing and managing memories
- Claude Memory Context - Inject memory context into Claude project instructions