MCP AI Memory
A production-ready Model Context Protocol (MCP) server for semantic memory management that enables AI agents to store, retrieve, and manage contextual knowledge across sessions.
📖 System Prompt Available: See SYSTEM_PROMPT.md for a comprehensive guide on how to instruct AI models to use this memory system effectively. This prompt helps models understand when and how to use memory tools, especially for proactive memory retrieval.
Features
Core Capabilities
TypeScript - Full type safety with strict mode
PostgreSQL + pgvector - Vector similarity search with HNSW indexing
Kysely ORM - Type-safe SQL queries
Local Embeddings - Uses Transformers.js (no API calls)
Intelligent Caching - Redis + in-memory fallback for blazing fast performance
Multi-Agent Support - User context isolation
Token Efficient - Embeddings removed from responses
Advanced Memory Management
Graph Relationships - Rich relationship types (references, contradicts, supports, extends, causes, precedes, etc.)
Graph Traversal - BFS/DFS algorithms with depth limits and filtering
Memory Decay - Automatic lifecycle management with exponential decay
Memory States - Active, dormant, archived, and expired states
Preservation - Protect important memories from decay
Soft Deletes - Data recovery with deleted_at timestamps
Clustering - Automatic memory consolidation
Compression - Automatic compression of archived memories
Prerequisites
Node.js 18+ or Bun
PostgreSQL with pgvector extension
Redis (optional - falls back to in-memory cache if not available)
Installation
NPM Package (Recommended for Claude Desktop)
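The package runs directly via npx, which is also how Claude Desktop launches it (see the integration section below):

```bash
# No separate install step needed; npx fetches and runs the published package
npx -y mcp-ai-memory
```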
From Source
1. Install dependencies
2. Set up PostgreSQL with the pgvector extension
3. Create an environment file
4. Run migrations
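A combined sketch of the four steps, assuming a Bun-based checkout with an `.env.example` template (file and script names may differ in your copy; `bun run migrate` is the migration command referenced in the troubleshooting section):

```bash
# 1. Install dependencies
bun install

# 2. Enable pgvector in your database (database name is an example)
psql -d your_database -c "CREATE EXTENSION IF NOT EXISTS vector;"

# 3. Create the environment file, then set MEMORY_DB_URL (and optionally Redis)
cp .env.example .env

# 4. Run migrations
bun run migrate
```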
Usage
Development
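Assuming the conventional script name (check package.json for the exact scripts):

```bash
bun run dev
```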
Production
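Again assuming conventional script names:

```bash
bun run build
bun run start
```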
Troubleshooting
Embedding Dimension Mismatch Error
If you see an error like:
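The exact numbers depend on the models involved, but the message follows pgvector's dimension-mismatch format, roughly:

```
error: expected 768 dimensions, not 384
```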
This occurs when the embedding model changes between sessions. To fix:
Option 1: Reset and Re-embed (Recommended for new installations)
```bash
# Clear existing memories and start fresh
psql -d your_database -c "TRUNCATE TABLE memories CASCADE;"
```

Option 2: Specify a Consistent Model

Add EMBEDDING_MODEL to your Claude Desktop config:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "mcp-ai-memory"],
      "env": {
        "MEMORY_DB_URL": "postgresql://...",
        "EMBEDDING_MODEL": "Xenova/all-mpnet-base-v2"
      }
    }
  }
}
```

Common models:

- `Xenova/all-mpnet-base-v2` (768 dimensions - default, best quality)
- `Xenova/all-MiniLM-L6-v2` (384 dimensions - smaller/faster)

Option 3: Run Migration for Flexible Dimensions

If you're using the source version:

```bash
bun run migrate
```

This allows mixing different embedding dimensions in the same database.
Database Connection Issues
Ensure your PostgreSQL has the pgvector extension:
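Enabling it is a one-time statement per database (superuser privileges may be required):

```bash
psql -d your_database -c "CREATE EXTENSION IF NOT EXISTS vector;"
```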
Claude Desktop Integration
💡 For Best Results: Include the SYSTEM_PROMPT.md content in your Claude Desktop system prompt or initial conversation to help Claude understand how to use the memory tools effectively.
Quick Setup (NPM)
Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
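A minimal config, mirroring the example shown in the troubleshooting section:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "mcp-ai-memory"],
      "env": {
        "MEMORY_DB_URL": "postgresql://..."
      }
    }
  }
}
```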
With Optional Redis Cache
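The same config with Redis added; the `REDIS_URL` variable name is inferred from the environment table below:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "mcp-ai-memory"],
      "env": {
        "MEMORY_DB_URL": "postgresql://...",
        "REDIS_URL": "redis://localhost:6379"
      }
    }
  }
}
```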
Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `MEMORY_DB_URL` | PostgreSQL connection string | Required |
| `REDIS_URL` | Redis connection string (optional) | None - uses in-memory cache |
| `EMBEDDING_MODEL` | Transformers.js model | `Xenova/all-mpnet-base-v2` |
| `LOG_LEVEL` | Logging level | |
| `CACHE_TTL` | Cache TTL in seconds | |
| `MAX_SEARCH_RESULTS` | Max results per search | `10` |
| `SIMILARITY_THRESHOLD` | Min similarity threshold | |
Available Tools
💡 Token Efficiency: Default limits are set to 10 results to optimize token usage. Increase only when needed.
Core Operations (Most Important)
- `memory_search` - SEARCH FIND RECALL - Search stored information using natural language (USE THIS FIRST! Default limit: 10)
- `memory_list` - LIST BROWSE SHOW - List all memories chronologically (fallback when search fails, default limit: 10)
- `memory_store` - STORE SAVE REMEMBER - Store new information after checking for duplicates
- `memory_update` - UPDATE MODIFY EDIT - Update existing memory metadata
- `memory_delete` - DELETE REMOVE FORGET - Delete specific memories
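For illustration, a `memory_store` call might carry arguments like these (field names are assumptions based on the behaviors described in this README, not the authoritative tool schema):

```json
{
  "content": {
    "name": "Ada",
    "preferences": { "editor": "vim", "language": "TypeScript" }
  },
  "type": "fact",
  "tags": ["user", "preferences"]
}
```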
Advanced Operations
- `memory_batch` - BATCH BULK IMPORT - Store multiple memories efficiently
- `memory_batch_delete` - Delete multiple memories at once
- `memory_graph_search` - GRAPH RELATED - Search with relationship traversal (alias for memory_traverse)
- `memory_consolidate` - MERGE CLUSTER - Group similar memories
- `memory_stats` - STATS INFO - Database statistics
- `memory_relate` - LINK CONNECT - Create memory relationships
- `memory_unrelate` - UNLINK DISCONNECT - Remove relationships
- `memory_get_relations` - Show all relationships for a memory
Graph & Decay Operations (New)
- `memory_traverse` - TRAVERSE EXPLORE - Traverse memory graph with BFS/DFS algorithms
- `memory_graph_analysis` - ANALYZE CONNECTIONS - Analyze graph connectivity and relationship patterns
- `memory_decay_status` - DECAY STATUS - Check decay status of a memory
- `memory_preserve` - PRESERVE PROTECT - Preserve important memories from decay
Resources
- `memory://stats` - Database statistics
- `memory://types` - Available memory types
- `memory://tags` - All unique tags
- `memory://relationships` - Memory relationships
- `memory://clusters` - Memory clusters
Prompts
- `load-context` - Load relevant context for a task
- `memory-summary` - Generate topic summaries
- `conversation-context` - Load conversation history
Architecture
Caching Architecture
The server implements a two-tier caching strategy:

1. Redis Cache (if available) - Distributed, persistent caching
2. In-Memory Cache (fallback) - Local NodeCache used when Redis is unavailable
Async Job Processing
When Redis is available and `ENABLE_ASYNC_PROCESSING=true`, the server uses BullMQ for background job processing:
Features
Async Embedding Generation: Offloads CPU-intensive embedding generation to background workers
Batch Import: Processes large memory imports without blocking the main server
Memory Consolidation: Runs clustering and merging operations in the background
Automatic Retries: Failed jobs are retried with exponential backoff
Dead Letter Queue: Permanently failed jobs are tracked for manual intervention
Running Workers
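A sketch, assuming a dedicated worker script (the actual script name may differ):

```bash
# Workers run in a separate process from the MCP server
bun run worker
```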
Queue Monitoring
The `memory_stats` tool includes queue statistics when async processing is enabled:
Active, waiting, completed, and failed job counts
Processing rates and performance metrics
Worker health status
Cache Invalidation
Memory updates/deletes automatically invalidate relevant caches
Search results are cached with query+filter combinations
Embeddings are cached for 24 hours (configurable)
Development
Type Checking
Linting
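Assuming conventional script names (check package.json):

```bash
bun run typecheck
bun run lint
```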
Using with AI Models
System Prompt for Better Memory Usage
The memory tools include enhanced descriptions with keywords to help models understand when to use each tool. However, for best results with models like Gemma3, Qwen, or other open-source models:
Include the System Prompt: Copy the content from SYSTEM_PROMPT.md and include it in your initial conversation or system prompt
Key Behaviors to Reinforce:
- Always use `memory_search` FIRST before any operation
- Use `memory_list` as a fallback when search returns no results
- Search for user information at conversation start (e.g., "user name preferences")
- Store structured JSON in the content field
Example Initial Prompt for Models
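An illustrative prompt distilled from the behaviors above (not the project's official SYSTEM_PROMPT.md):

```
You have access to persistent memory tools. At the start of every conversation,
call memory_search (e.g., "user name preferences") to load context. Always
search before storing to avoid duplicates, fall back to memory_list when search
returns nothing, and store structured JSON in the content field.
```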
Implementation Status
✅ Fully Integrated Features
DBSCAN Clustering: Advanced clustering algorithm for memory consolidation
Smart Compression: Automatic compression for large memories (>100KB)
Context Window Management: Token counting and intelligent truncation
Input Sanitization: Comprehensive validation and sanitization
All Workers Active: Embedding, batch, and clustering workers all operational
Testing
The project includes a comprehensive test suite covering:
Memory service operations (store, search, update, delete)
Input validation and sanitization
Clustering and consolidation
Compression for large content
Run tests with `bun test`.
License
MIT