# MCP AI Memory

A production-ready Model Context Protocol (MCP) server for semantic memory management that enables AI agents to store, retrieve, and manage contextual knowledge across sessions.

## Features
- TypeScript - Full type safety with strict mode
- PostgreSQL + pgvector - Vector similarity search with HNSW indexing
- Kysely ORM - Type-safe SQL queries
- Local Embeddings - Uses Transformers.js (no API calls)
- Intelligent Caching - Redis + in-memory fallback for blazing fast performance
- Multi-Agent Support - User context isolation
- Memory Relationships - Graph structure for connected knowledge
- Soft Deletes - Data recovery with deleted_at timestamps
- Clustering - Automatic memory consolidation
- Token Efficient - Embeddings removed from responses
## Prerequisites
- Node.js 18+ or Bun
- PostgreSQL with pgvector extension
- Redis (optional - falls back to in-memory cache if not available)
## Installation
### NPM Package (Recommended for Claude Desktop)
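If published to npm, the server can be run directly with `npx`; the package name `mcp-ai-memory` is inferred from the project title and should be verified against the registry:

```bash
npx -y mcp-ai-memory
```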
### From Source
1. Install dependencies:
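   Assuming Bun as the package manager (npm works equally well):

   ```bash
   bun install
   ```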
2. Set up PostgreSQL with pgvector:
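   For example, with a local PostgreSQL instance (the database name is illustrative):

   ```bash
   createdb mcp_ai_memory
   psql -d mcp_ai_memory -c "CREATE EXTENSION IF NOT EXISTS vector;"
   ```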
3. Create environment file:
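   A minimal `.env` using the variables documented below (values are placeholders):

   ```bash
   DATABASE_URL=postgresql://user:password@localhost:5432/mcp_ai_memory
   # Optional: falls back to the in-memory cache if omitted
   REDIS_URL=redis://localhost:6379
   ```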
4. Run migrations:
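   The migration script name is an assumption; check `package.json` for the actual script:

   ```bash
   bun run migrate
   ```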
## Usage
### Development
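Assuming a conventional `dev` script (check `package.json` for the actual name):

```bash
bun run dev
```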
### Production
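Likewise assuming conventional build/start scripts:

```bash
bun run build
bun start
```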
## Claude Desktop Integration
### Quick Setup (NPM)
Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
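A plausible configuration; the server key and the `mcp-ai-memory` package name are assumptions, not confirmed by the source:

```json
{
  "mcpServers": {
    "ai-memory": {
      "command": "npx",
      "args": ["-y", "mcp-ai-memory"],
      "env": {
        "DATABASE_URL": "postgresql://user:password@localhost:5432/mcp_ai_memory"
      }
    }
  }
}
```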
### With Optional Redis Cache
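The same sketch with `REDIS_URL` added to enable distributed caching:

```json
{
  "mcpServers": {
    "ai-memory": {
      "command": "npx",
      "args": ["-y", "mcp-ai-memory"],
      "env": {
        "DATABASE_URL": "postgresql://user:password@localhost:5432/mcp_ai_memory",
        "REDIS_URL": "redis://localhost:6379"
      }
    }
  }
}
```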
## Environment Variables
| Variable | Description | Default |
|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | Required |
| `REDIS_URL` | Redis connection string (optional) | None (uses in-memory cache) |
| `EMBEDDING_MODEL` | Transformers.js model | `Xenova/all-MiniLM-L6-v2` |
| `LOG_LEVEL` | Logging level | `info` |
| `CACHE_TTL` | Cache TTL in seconds | `3600` |
| `MAX_MEMORIES_PER_QUERY` | Max results per search | `10` |
| `MIN_SIMILARITY_SCORE` | Min similarity threshold | `0.5` |
## Available Tools
### Core Operations

- `memory_store` - Store memories with embeddings
- `memory_search` - Semantic similarity search
- `memory_list` - List memories with filtering
- `memory_update` - Update memory metadata
- `memory_delete` - Delete memories
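For illustration, a `memory_store` call as an MCP client might issue it; the argument names are assumptions, not the server's documented schema:

```json
{
  "name": "memory_store",
  "arguments": {
    "content": "User prefers TypeScript strict mode",
    "type": "preference",
    "tags": ["typescript", "config"]
  }
}
```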
### Advanced Operations

- `memory_batch` - Bulk store memories
- `memory_batch_delete` - Bulk delete memories by IDs
- `memory_graph_search` - Traverse relationships
- `memory_consolidate` - Cluster similar memories
- `memory_stats` - Database statistics
## Resources

- `memory://stats` - Database statistics
- `memory://types` - Available memory types
- `memory://tags` - All unique tags
- `memory://relationships` - Memory relationships
- `memory://clusters` - Memory clusters
## Prompts

- `load-context` - Load relevant context for a task
- `memory-summary` - Generate topic summaries
- `conversation-context` - Load conversation history
## Architecture
### Caching Architecture
The server implements a two-tier caching strategy (see the sketch after this list):

- Redis Cache (if available) - Distributed, persistent caching
- In-Memory Cache (fallback) - Local NodeCache used when Redis is unavailable
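A minimal sketch of how such a fallback can work, assuming `ioredis` and `node-cache`; names and structure are illustrative, not the server's actual code:

```typescript
import Redis from "ioredis";
import NodeCache from "node-cache";

const local = new NodeCache({ stdTTL: 3600 }); // mirrors the CACHE_TTL default
const redis = process.env.REDIS_URL ? new Redis(process.env.REDIS_URL) : null;

async function cacheGet<T>(key: string): Promise<T | undefined> {
  if (redis) {
    try {
      const hit = await redis.get(key);
      if (hit !== null) return JSON.parse(hit) as T;
    } catch {
      // Redis unreachable: fall through to the in-memory tier
    }
  }
  return local.get<T>(key);
}

async function cacheSet<T>(key: string, value: T, ttlSeconds = 3600): Promise<void> {
  local.set(key, value, ttlSeconds);
  if (redis) {
    try {
      await redis.set(key, JSON.stringify(value), "EX", ttlSeconds);
    } catch {
      // In-memory copy is already stored; ignore Redis failures
    }
  }
}
```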
### Async Job Processing
When Redis is available and `ENABLE_ASYNC_PROCESSING=true`, the server uses BullMQ for background job processing.
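A sketch of BullMQ producer/worker wiring under these assumptions; the queue and job names are illustrative:

```typescript
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };

// Producer: enqueue embedding work with retries and exponential backoff
const embeddingQueue = new Queue("embeddings", { connection });
await embeddingQueue.add(
  "generate",
  { memoryId: 42, text: "content to embed" },
  { attempts: 3, backoff: { type: "exponential", delay: 1000 } }
);

// Consumer: a separate worker process picks jobs off the queue,
// keeping CPU-intensive embedding generation off the main server
const worker = new Worker(
  "embeddings",
  async (job) => {
    console.log(`embedding memory ${job.data.memoryId}`);
  },
  { connection }
);
```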
#### Features
- Async Embedding Generation: Offloads CPU-intensive embedding generation to background workers
- Batch Import: Processes large memory imports without blocking the main server
- Memory Consolidation: Runs clustering and merging operations in the background
- Automatic Retries: Failed jobs are retried with exponential backoff
- Dead Letter Queue: Permanently failed jobs are tracked for manual intervention
#### Running Workers
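Workers typically run as a separate process from the server; the script name here is an assumption, so check `package.json`:

```bash
bun run worker
```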
#### Queue Monitoring
The `memory_stats` tool includes queue statistics when async processing is enabled:
- Active, waiting, completed, and failed job counts
- Processing rates and performance metrics
- Worker health status
### Cache Invalidation

- Memory updates/deletes automatically invalidate the relevant cache entries
- Search results are cached per query + filter combination (see the sketch below)
- Embeddings are cached for 24 hours (configurable)
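One plausible key-derivation scheme for query + filter caching; the real server's key format is not documented here:

```typescript
import { createHash } from "node:crypto";
import type { Redis } from "ioredis";

// Hash the query plus serialized filters so each combination caches separately
function searchCacheKey(query: string, filters: Record<string, unknown>): string {
  const digest = createHash("sha256")
    .update(query)
    .update(JSON.stringify(filters))
    .digest("hex");
  return `search:${digest}`;
}

// On update/delete, drop every key derived from the affected memory
// (key prefixes are hypothetical)
async function invalidateMemory(redis: Redis, memoryId: number): Promise<void> {
  await redis.del(`memory:${memoryId}`, `embedding:${memoryId}`);
}
```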
## Development
### Type Checking
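Assuming the standard TypeScript compiler check (a dedicated `typecheck` script may exist instead):

```bash
bunx tsc --noEmit
```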
### Linting
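The lint script name is an assumption; check `package.json`:

```bash
bun run lint
```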
## Implementation Status

### ✅ Fully Integrated Features
- DBSCAN Clustering: Advanced clustering algorithm for memory consolidation
- Smart Compression: Automatic compression for large memories (>100KB)
- Context Window Management: Token counting and intelligent truncation
- Input Sanitization: Comprehensive validation and sanitization
- All Workers Active: Embedding, batch, and clustering workers all operational
## Testing
The project includes a comprehensive test suite covering:
- Memory service operations (store, search, update, delete)
- Input validation and sanitization
- Clustering and consolidation
- Compression for large content
Run tests with `bun test`.
## License
MIT