
MCP AI Memory

by scanadi


A production-ready Model Context Protocol (MCP) server for semantic memory management that enables AI agents to store, retrieve, and manage contextual knowledge across sessions.

Features

  • TypeScript - Full type safety with strict mode
  • PostgreSQL + pgvector - Vector similarity search with HNSW indexing
  • Kysely ORM - Type-safe SQL queries
  • Local Embeddings - Uses Transformers.js (no API calls)
  • Intelligent Caching - Redis with automatic in-memory fallback for fast reads
  • Multi-Agent Support - User context isolation
  • Memory Relationships - Graph structure for connected knowledge
  • Soft Deletes - Data recovery with deleted_at timestamps
  • Clustering - Automatic memory consolidation
  • Token Efficient - Embeddings removed from responses

Prerequisites

  • Node.js 18+ or Bun
  • PostgreSQL with pgvector extension
  • Redis (optional - falls back to in-memory cache if not available)

Installation

NPM Package (Recommended for Claude Desktop)

npm install -g mcp-ai-memory

From Source

  1. Install dependencies:

bun install

  2. Set up PostgreSQL with pgvector:

CREATE DATABASE mcp_ai_memory;
\c mcp_ai_memory
CREATE EXTENSION IF NOT EXISTS vector;

  3. Create environment file:

# Create .env with your database credentials
touch .env

  4. Run migrations:

bun run migrate

Usage

Development

bun run dev

Production

bun run build
bun run start

Claude Desktop Integration

Quick Setup (NPM)

Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "mcp-ai-memory"],
      "env": {
        "DATABASE_URL": "postgresql://username:password@localhost:5432/memory_db"
      }
    }
  }
}

With Optional Redis Cache

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "mcp-ai-memory"],
      "env": {
        "DATABASE_URL": "postgresql://username:password@localhost:5432/memory_db",
        "REDIS_URL": "redis://localhost:6379",
        "EMBEDDING_MODEL": "Xenova/all-MiniLM-L6-v2",
        "LOG_LEVEL": "info"
      }
    }
  }
}

Environment Variables

| Variable | Description | Default |
| --- | --- | --- |
| DATABASE_URL | PostgreSQL connection string | Required |
| REDIS_URL | Redis connection string (optional) | None - uses in-memory cache |
| EMBEDDING_MODEL | Transformers.js model | Xenova/all-MiniLM-L6-v2 |
| LOG_LEVEL | Logging level | info |
| CACHE_TTL | Cache TTL in seconds | 3600 |
| MAX_MEMORIES_PER_QUERY | Max results per search | 10 |
| MIN_SIMILARITY_SCORE | Min similarity threshold | 0.5 |
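A loader for these variables might look like the sketch below. The variable names and defaults come from the table above; the loader itself is illustrative, not the server's actual code.

```typescript
// Illustrative config loader for the variables in the table above.
interface MemoryConfig {
  databaseUrl: string;
  redisUrl?: string;
  embeddingModel: string;
  logLevel: string;
  cacheTtl: number;
  maxMemoriesPerQuery: number;
  minSimilarityScore: number;
}

function loadConfig(env: Record<string, string | undefined>): MemoryConfig {
  const databaseUrl = env.DATABASE_URL;
  if (!databaseUrl) throw new Error("DATABASE_URL is required");
  return {
    databaseUrl,
    redisUrl: env.REDIS_URL, // undefined => in-memory cache fallback
    embeddingModel: env.EMBEDDING_MODEL ?? "Xenova/all-MiniLM-L6-v2",
    logLevel: env.LOG_LEVEL ?? "info",
    cacheTtl: Number(env.CACHE_TTL ?? 3600),
    maxMemoriesPerQuery: Number(env.MAX_MEMORIES_PER_QUERY ?? 10),
    minSimilarityScore: Number(env.MIN_SIMILARITY_SCORE ?? 0.5),
  };
}
```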

Available Tools

Core Operations

  • memory_store - Store memories with embeddings
  • memory_search - Semantic similarity search
  • memory_list - List memories with filtering
  • memory_update - Update memory metadata
  • memory_delete - Delete memories
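These tools are invoked with JSON arguments over MCP. A hypothetical pair of calls is sketched below; the tool names are from the list above, but the argument field names are assumptions (the server's Zod schemas are authoritative).

```typescript
// Hypothetical tool-call payloads. Only the tool names are taken from the
// documentation; argument field names are illustrative.
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

const storeCall: ToolCall = {
  name: "memory_store",
  arguments: {
    content: "User prefers TypeScript strict mode", // assumed field
    tags: ["typescript", "preferences"],            // assumed field
  },
};

const searchCall: ToolCall = {
  name: "memory_search",
  arguments: {
    query: "What language settings does the user prefer?", // assumed field
    limit: 5, // bounded by MAX_MEMORIES_PER_QUERY
  },
};
```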

Advanced Operations

  • memory_batch - Bulk store memories
  • memory_batch_delete - Bulk delete memories by IDs
  • memory_graph_search - Traverse relationships
  • memory_consolidate - Cluster similar memories
  • memory_stats - Database statistics
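The kind of relationship traversal memory_graph_search performs can be sketched as a breadth-first walk over relationship edges up to a depth limit. The data shapes below are assumptions for illustration, not the server's schema.

```typescript
// Illustrative breadth-first traversal over memory relationship edges,
// similar in spirit to what a graph search tool does. Shapes are assumed.
interface Relation { from: string; to: string; kind: string }

function traverse(start: string, relations: Relation[], maxDepth: number): string[] {
  const visited = new Set<string>([start]);
  let frontier = [start];
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const id of frontier) {
      for (const r of relations) {
        if (r.from === id && !visited.has(r.to)) {
          visited.add(r.to);
          next.push(r.to);
        }
      }
    }
    frontier = next;
  }
  visited.delete(start); // return only reachable memories, not the origin
  return [...visited];
}
```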

Resources

  • memory://stats - Database statistics
  • memory://types - Available memory types
  • memory://tags - All unique tags
  • memory://relationships - Memory relationships
  • memory://clusters - Memory clusters

Prompts

  • load-context - Load relevant context for a task
  • memory-summary - Generate topic summaries
  • conversation-context - Load conversation history

Architecture

src/
├── server.ts   # MCP server implementation
├── types/      # TypeScript definitions
├── schemas/    # Zod validation schemas
├── services/   # Business logic
├── database/   # Kysely migrations and client
└── config/     # Configuration management

Environment Variables

# Required
MEMORY_DB_URL=postgresql://user:password@localhost:5432/mcp_ai_memory

# Optional - Caching (falls back to in-memory if Redis unavailable)
REDIS_URL=redis://localhost:6379
CACHE_TTL=3600                # 1 hour default cache
EMBEDDING_CACHE_TTL=86400     # 24 hours for embeddings
SEARCH_CACHE_TTL=3600         # 1 hour for search results
MEMORY_CACHE_TTL=7200         # 2 hours for individual memories

# Optional - Model & Performance
EMBEDDING_MODEL=Xenova/all-mpnet-base-v2
LOG_LEVEL=info
MAX_CONTENT_SIZE=1048576
DEFAULT_SEARCH_LIMIT=20
DEFAULT_SIMILARITY_THRESHOLD=0.7

# Optional - Async Processing (requires Redis)
ENABLE_ASYNC_PROCESSING=true  # Enable background job processing
BULL_CONCURRENCY=3            # Worker concurrency
ENABLE_REDIS_CACHE=true       # Enable Redis caching

Caching Architecture

The server implements a two-tier caching strategy:

  1. Redis Cache (if available) - Distributed, persistent caching
  2. In-Memory Cache (fallback) - Local NodeCache for when Redis is unavailable
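The two-tier strategy can be sketched as follows: try the primary (Redis-backed) tier when configured, and fall back to a local TTL map when it is absent or errors. The real server uses Redis and NodeCache; this stand-in only shows the shape of the fallback logic.

```typescript
// Minimal sketch of a two-tier cache with in-memory fallback.
// The real implementation uses Redis and NodeCache; names here are assumed.
interface CacheBackend {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

class InMemoryCache implements CacheBackend {
  private store = new Map<string, { value: string; expiresAt: number }>();
  async get(key: string) {
    const entry = this.store.get(key);
    if (!entry || entry.expiresAt < Date.now()) return undefined;
    return entry.value;
  }
  async set(key: string, value: string, ttlSeconds: number) {
    this.store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }
}

class TieredCache implements CacheBackend {
  // primary would be the Redis-backed tier when available; undefined otherwise
  constructor(
    private primary: CacheBackend | undefined,
    private fallback: CacheBackend = new InMemoryCache()
  ) {}
  async get(key: string) {
    if (this.primary) {
      try { return await this.primary.get(key); } catch { /* fall through */ }
    }
    return this.fallback.get(key);
  }
  async set(key: string, value: string, ttlSeconds: number) {
    if (this.primary) {
      try { await this.primary.set(key, value, ttlSeconds); return; } catch { /* fall through */ }
    }
    await this.fallback.set(key, value, ttlSeconds);
  }
}
```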

Async Job Processing

When Redis is available and ENABLE_ASYNC_PROCESSING=true, the server uses BullMQ for background job processing:

Features

  • Async Embedding Generation: Offloads CPU-intensive embedding generation to background workers
  • Batch Import: Processes large memory imports without blocking the main server
  • Memory Consolidation: Runs clustering and merging operations in the background
  • Automatic Retries: Failed jobs are retried with exponential backoff
  • Dead Letter Queue: Permanently failed jobs are tracked for manual intervention
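The retry behavior above follows exponential backoff: each failed attempt waits roughly twice as long as the previous one. BullMQ supports this via per-job retry options; the helper below merely computes such a delay schedule for illustration.

```typescript
// Compute an exponential backoff schedule: attempt i waits baseMs * 2^i.
// Purely illustrative; BullMQ applies this internally from job options.
function backoffDelays(attempts: number, baseMs: number): number[] {
  return Array.from({ length: attempts }, (_, i) => baseMs * 2 ** i);
}

// backoffDelays(4, 1000) => [1000, 2000, 4000, 8000]
```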

Running Workers

# Start all workers
bun run workers

# Or start individual workers
bun run worker:embedding   # Embedding generation worker
bun run worker:batch       # Batch import and consolidation worker

# Test async processing
bun run test:async

Queue Monitoring

The memory_stats tool includes queue statistics when async processing is enabled:

  • Active, waiting, completed, and failed job counts
  • Processing rates and performance metrics
  • Worker health status

Cache Invalidation

  • Memory updates/deletes automatically invalidate relevant caches
  • Search results are cached with query+filter combinations
  • Embeddings are cached for 24 hours (configurable)
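Caching search results "with query+filter combinations" implies a deterministic cache key built from both. One way to sketch that (the key format is an assumption, not the server's actual scheme):

```typescript
// Build a deterministic cache key from a query plus its filters.
// Key format is illustrative only.
function searchCacheKey(query: string, filters: Record<string, unknown>): string {
  // Sort filter keys so logically equal queries hit the same cache entry
  const sorted = Object.keys(filters)
    .sort()
    .map((k) => `${k}=${JSON.stringify(filters[k])}`)
    .join("&");
  return `search:${query}:${sorted}`;
}
```

Sorting the filter keys means `{type: "fact", tag: "x"}` and `{tag: "x", type: "fact"}` share one cache entry, which raises the hit rate for equivalent queries.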

Development

Type Checking

bun run typecheck

Linting

bun run lint

Implementation Status

✅ Fully Integrated Features

  • DBSCAN Clustering: Advanced clustering algorithm for memory consolidation
  • Smart Compression: Automatic compression for large memories (>100KB)
  • Context Window Management: Token counting and intelligent truncation
  • Input Sanitization: Comprehensive validation and sanitization
  • All Workers Active: Embedding, batch, and clustering workers all operational
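The size-gated compression above can be sketched with Node's built-in zlib: content over the threshold (100 KB, per the list) is gzip-compressed before storage, and a flag records which path was taken. Function and field names are illustrative.

```typescript
import { gzipSync, gunzipSync } from "node:zlib";

// Illustrative size-gated compression: only content above the threshold
// is compressed, and a flag records how to decode it later.
const COMPRESSION_THRESHOLD = 100 * 1024; // 100KB, per the feature list

function maybeCompress(content: string): { data: Buffer; compressed: boolean } {
  const raw = Buffer.from(content, "utf8");
  if (raw.byteLength <= COMPRESSION_THRESHOLD) {
    return { data: raw, compressed: false };
  }
  return { data: gzipSync(raw), compressed: true };
}

function decode(stored: { data: Buffer; compressed: boolean }): string {
  return stored.compressed
    ? gunzipSync(stored.data).toString("utf8")
    : stored.data.toString("utf8");
}
```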

Testing

The project includes a comprehensive test suite covering:

  • Memory service operations (store, search, update, delete)
  • Input validation and sanitization
  • Clustering and consolidation
  • Compression for large content

Run tests with bun test.

License

MIT


