# Memory MCP
A Model Context Protocol (MCP) server for logging and retrieving memories from LLM conversations with intelligent context window caching capabilities.
## Features
- Save Memories: Store memories from LLM conversations with timestamps and LLM identification
- Retrieve Memories: Get all stored memories with detailed metadata
- Add Memories: Append new memories without overwriting existing ones
- Clear Memories: Remove all stored memories
- Context Window Caching: Archive, retrieve, and summarize conversation context
- Relevance Scoring: Automatically score archived content relevance to current context
- Tag-based Search: Categorize and search context by tags
- Conversation Orchestration: External system to manage context window caching
- MongoDB Storage: Persistent storage using MongoDB database
## Installation
- Install dependencies:
- Build the project:
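Assuming the standard npm workflow for a TypeScript project (the script names are illustrative — check `package.json` for the actual ones):

```shell
# Install dependencies
npm install

# Compile the TypeScript sources
npm run build
```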
## Configuration
Set the MongoDB connection string via environment variable:
Default: `mongodb://localhost:27017`
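For example (assuming the variable is named `MONGODB_URI` — check the server's source for the exact name):

```shell
export MONGODB_URI="mongodb://localhost:27017"
```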
## Usage
### Running the MCP Server
Start the MCP server:
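Assuming a conventional npm start script:

```shell
npm start
```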
### Running the Conversation Orchestrator Demo
Try the interactive CLI demo:
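Assuming a demo script is defined in `package.json` (the script name is illustrative):

```shell
npm run demo
```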
The CLI demo allows you to:
- Add messages to simulate conversation
- See automatic archiving when context gets full
- Trigger manual archiving and retrieval
- Create summaries of archived content
- Monitor conversation status and get recommendations
## Basic Memory Tools
- `save-memories`: Save all memories to the database, overwriting existing ones
  - `memories`: Array of memory strings to save
  - `llm`: Name of the LLM (e.g., 'chatgpt', 'claude')
  - `userId`: Optional user identifier
- `get-memories`: Retrieve all memories from the database
  - No parameters required
- `add-memories`: Add new memories to the database without overwriting existing ones
  - `memories`: Array of memory strings to add
  - `llm`: Name of the LLM (e.g., 'chatgpt', 'claude')
  - `userId`: Optional user identifier
- `clear-memories`: Clear all memories from the database
  - No parameters required
## Context Window Caching Tools
- `archive-context`: Archive context messages for a conversation with tags and metadata
  - `conversationId`: Unique identifier for the conversation
  - `contextMessages`: Array of context messages to archive
  - `tags`: Tags for categorizing the archived content
  - `llm`: Name of the LLM (e.g., 'chatgpt', 'claude')
  - `userId`: Optional user identifier
- `retrieve-context`: Retrieve relevant archived context for a conversation
  - `conversationId`: Unique identifier for the conversation
  - `tags`: Optional tags to filter by
  - `minRelevanceScore`: Minimum relevance score (0-1, default: 0.1)
  - `limit`: Maximum number of items to return (default: 10)
- `score-relevance`: Score the relevance of archived context against current conversation context
  - `conversationId`: Unique identifier for the conversation
  - `currentContext`: Current conversation context to compare against
  - `llm`: Name of the LLM (e.g., 'chatgpt', 'claude')
- `create-summary`: Create a summary of context items and link them to the summary
  - `conversationId`: Unique identifier for the conversation
  - `contextItems`: Context items to summarize
  - `summaryText`: Human-provided summary text
  - `llm`: Name of the LLM (e.g., 'chatgpt', 'claude')
  - `userId`: Optional user identifier
- `get-conversation-summaries`: Get all summaries for a specific conversation
  - `conversationId`: Unique identifier for the conversation
- `search-context-by-tags`: Search archived context and summaries by tags
  - `tags`: Tags to search for
## Example Usage in LLM
### Basic Memory Operations
- Save all memories (overwrites existing):
- Retrieve all memories:
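A sketch of the tool-call arguments (field names follow the parameter lists above; the values are illustrative):

```jsonc
// save-memories — overwrites existing memories
{
  "memories": ["User prefers TypeScript", "Project uses MongoDB"],
  "llm": "claude",
  "userId": "user-123"
}

// get-memories — no parameters required
{}
```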
### Context Window Caching Workflow
- Archive context when window gets full:
- Score relevance of archived content:
- Retrieve relevant archived context:
- Create summaries for long conversations:
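Illustrative arguments for the three main steps (values are invented for the example; parameter names follow the tool reference above):

```jsonc
// 1. archive-context — when the window gets full
{
  "conversationId": "conv-42",
  "contextMessages": ["Earlier discussion about the collection schema"],
  "tags": ["schema", "mongodb"],
  "llm": "claude"
}

// 2. score-relevance — compare archive against the current topic
{
  "conversationId": "conv-42",
  "currentContext": "How should we index the memories collection?",
  "llm": "claude"
}

// 3. retrieve-context — pull back only what is relevant
{
  "conversationId": "conv-42",
  "tags": ["mongodb"],
  "minRelevanceScore": 0.3,
  "limit": 5
}
```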
## Conversation Orchestration System
The `ConversationOrchestrator` class provides automatic context window management.
### Key Features
- Automatic Archiving: Archives content when context usage reaches 80%
- Intelligent Retrieval: Retrieves relevant content when usage drops below 30%
- Relevance Scoring: Uses keyword overlap to score archived content relevance
- Smart Tagging: Automatically generates tags based on content keywords
- Conversation State Management: Tracks active conversations and their context
- Recommendations: Provides suggestions for optimal context management
### Usage Example
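A minimal, self-contained sketch of the orchestration logic — not the real `ConversationOrchestrator` API, whose class and method names may differ. It models only the documented thresholds: archive at 80% context usage, retrieve when usage drops below 30%.

```typescript
// Illustrative sketch — the actual ConversationOrchestrator API may differ.
type Action = "archive" | "retrieve" | "none";

class ConversationOrchestratorSketch {
  private messages: string[] = [];
  private archived: string[] = [];

  constructor(private readonly maxContextChars = 1000) {}

  // Fraction of the context window currently in use.
  get usage(): number {
    const used = this.messages.reduce((n, m) => n + m.length, 0);
    return used / this.maxContextChars;
  }

  addMessage(text: string): Action {
    this.messages.push(text);
    if (this.usage >= 0.8) {
      // Archive the oldest half of the active context.
      const cut = Math.ceil(this.messages.length / 2);
      this.archived.push(...this.messages.splice(0, cut));
      return "archive";
    }
    if (this.usage < 0.3 && this.archived.length > 0) {
      // Pull one archived item back into the active context.
      this.messages.unshift(this.archived.pop()!);
      return "retrieve";
    }
    return "none";
  }
}

const orch = new ConversationOrchestratorSketch(100);
orch.addMessage("a".repeat(40));                // 40% usage — nothing happens
const action = orch.addMessage("b".repeat(45)); // 85% usage — triggers archiving
console.log(action); // "archive"
```

The real orchestrator additionally persists archived content through the MCP tools above rather than keeping it in memory.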
## Database Schema
### Basic Memory Structure
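A plausible shape for a basic memory document, inferred from the tool parameters above (the actual field names may differ):

```typescript
interface MemoryDocument {
  _id: string;      // MongoDB ObjectId
  memory: string;   // the memory text
  llm: string;      // e.g. "chatgpt", "claude"
  userId?: string;  // optional user identifier
  timestamp: Date;  // when the memory was saved
}
```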
### Extended Memory Structure (Context Caching)
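A guess at the additional fields used for context caching, based on the tool parameters and the summary-linking feature (illustrative only):

```typescript
interface ArchivedContextDocument {
  _id: string;
  conversationId: string;  // groups archived content per conversation
  content: string;         // the archived context message
  tags: string[];          // for tag-based search
  relevanceScore?: number; // 0-1, set by score-relevance
  summaryId?: string;      // link to a summary created by create-summary
  llm: string;
  userId?: string;
  timestamp: Date;
}
```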
## Context Window Caching Workflow
The orchestration system automatically:
- Monitors conversation length and context usage
- Archives content when context usage reaches 80%
- Scores relevance of archived content against current context
- Retrieves relevant content when usage drops below 30%
- Creates summaries to condense very long conversations
### Key Features
- Conversation Grouping: All archived content is linked to specific conversation IDs
- Relevance Scoring: Simple keyword overlap scoring (can be enhanced with semantic similarity)
- Tag-based Organization: Categorize content for easy retrieval
- Summary Linking: Preserve links between summaries and original content
- Backward Compatibility: All existing memory functions work unchanged
- Automatic Management: No manual intervention required for basic operations
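The keyword-overlap scoring mentioned above can be sketched as follows (an illustrative Jaccard-style overlap, not the server's actual implementation):

```typescript
// Score relevance as shared keywords divided by total distinct keywords.
function scoreRelevance(archivedText: string, currentContext: string): number {
  const words = (s: string) =>
    new Set(s.toLowerCase().match(/[a-z0-9]+/g) ?? []);
  const a = words(archivedText);
  const b = words(currentContext);
  if (a.size === 0 || b.size === 0) return 0;
  let shared = 0;
  for (const w of a) if (b.has(w)) shared++;
  return shared / new Set([...a, ...b]).size;
}

console.log(scoreRelevance("MongoDB index design", "How should we index MongoDB?"));
```

As the feature list notes, this kind of lexical overlap could be replaced with semantic similarity (e.g., embedding cosine distance) without changing the workflow.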
## Development
To run in development mode:
To run the CLI demo:
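Assuming conventional npm script names (check `package.json` for the actual ones):

```shell
npm run dev    # development mode
npm run demo   # interactive CLI demo
```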
## License
ISC