Local Memory MCP Server v2.3.0

Privacy-First AI Memory for True Intelligence

A production-ready Model Context Protocol (MCP) server that provides private, local memory for AI assistants. Your conversations, insights, and accumulated knowledge belong to you, secured on your own machine rather than in a commercial cloud.

✅ Production Status

Fully tested and production-ready with comprehensive test suite, robust error handling, performance optimization, and clean codebase.

🧠 Why Local Memory Matters

Your AI's memory IS your competitive advantage. Every interaction should compound into something uniquely yours. This transforms generic AI responses into personalized intelligence that grows with your specific needs, projects, and expertise.

🔐 Privacy & Ownership First

  • Your Data, Your Control: Every memory stays on YOUR machine
  • Zero Cloud Dependencies: No corporate surveillance or data mining
  • Compliance Ready: Meet GDPR, HIPAA, and enterprise security requirements

🎯 Intelligence That Grows

  • Cumulative Learning: AI remembers context across weeks, months, and years
  • Specialized Knowledge: Build domain-specific intelligence in your field
  • Pattern Recognition: Discover connections from accumulated knowledge
  • Contextual Understanding: AI that truly "knows" your projects and preferences

🛠️ Available Tools

Core Memory Management

💾 store_memory

Store new memories with contextual information and automatic AI embedding generation.

  • content (string): The content to store
  • importance (number, optional): Importance score (1-10, default: 5)
  • tags (string[], optional): Tags for categorization
  • session_id (string, optional): Session identifier
  • source (string, optional): Source of the information
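As a sketch, a store_memory call's arguments could be modeled in TypeScript like this. Only the field names and the default importance of 5 come from the list above; the StoreMemoryArgs interface and withDefaults helper are illustrative, not part of the server's API:

```typescript
// Illustrative shape of store_memory arguments; field names follow the
// documented parameter list. withDefaults is a hypothetical helper.
interface StoreMemoryArgs {
  content: string;      // required: the content to store
  importance?: number;  // optional: 1-10, default 5
  tags?: string[];      // optional: categorization tags
  session_id?: string;  // optional: session identifier
  source?: string;      // optional: source of the information
}

// Apply the documented default importance of 5 when none is given.
function withDefaults(args: StoreMemoryArgs): StoreMemoryArgs {
  return { importance: 5, ...args };
}

const storeCall = withDefaults({
  content: "Our API endpoint is https://api.example.com/v1",
  tags: ["api", "infrastructure"],
});
```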
🔍 search_memories

Search memories using full-text search or AI-powered semantic search.

  • query (string): Search query
  • use_ai (boolean, optional): Enable AI semantic search (default: false)
  • limit (number, optional): Maximum results (default: 10)
  • min_importance (number, optional): Minimum importance filter
  • session_id (string, optional): Filter by session
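A minimal sketch of the search_memories argument shape with the documented defaults applied (use_ai: false, limit: 10). The normalize helper is an assumption for illustration, not the server's actual code:

```typescript
// Illustrative search_memories arguments with documented defaults.
interface SearchMemoriesArgs {
  query: string;           // required: search query
  use_ai?: boolean;        // semantic search toggle, default false
  limit?: number;          // maximum results, default 10
  min_importance?: number; // optional importance filter (1-10)
  session_id?: string;     // optional session filter
}

// Hypothetical helper filling in the documented defaults.
function normalize(args: SearchMemoriesArgs): SearchMemoriesArgs {
  return { use_ai: false, limit: 10, ...args };
}

const searchCall = normalize({ query: "authentication" });
```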
✏️ update_memory

Update an existing memory by ID.

  • id (string): Memory ID to update
  • content (string, optional): New content
  • importance (number, optional): New importance score
  • tags (string[], optional): New tags
🗑️ delete_memory

Delete a memory by ID.

  • id (string): Memory ID to delete

AI-Powered Intelligence

❓ ask_question

Ask natural language questions about your stored memories with AI-powered answers.

  • question (string): Your question about stored memories
  • session_id (string, optional): Limit context to specific session
  • context_limit (number, optional): Maximum memories for context (default: 5)

Returns: Detailed answer with confidence score and source memories

📊 summarize_memories

Generate AI-powered summaries and extract themes from memories.

  • session_id (string, optional): Summarize specific session
  • timeframe (string, optional): 'today', 'week', 'month', 'all' (default: 'all')
  • limit (number, optional): Maximum memories to analyze (default: 10)

Returns: Comprehensive summary with key themes and patterns
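One way the timeframe parameter could map to a cutoff date for selecting memories to summarize is sketched below. The timeframeCutoff function and its day-based arithmetic are assumptions for illustration; the server's actual selection logic may differ:

```typescript
// Hypothetical mapping from the documented timeframe values to a cutoff date.
type Timeframe = "today" | "week" | "month" | "all";

function timeframeCutoff(timeframe: Timeframe, now: Date): Date | null {
  const DAY = 24 * 60 * 60 * 1000;
  switch (timeframe) {
    case "today": return new Date(now.getTime() - DAY);
    case "week":  return new Date(now.getTime() - 7 * DAY);
    case "month": return new Date(now.getTime() - 30 * DAY);
    case "all":   return null; // no cutoff: consider every memory
  }
  return null; // unreachable; satisfies exhaustive-return checks
}

const now = new Date("2025-01-31T00:00:00Z");
const weekCutoff = timeframeCutoff("week", now);
```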

🔍 analyze_memories

Discover patterns, insights, and connections in your memory collection.

  • query (string): Analysis focus or question
  • analysis_type (string, optional): 'patterns', 'insights', 'trends', 'connections' (default: 'insights')
  • session_id (string, optional): Analyze specific session

Returns: Detailed analysis with discovered patterns and actionable insights

Relationship & Graph Features

🕸️ discover_relationships

AI-powered discovery of connections between memories.

  • memory_id (string, optional): Specific memory to analyze relationships for
  • session_id (string, optional): Filter by session
  • relationship_types (array, optional): Types to discover ('references', 'contradicts', 'expands', 'similar', 'sequential', 'causes', 'enables')
  • min_strength (number, optional): Minimum relationship strength (default: 0.5)
  • limit (number, optional): Maximum relationships to discover (default: 20)
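The min_strength and limit parameters suggest a post-filtering step like the one below. The CandidateRelationship shape and filterRelationships helper are illustrative assumptions; only the defaults (0.5 and 20) come from the parameter list:

```typescript
// Hypothetical filtering step matching the documented min_strength (0.5)
// and limit (20) defaults: keep only strong-enough relationships, ranked.
interface CandidateRelationship {
  source_memory_id: string;
  target_memory_id: string;
  relationship_type: string;
  strength: number; // 0-1
}

function filterRelationships(
  candidates: CandidateRelationship[],
  minStrength = 0.5,
  limit = 20,
): CandidateRelationship[] {
  return candidates
    .filter(r => r.strength >= minStrength)
    .sort((a, b) => b.strength - a.strength)
    .slice(0, limit);
}

const kept = filterRelationships([
  { source_memory_id: "m1", target_memory_id: "m2", relationship_type: "references", strength: 0.9 },
  { source_memory_id: "m1", target_memory_id: "m3", relationship_type: "similar", strength: 0.4 },
]);
```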
🔗 create_relationship

Manually create relationships between two memories.

  • source_memory_id (string): ID of the source memory
  • target_memory_id (string): ID of the target memory
  • relationship_type (string): Type of relationship
  • strength (number, optional): Relationship strength (default: 0.8)
  • context (string, optional): Context or explanation
🗺️ map_memory_graph

Generate graph visualization of memory relationships.

  • memory_id (string): Central memory for the graph
  • depth (number, optional): Maximum depth to traverse (default: 2)
  • include_types (array, optional): Relationship types to include
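The depth parameter implies a depth-limited traversal from the central memory. The sketch below shows one way to do it (breadth-first, treating relationships as undirected); MemoryEdge and reachableMemories are illustrative, not the server's internals:

```typescript
// Sketch of a breadth-first, depth-limited traversal from a central memory.
interface MemoryEdge { from: string; to: string; type: string; }

function reachableMemories(center: string, edges: MemoryEdge[], depth = 2): Set<string> {
  const seen = new Set<string>([center]);
  let frontier = [center];
  for (let hop = 0; hop < depth; hop++) {
    const next: string[] = [];
    for (const id of frontier) {
      for (const e of edges) {
        // Treat relationships as undirected for traversal purposes.
        const neighbor = e.from === id ? e.to : e.to === id ? e.from : null;
        if (neighbor !== null && !seen.has(neighbor)) {
          seen.add(neighbor);
          next.push(neighbor);
        }
      }
    }
    frontier = next;
  }
  return seen;
}

const edges: MemoryEdge[] = [
  { from: "a", to: "b", type: "references" },
  { from: "b", to: "c", type: "expands" },
  { from: "c", to: "d", type: "similar" },
];
const graph = reachableMemories("a", edges, 2);
```

With the default depth of 2, memory "d" (three hops away) stays outside the graph.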

Smart Categorization

🏷️ categorize_memory

Automatically categorize memories using AI analysis.

  • memory_id (string): Memory ID to categorize
  • suggested_categories (array, optional): Suggested category names
  • create_new_categories (boolean, optional): Create new categories if needed (default: true)
📁 create_category

Create hierarchical categories for organizing memories.

  • name (string): Category name
  • description (string): Category description
  • parent_category_id (string, optional): Parent category for hierarchy
  • confidence_threshold (number, optional): Auto-assignment threshold (default: 0.7)
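The confidence_threshold parameter suggests a gate like the following: a memory is auto-assigned to a category only when the AI's confidence meets the threshold (default 0.7). shouldAutoAssign is a hypothetical name for illustration:

```typescript
// Hypothetical auto-assignment gate implied by confidence_threshold.
// Assumes the threshold is inclusive; the server may decide otherwise.
function shouldAutoAssign(confidence: number, threshold = 0.7): boolean {
  return confidence >= threshold;
}
```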

Enhanced Temporal Analysis

📈 analyze_temporal_patterns

Analyze learning patterns and knowledge evolution over time.

  • session_id (string, optional): Filter by session
  • concept (string, optional): Specific concept to analyze
  • timeframe (string): 'week', 'month', 'quarter', 'year'
  • analysis_type (string): 'learning_progression', 'knowledge_gaps', 'concept_evolution'
📚 track_learning_progression

Track progression stages for specific concepts or skills.

  • concept (string): Concept or skill to track
  • session_id (string, optional): Filter by session
  • include_suggestions (boolean, optional): Include next step suggestions (default: true)
🔍 detect_knowledge_gaps

Identify knowledge gaps and suggest learning paths.

  • session_id (string, optional): Filter by session
  • focus_areas (array, optional): Specific areas to focus on
📅 generate_timeline_visualization

Create timeline visualization of learning journey.

  • memory_ids (array, optional): Specific memory IDs to include
  • session_id (string, optional): Filter by session
  • concept (string, optional): Focus on specific concept
  • start_date (string, optional): Timeline start date
  • end_date (string, optional): Timeline end date

Session Management

📋 list_sessions

List all available sessions with memory counts.

📊 get_session_stats

Get detailed statistics about stored memories.

  • session_id (string, optional): Specific session to analyze

Returns: Memory counts, average importance, common tags, and usage patterns
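The statistics listed above could be computed roughly as follows. The MemoryRecord shape and sessionStats helper are assumptions for illustration, not the server's internals:

```typescript
// Illustrative computation of session statistics: count, average importance,
// and tags ranked by frequency.
interface MemoryRecord { importance: number; tags: string[]; }

function sessionStats(memories: MemoryRecord[]) {
  const count = memories.length;
  const averageImportance =
    count === 0 ? 0 : memories.reduce((sum, m) => sum + m.importance, 0) / count;
  const tagCounts = new Map<string, number>();
  for (const m of memories) {
    for (const tag of m.tags) tagCounts.set(tag, (tagCounts.get(tag) ?? 0) + 1);
  }
  const commonTags = [...tagCounts.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([tag]) => tag);
  return { count, averageImportance, commonTags };
}

const stats = sessionStats([
  { importance: 8, tags: ["api", "auth"] },
  { importance: 4, tags: ["api"] },
]);
```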

📦 Quick Setup

Install

# From source
git clone https://github.com/danieleugenewilliams/local-memory-mcp.git
cd local-memory-mcp
npm install && npm run build

# NPM (coming soon)
npm install -g local-memory-mcp

Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "local-memory": {
      "command": "npx",
      "args": ["local-memory-mcp", "--db-path", "~/.local-memory.db"]
    }
  }
}

OpenCode

npx local-memory-mcp --db-path ~/.opencode-memory.db

Any MCP Tool

local-memory-mcp --db-path /path/to/memory.db --session-id your-session

🤖 AI Features Setup

Install Ollama

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Required models
ollama pull nomic-embed-text   # For semantic search
ollama pull qwen2.5:7b         # For Q&A and analysis

Model Options

| Model       | Size   | Use Case     | Performance |
| ----------- | ------ | ------------ | ----------- |
| qwen2.5:7b  | ~4.3GB | Recommended  | ⭐⭐⭐⭐⭐  |
| qwen2.5:14b | ~8GB   | Best quality | ⭐⭐⭐⭐⭐  |
| qwen2.5:3b  | ~2GB   | Balanced     | ⭐⭐⭐⭐    |
| phi3.5:3.8b | ~2.2GB | Efficient    | ⭐⭐⭐      |

The server automatically detects Ollama and enables AI features. Without Ollama, it gracefully falls back to traditional text search.

💡 Usage Examples

Basic Operations

🗣️ "Remember that our API endpoint is https://api.example.com/v1"
🗣️ "Search for anything related to authentication"
🗣️ "What do you remember about our database schema?"

AI-Powered Features

🗣️ "Summarize what I've learned about TypeScript this week"
🗣️ "Analyze my coding patterns and suggest improvements"
🗣️ "Find relationships between my React and performance memories"

Advanced Analysis

🗣️ "Track my learning progression in machine learning"
🗣️ "What knowledge gaps do I have in backend development?"
🗣️ "Show me a timeline of my project decisions"

⚙️ Configuration

Command Line Options

  • --db-path: Database file path (default: ~/.local-memory.db)
  • --session-id: Session identifier for organizing memories
  • --ollama-url: Ollama server URL (default: http://localhost:11434)
  • --config: Configuration file path
  • --log-level: Logging level (debug, info, warn, error)

Configuration File (~/.local-memory/config.json)

{
  "database": {
    "path": "~/.local-memory/memories.db",
    "backupInterval": 86400000
  },
  "ollama": {
    "enabled": true,
    "baseUrl": "http://localhost:11434",
    "embeddingModel": "nomic-embed-text",
    "chatModel": "qwen2.5:7b"
  },
  "ai": {
    "maxContextMemories": 10,
    "minSimilarityThreshold": 0.3
  }
}

Environment Variables

export MEMORY_DB_PATH="/custom/path/memories.db"
export OLLAMA_BASE_URL="http://localhost:11434"
export OLLAMA_EMBEDDING_MODEL="nomic-embed-text"
export OLLAMA_CHAT_MODEL="qwen2.5:7b"

🏗️ Development

npm run dev     # Start development server
npm run build   # Build for production
npm test        # Run tests
npm run lint    # Lint code

🧪 Testing

Comprehensive test suite covering:

  • ✅ Memory storage and retrieval
  • ✅ Full-text and semantic search
  • ✅ Session management
  • ✅ AI integration features
  • ✅ Relationship discovery
  • ✅ Temporal analysis
npm test                 # Run all tests
npm run test:watch       # Watch mode
npm test -- --coverage   # Coverage report

🏛️ Architecture

src/
├── index.ts           # MCP server and CLI entry point
├── memory-store.ts    # SQLite storage with caching
├── ollama-service.ts  # AI service integration
├── types.ts           # Schemas and TypeScript types
├── logger.ts          # Structured logging
├── config.ts          # Configuration management
├── performance.ts     # Performance monitoring
└── __tests__/         # Comprehensive test suite

Key Features:

  • SQLite + FTS5: Fast full-text search with vector embeddings
  • AI Integration: Ollama for semantic search and analysis
  • Performance: Caching, batch processing, monitoring
  • Type Safety: Full TypeScript with runtime validation
  • Production Ready: Error handling, logging, configuration

🔌 MCP Protocol Compatibility

Full Model Context Protocol (MCP) 0.5.0 compliance:

  • ✅ Stdio transport standard
  • ✅ All 18 memory management tools
  • ✅ Structured responses and error handling
  • ✅ Resource discovery and tool registration

Works with Claude Desktop, OpenCode, and any MCP-compatible tool.

🚀 Transform Your AI

Real Impact:

  • Development: AI remembers your architecture, patterns, and decisions
  • Research: Builds on previous insights and tracks learning progression
  • Analysis: Contextual responses based on your domain expertise
  • Strategy: Remembers successful approaches and methodologies

The Result: AI that evolves from generic responses to personalized intelligence built on YOUR accumulated knowledge.

🤝 Contributing

  1. Fork the repository
  2. Create feature branch (git checkout -b feature/amazing-feature)
  3. Add tests for new functionality
  4. Ensure tests pass (npm test)
  5. Commit changes (git commit -m 'Add amazing feature')
  6. Push and open Pull Request

📄 License

MIT License - see LICENSE file for details.

🔄 Changelog

v2.2.0 (Current)

  • ✨ Complete Ollama AI integration with semantic search
  • 🕸️ Relationship discovery and graph visualization
  • 🏷️ Smart categorization with AI analysis
  • 📈 Enhanced temporal analysis and learning progression tracking
  • 🧪 Comprehensive AI integration test suite

v2.1.0

  • 🚀 Production-ready release with performance optimizations
  • ✅ Comprehensive test suite and error handling
  • ⚙️ Configuration management system

v1.0.0

  • ✨ Initial MCP server implementation
  • 🔍 SQLite FTS5 full-text search
  • 📝 Session management system

🌟 Why Choose Local Memory MCP?

Because your AI's intelligence should be as unique as you are.

  • 🔒 True Privacy: All data stays on your machine
  • ⚡ Lightning Fast: Local SQLite + vector search
  • 🧠 Semantic Understanding: AI-powered memory retrieval
  • 📈 Compound Intelligence: Every interaction builds knowledge
  • 🔌 Universal Compatibility: Works with any MCP tool
  • 🛠️ Production Ready: Tested, optimized, and reliable

Own your AI's memory. Control your competitive advantage.

