Local Mem0 MCP Server

A fully self-hosted Model Context Protocol (MCP) server that integrates Mem0 for persistent memory capabilities. Enables AI assistants like Claude to store and retrieve contextual information across conversations.

✨ Features

  • 🧠 Persistent Memory: Store and retrieve memories across conversations

  • 🔒 Fully Self-Hosted: No external APIs or cloud dependencies

  • 🐳 Containerized: Complete Docker deployment with one command

  • 🚀 Easy Installation: Single script setup for Windows, Mac, and Linux

  • 🤖 Local AI Models: Uses Ollama with phi3:mini and nomic-embed-text

  • 📊 Vector Storage: PostgreSQL with pgvector for efficient memory search

  • 🔌 MCP Compatible: Works with Claude Desktop and other MCP-capable AI tools

🚀 Quick Start

Prerequisites
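
At minimum you will need Git (to clone the repository) and a working Docker installation with Docker Compose, since the entire stack runs as containers.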

Installation

Windows:

git clone https://github.com/Synapse-OS/local-mem0-mcp.git
cd local-mem0-mcp
install.bat

Mac/Linux:

git clone https://github.com/Synapse-OS/local-mem0-mcp.git
cd local-mem0-mcp
chmod +x install.sh
./install.sh

The installation will:

  1. Build the MCP server container

  2. Start PostgreSQL and Ollama services

  3. Download AI models (~2.5GB total)

  4. Configure Claude Desktop integration

  5. Test the installation
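
The first run is the slowest because of the model download in step 3; subsequent starts should reuse the already-downloaded models.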

Testing

After installation and configuration:

  1. Restart Claude Desktop completely (close and reopen)

  2. Verify the MCP server: type /mcp - it should list mem0-local as available

  3. Test memory storage: "Remember that I'm testing the MCP memory system today"

  4. Test memory retrieval: "What do you remember about me?"

  5. Verify persistence: Restart Claude Desktop and ask again - memories should persist
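
Memories survive restarts because they are stored in the PostgreSQL database running in Docker, not in Claude Desktop itself; as long as the containers and their volumes are intact, previously stored memories remain searchable.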

Troubleshooting MCP Connection:

  • If /mcp shows no servers, check the configuration file path and JSON syntax

  • Ensure Docker containers are running: docker ps

  • Check MCP server logs: docker logs mem0-mcp-server
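
  • A quick way to catch JSON syntax errors is to run the config file through Python's built-in validator (adjust the path for your platform):

python -m json.tool "%APPDATA%\Claude\claude_desktop_config.json"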

πŸ—οΈ Architecture

┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│     Claude      │    │    MCP Server    │    │   PostgreSQL    │
│     Desktop     │◄──►│    (FastMCP)     │◄──►│   + pgvector    │
└─────────────────┘    └──────────────────┘    └─────────────────┘
                                │
                                ▼
                       ┌──────────────────┐
                       │      Ollama      │
                       │   phi3:mini +    │
                       │ nomic-embed-text │
                       └──────────────────┘
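
The MCP server itself is a small Python program. As a rough sketch of the shape implied by this architecture (not the actual /app/src/server.py, whose contents are not shown here), a FastMCP server exposing one of the memory tools over STDIO looks like this; the tool body and everything beyond the mem0-local name is illustrative:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mem0-local")

@mcp.tool()
def add_memory(text: str, user_id: str) -> str:
    """Store a new memory for the given user (body is illustrative)."""
    # In the real server this would delegate to Mem0, which embeds
    # the text with nomic-embed-text and writes it to pgvector.
    return f"Stored memory for {user_id}"

if __name__ == "__main__":
    mcp.run()  # FastMCP uses the STDIO transport by default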

🔧 Configuration

Claude Desktop MCP Configuration

After installation, configure Claude Desktop to use the MCP server:

Windows: Edit %APPDATA%\Claude\claude_desktop_config.json

Mac: Edit ~/Library/Application Support/Claude/claude_desktop_config.json

Linux: Edit ~/.config/Claude/claude_desktop_config.json

Add this configuration:

{ "mcpServers": { "mem0-local": { "command": "docker", "args": [ "exec", "-i", "mem0-mcp-server", "python", "/app/src/server.py" ] } } }

System Configuration

The system is configured for local operation by default:

  • MCP Server: Runs in Docker container with STDIO transport

  • Database: PostgreSQL with pgvector on port 5432

  • AI Models: Local Ollama instance on port 11434

  • Memory Storage: User-isolated memories with vector embeddings
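
Since Ollama listens on port 11434, a quick way to confirm the model service is reachable from the host is to query its standard /api/tags endpoint, which lists downloaded models. A minimal Python check:

import json, urllib.request

# List models known to the local Ollama instance (port 11434, as configured above)
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = json.load(resp)["models"]
print([m["name"] for m in models])  # should include phi3:mini and nomic-embed-text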

📋 Available Memory Operations

  • add_memory: Store new memories

  • search_memories: Find relevant memories by query

  • get_all_memories: Retrieve all memories for a user

  • update_memory: Modify existing memories

  • delete_memory: Remove specific memories

  • delete_all_memories: Clear all memories for a user

  • get_memory_stats: Get memory statistics
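
The operations above map onto the open-source Mem0 client. As a hedged sketch of what the server's calls roughly look like (the configuration keys and credentials below are illustrative placeholders; consult the Mem0 docs for the exact pgvector and Ollama options):

from mem0 import Memory

# Illustrative config: the providers match this stack (Ollama + pgvector),
# but the credentials and exact keys depend on your Mem0 version and setup.
config = {
    "llm": {"provider": "ollama", "config": {"model": "phi3:mini"}},
    "embedder": {"provider": "ollama", "config": {"model": "nomic-embed-text"}},
    "vector_store": {
        "provider": "pgvector",
        "config": {"dbname": "mem0", "user": "postgres", "password": "postgres",
                   "host": "localhost", "port": 5432},
    },
}

memory = Memory.from_config(config)
memory.add("I'm testing the MCP memory system today", user_id="demo")  # add_memory
print(memory.search("what am I testing?", user_id="demo"))             # search_memories
print(memory.get_all(user_id="demo"))                                  # get_all_memories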

πŸ” Troubleshooting

Check services:

docker ps

View logs:

docker-compose -f docker-compose.local.yml logs

Restart services:

docker-compose -f docker-compose.local.yml restart

Clean restart:

docker-compose -f docker-compose.local.yml down -v
# Then run the install script again
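
Note: the -v flag also removes the Docker volumes, so stored memories are deleted and the AI models will likely need to be downloaded again on reinstall.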

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • Mem0 - Memory management framework

  • FastMCP - MCP server implementation

  • Ollama - Local AI model inference

  • pgvector - Vector similarity search
