# ✨ mem0 MCP Server Environment Configuration Guide ✨
Made with ❤️ by Pink Pixel
This guide explains how to configure the mem0 MCP server with environment variables.
## Environment Configuration Template
Create a `.env` file in the root directory of the project with the following template:
```bash
# ===== Provider API Keys =====
# Uncomment and fill in the API keys for the providers you want to use
# OpenAI API Key (for GPT models and embeddings)
# OPENAI_API_KEY=sk-your-openai-api-key
# Anthropic API Key (for Claude models)
# ANTHROPIC_API_KEY=sk-ant-your-anthropic-api-key
# Google API Key (for Gemini models)
# GOOGLE_API_KEY=your-google-api-key
# DeepSeek API Key
# DEEPSEEK_API_KEY=your-deepseek-api-key
# OpenRouter API Key (for accessing multiple models)
# OPENROUTER_API_KEY=your-openrouter-api-key
# Azure OpenAI API Key and Endpoint
# AZURE_OPENAI_API_KEY=your-azure-openai-api-key
# AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com
# ===== Application Settings =====
# Application name (used in logs and UI)
APP_NAME=Memory App
# ===== Ollama Settings (for local models) =====
# Base URL for Ollama API
OLLAMA_BASE_URL=http://localhost:11434
# Default Ollama model to use
OLLAMA_MODEL=llama3
# ===== Memory Storage Settings =====
# Directory where memories will be stored
# Use relative path: ./memory_data
# Or absolute path: /home/username/memory_data
# Or home directory: ~/memory_data
MEM0_DATA_DIR=./memory_data
# ===== Provider Settings =====
# Default provider for LLM operations
# Options: openai, anthropic, google, ollama, deepseek, openrouter, azure
MEM0_PROVIDER=ollama
# Default provider for embedding operations
# Options: openai, huggingface, ollama, google, azure
MEM0_EMBEDDING_PROVIDER=ollama
# ===== Advanced Settings =====
# Embedding dimensions (depends on the model)
# MEM0_EMBEDDING_DIMENSIONS=1536
# Memory chunk size for large text
# MEM0_CHUNK_SIZE=1000
# Memory chunk overlap
# MEM0_CHUNK_OVERLAP=200
# Default model for OpenAI
# MEM0_OPENAI_MODEL=gpt-4o
# Default model for Anthropic
# MEM0_ANTHROPIC_MODEL=claude-3-opus-20240229
# Default model for Google
# MEM0_GOOGLE_MODEL=gemini-1.5-pro
# Default embedding model for OpenAI
# MEM0_OPENAI_EMBEDDING_MODEL=text-embedding-3-large
# Default embedding model for Ollama
# MEM0_OLLAMA_EMBEDDING_MODEL=nomic-embed-text
```
## Configuration Explanations
### Provider API Keys
These keys are required to use the respective AI providers; a minimal validation sketch follows the list:
- **OPENAI_API_KEY**: Required for using OpenAI models (GPT-4, etc.) and embeddings
- **ANTHROPIC_API_KEY**: Required for using Anthropic models (Claude)
- **GOOGLE_API_KEY**: Required for using Google models (Gemini)
- **DEEPSEEK_API_KEY**: Required for using DeepSeek models
- **OPENROUTER_API_KEY**: Required for using OpenRouter (provides access to multiple models)
- **AZURE_OPENAI_API_KEY** and **AZURE_OPENAI_ENDPOINT**: Required for Azure OpenAI
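As a rough illustration of which key each provider needs, here is a minimal Python sketch. The `REQUIRED_KEYS` mapping and `check_provider_keys` helper are hypothetical names for illustration, not part of the server's actual code:
```python
import os

# Hypothetical mapping from MEM0_PROVIDER values to the env vars they need.
# The variable names match the template above; the helper itself is a sketch.
REQUIRED_KEYS = {
    "openai": ["OPENAI_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "google": ["GOOGLE_API_KEY"],
    "deepseek": ["DEEPSEEK_API_KEY"],
    "openrouter": ["OPENROUTER_API_KEY"],
    "azure": ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT"],
    "ollama": [],  # local models need no API key
}

def check_provider_keys(provider: str) -> None:
    missing = [var for var in REQUIRED_KEYS.get(provider, []) if not os.getenv(var)]
    if missing:
        raise RuntimeError(f"Provider '{provider}' selected but missing: {', '.join(missing)}")

check_provider_keys(os.getenv("MEM0_PROVIDER", "ollama"))
```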
### Ollama Settings (Local Models)
These settings configure Ollama, which runs AI models locally; a quick reachability check follows the list:
- **OLLAMA_BASE_URL**: The URL where Ollama is running (default: http://localhost:11434)
- **OLLAMA_MODEL**: The default Ollama model to use for LLM operations
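Before pointing the server at Ollama, you can confirm the instance is reachable. The sketch below queries Ollama's `/api/tags` endpoint, which lists the models you have pulled locally:
```python
import json
import os
import urllib.request

# /api/tags is Ollama's endpoint for listing locally pulled models.
base_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
    models = [m["name"] for m in json.load(resp)["models"]]
print("Available Ollama models:", models)
```
If `llama3` (or whatever you set as `OLLAMA_MODEL`) is not in the list, pull it first with `ollama pull llama3`.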
### Memory Storage Settings
- **MEM0_DATA_DIR**: The directory where memories will be stored (see the resolution sketch after this list). This can be:
- A relative path (e.g., `./memory_data`)
- An absolute path (e.g., `/home/username/memory_data`)
- A path in the user's home directory (e.g., `~/memory_data`)
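As a sketch of how such a value might be resolved (the server's actual logic may differ), the following expands `~`, makes the path absolute, and creates the directory if it is missing:
```python
import os

# Expand ~ to the home directory, resolve relative paths against the
# current working directory, and create the directory if needed.
data_dir = os.path.abspath(os.path.expanduser(os.getenv("MEM0_DATA_DIR", "./memory_data")))
os.makedirs(data_dir, exist_ok=True)
print("Memories will be stored in:", data_dir)
```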
### Provider Settings
- **MEM0_PROVIDER**: The default provider for LLM operations
- **MEM0_EMBEDDING_PROVIDER**: The default provider for embedding operations
### Advanced Settings
These settings allow fine-tuning of the memory system; a chunking sketch follows the list:
- **MEM0_EMBEDDING_DIMENSIONS**: The dimensionality of the embedding vectors; this must match the chosen embedding model (e.g. 1536 for OpenAI's `text-embedding-3-small`, 3072 for `text-embedding-3-large`)
- **MEM0_CHUNK_SIZE**: The size of text chunks for large documents
- **MEM0_CHUNK_OVERLAP**: The overlap between chunks
- **MEM0_OPENAI_MODEL**: The default model for OpenAI
- **MEM0_ANTHROPIC_MODEL**: The default model for Anthropic
- **MEM0_GOOGLE_MODEL**: The default model for Google
- **MEM0_OPENAI_EMBEDDING_MODEL**: The default embedding model for OpenAI
- **MEM0_OLLAMA_EMBEDDING_MODEL**: The default embedding model for Ollama
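To make the chunking settings concrete, here is a minimal sketch of overlapping fixed-size chunks; `chunk_text` is an illustrative helper, not the server's actual splitter:
```python
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks where each chunk repeats the last
    `chunk_overlap` characters of the previous one, mirroring the
    MEM0_CHUNK_SIZE / MEM0_CHUNK_OVERLAP defaults above."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("x" * 2500)       # 2500 chars with the default settings
print([len(c) for c in chunks])       # [1000, 1000, 900, 100]
```
The overlap keeps context that falls on a chunk boundary retrievable from both neighboring chunks.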
## Configuration Priority
The configuration is loaded in the following order of priority:
1. Command-line arguments (highest priority)
2. Environment variables from `.env` file
3. Default values in the code (lowest priority)
In other words, a command-line argument overrides the corresponding environment variable, which in turn overrides the built-in default.
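A minimal sketch of this resolution order, assuming the server uses python-dotenv and argparse (the `--provider` flag name is hypothetical):
```python
import argparse
import os
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # 2) load .env values into the environment (does not override real env vars)

parser = argparse.ArgumentParser()
parser.add_argument("--provider")  # hypothetical flag name, for illustration
args = parser.parse_args()

# 1) CLI argument wins, 2) then the environment / .env value, 3) then the default.
provider = args.provider or os.getenv("MEM0_PROVIDER") or "ollama"
print("Using provider:", provider)
```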
## Example Configurations
### Fully Local Setup (Ollama)
```bash
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3
MEM0_DATA_DIR=~/memory_data
MEM0_PROVIDER=ollama
MEM0_EMBEDDING_PROVIDER=ollama
MEM0_OLLAMA_EMBEDDING_MODEL=nomic-embed-text
```
### OpenAI Setup
```bash
OPENAI_API_KEY=sk-your-openai-api-key
MEM0_DATA_DIR=~/memory_data
MEM0_PROVIDER=openai
MEM0_EMBEDDING_PROVIDER=openai
MEM0_OPENAI_MODEL=gpt-4o
MEM0_OPENAI_EMBEDDING_MODEL=text-embedding-3-large
```
### Mixed Setup (Anthropic + OpenAI)
```bash
OPENAI_API_KEY=sk-your-openai-api-key
ANTHROPIC_API_KEY=sk-ant-your-anthropic-api-key
MEM0_DATA_DIR=~/memory_data
MEM0_PROVIDER=anthropic
MEM0_EMBEDDING_PROVIDER=openai
MEM0_ANTHROPIC_MODEL=claude-3-opus-20240229
MEM0_OPENAI_EMBEDDING_MODEL=text-embedding-3-large
```
## Testing Your Configuration
After setting up your `.env` file, verify that it loads correctly by starting the server:
```bash
python server.py
```
The server output should show the loaded configuration values.
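If you only want to confirm that the `.env` file parses correctly without starting the server, a one-off check with python-dotenv (assuming it is installed) works:
```python
from dotenv import dotenv_values  # assumes python-dotenv is installed

# Parse .env into a plain dict without touching the real environment.
print(dotenv_values(".env"))
```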
Made with ❤️ by Pink Pixel | Dream it, Pixel it