
add_memory

Store conversation data and user preferences in memory to enable context recall across sessions, helping Claude maintain continuity and learn from past interactions.

Instructions

Store important information to memory - AUTO-STORE user preferences and decisions

Input Schema

Name       Required   Description                                                    Default
messages   Yes        List of message dicts, each with 'role' and 'content'          —
user_id    No         User identifier                                                DEFAULT_USER_ID
metadata   No         Metadata dict for categorization (auto-generated if omitted)   —
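A minimal example of valid arguments for this schema (the content and metadata values below are illustrative, not server defaults):

```python
# Illustrative add_memory arguments; values are hypothetical examples.
args = {
    "messages": [
        {"role": "user", "content": "I prefer functional components over class components"}
    ],
    # user_id is omitted, so the server falls back to its default user
    "metadata": {"type": "preference", "category": "frontend"},
}
```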

Implementation Reference

  • MCP tool decorator registration for the 'add_memory' tool.

    ```python
    @mcp.tool(
        name="add_memory",
        description="Store important information to memory - AUTO-STORE user preferences and decisions",
    )
    ```
  • The MCP 'add_memory' tool handler function, which proxies calls to memory_service.add_memory. The function signature defines the input schema via type hints.

    ````python
    async def add_memory(
        messages: list[dict[str, str]],
        user_id: str | None = None,
        metadata: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        """
        Store important information to memory for future reference.

        ## AUTONOMOUS STORAGE TRIGGERS

        ### HIGH Priority (Always Store Silently)
        - **User preferences**: "I prefer X", "I don't like Y", "I usually use Z"
        - **Project decisions**: "Let's use X for this project", "We decided on Y"
        - **Solution discoveries**: "That fixed it", "This approach worked", "The solution was X"
        - **Configuration details**: API keys, URLs, important file paths
        - **Error solutions**: Successfully resolved errors and their fixes

        ### MEDIUM Priority (Store with Brief Acknowledgment)
        - **Important context**: Project requirements, constraints, guidelines
        - **Learning insights**: "Now I understand X", "The key is Y"
        - **Workflow preferences**: How user likes to approach problems

        ### Autonomous Storage Examples
        ```python
        # User: "I prefer functional components over class components"
        # → AUTO: add_memory([{"role": "user", "content": "I prefer functional components..."}])
        # → SILENT: No announcement, just store

        # User: "Perfect! That fixed the CORS issue"
        # → AUTO: add_memory([{"role": "assistant", "content": "CORS fixed by adding proxy config..."}])
        # → METADATA: {"type": "solution", "issue": "CORS", "resolved": True}

        # User: "Let's use PostgreSQL for this project"
        # → AUTO: add_memory([{"role": "user", "content": "Let's use PostgreSQL..."}])
        # → METADATA: {"type": "decision", "category": "database"}
        ```

        ## Smart Metadata Generation
        Automatically generate metadata based on content patterns:
        - **"preference"**: Contains "prefer", "like", "don't like", "usually use"
        - **"solution"**: Contains "fixed", "solved", "worked", "solution was"
        - **"decision"**: Contains "let's use", "we'll go with", "decided on"
        - **"error"**: Contains "error", "issue", "problem", "bug"
        - **"configuration"**: Contains "config", "setup", "environment", "api key"

        ## Storage Best Practices
        - **Silent operation**: Don't announce routine storage unless explicitly requested
        - **Rich metadata**: Include type, category, project context automatically
        - **Concise content**: Store essential information, not full conversations
        - **Avoid duplicates**: Check if similar information already exists before storing

        Args:
            messages: List of message dictionaries, each with:
                - role: "user", "assistant", or "system"
                - content: The message text to store (keep concise but complete)
            user_id: User ID (optional, defaults to DEFAULT_USER_ID)
            metadata: Optional metadata dict for categorization
                - AUTO-GENERATED when not provided based on content analysis
                - SHOULD INCLUDE: type, category, project, resolved status

        Returns:
            Dictionary containing:
            - id: Unique identifier for the created memory
            - created_at: Timestamp of creation
            - status: "created" on success
            - message: Confirmation message
        """
        try:
            result = await memory_service.add_memory(
                messages=messages, user_id=user_id, metadata=metadata
            )
            logger.info("Memory added", memory_id=result.get("id"))
            return result
        except Exception as e:
            logger.error("Add failed", error=str(e))
            raise RuntimeError(f"Add failed: {str(e)}") from e
    ````
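The smart-metadata rules described in the handler's docstring can be sketched as a simple keyword classifier. This is a hypothetical helper, not part of the server's code; it assumes first match wins, so the more specific types are checked before generic ones like "error":

```python
# Hypothetical sketch of the content-pattern metadata generation
# described in the add_memory docstring. Checked in order; first match wins.
PATTERNS = {
    "preference": ("prefer", "like", "don't like", "usually use"),
    "solution": ("fixed", "solved", "worked", "solution was"),
    "decision": ("let's use", "we'll go with", "decided on"),
    "error": ("error", "issue", "problem", "bug"),
    "configuration": ("config", "setup", "environment", "api key"),
}

def infer_metadata(content: str) -> dict:
    """Return a {"type": ...} dict for the first matching pattern, else {}."""
    text = content.lower()
    for mtype, keywords in PATTERNS.items():
        if any(keyword in text for keyword in keywords):
            return {"type": mtype}
    return {}
```

For example, `infer_metadata("That fixed the CORS issue")` classifies as a solution rather than an error because the solution patterns are tried first.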
  • Core helper method in MemoryService class that implements the add_memory logic using Mem0's AsyncMemoryClient.add method.

    ```python
    async def add_memory(
        self,
        messages: list[dict[str, Any]],
        user_id: str | None = None,
        agent_id: str | None = None,
        run_id: str | None = None,
        categories: list[dict[str, str]] | None = None,
        metadata: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        """Add memory asynchronously.

        Args:
            messages: List of message dicts with 'role' and 'content'
            user_id: User identifier (defaults to settings.default_user_id)
            agent_id: Agent identifier (defaults to settings.default_agent_id)
            run_id: Session/run identifier for tracking conversations
            categories: List of custom categories with descriptions for organizing memories
            metadata: Optional metadata to attach to the memory

        Returns:
            Response from Mem0 API
        """
        user_id = user_id or settings.default_user_id
        agent_id = agent_id or settings.default_agent_id
        categories = categories or settings.memory_categories

        # Build the add parameters
        add_params = {
            "messages": messages,
            "user_id": user_id,
            "agent_id": agent_id,
            "version": "v2",
        }

        # Add optional parameters if provided
        if run_id:
            add_params["run_id"] = run_id
        if categories:
            add_params["custom_categories"] = categories
            add_params["output_format"] = "v1.1"  # Required for custom categories
        if metadata:
            add_params["metadata"] = metadata

        try:
            self._logger.info(
                "Adding memory",
                user_id=user_id,
                agent_id=agent_id,
                run_id=run_id,
                categories=categories,
                message_count=len(messages),
            )
            result = await self.async_client.add(**add_params)
            self._logger.info(
                "Memory added successfully",
                user_id=user_id,
                agent_id=agent_id,
                run_id=run_id,
                categories=categories,
                memory_id=result.get("id"),
            )
            return result
        except Exception as e:
            # Log the full error details for debugging
            error_details = str(e)
            if hasattr(e, "response") and hasattr(e.response, "text"):
                error_details = f"{str(e)} - Response: {e.response.text}"
            self._logger.error(
                "Failed to add memory",
                user_id=user_id,
                agent_id=agent_id,
                run_id=run_id,
                categories=categories,
                error=error_details,
                add_params=add_params,  # Log the actual parameters being sent
            )
            raise
    ```
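The conditional parameter assembly in this method can be isolated as a pure helper, which makes the request shape easy to unit-test. This is a sketch mirroring the logic above, not code from the project:

```python
# Sketch of the add-parameter assembly from MemoryService.add_memory.
# Mirrors the conditional logic shown above; not the project's actual code.
def build_add_params(messages, user_id, agent_id,
                     run_id=None, categories=None, metadata=None):
    params = {
        "messages": messages,
        "user_id": user_id,
        "agent_id": agent_id,
        "version": "v2",
    }
    if run_id:
        params["run_id"] = run_id
    if categories:
        params["custom_categories"] = categories
        params["output_format"] = "v1.1"  # Mem0 requires this for custom categories
    if metadata:
        params["metadata"] = metadata
    return params
```

Note that `output_format` is only added when custom categories are present, matching the comment in the source.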

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/terrymunro/mcp-mitm-mem0'
```
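A roughly equivalent request in Python, using only the standard library (the endpoint path is taken from the curl command above; no response fields are assumed):

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1"

def server_url(owner: str, repo: str) -> str:
    """Build the directory URL for a given MCP server."""
    return f"{API_BASE}/servers/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    """Fetch server metadata from the Glama MCP directory API."""
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_server("terrymunro", "mcp-mitm-mem0")` retrieves the same JSON as the curl command.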

If you have feedback or need assistance with the MCP directory API, please join our Discord server.