save_memory

Store new information in short-term memory with temporal decay, where frequently accessed content gets promoted to long-term storage automatically.

Instructions

Save a new memory to short-term storage.

The memory will have temporal decay applied and will be forgotten if not used
regularly. Frequently accessed memories may be promoted to long-term storage
automatically.

Args:
    content: The content to remember.
    tags: Tags for categorization.
    entities: Named entities in this memory.
    source: Source of the memory.
    context: Context when memory was created.
    meta: Additional custom metadata.
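The decay and promotion mechanics are not spelled out here. Below is a minimal sketch of how such a temporal-decay score could work, assuming exponential decay with a configurable half-life; the function name, half-life default, and use-count weighting are all hypothetical, not taken from this tool's implementation:

```python
import math
import time

def decay_score(strength: float, last_used: int, use_count: int,
                half_life_s: float = 7 * 86400) -> float:
    """Hypothetical relevance score: base strength, boosted by how often
    the memory was used, decaying exponentially since it was last touched."""
    age = time.time() - last_used
    return strength * (1 + math.log1p(use_count)) * 0.5 ** (age / half_life_s)

# A memory untouched for one half-life falls to ~50% of its fresh score.
fresh = decay_score(1.0, int(time.time()), 0)
stale = decay_score(1.0, int(time.time()) - 7 * 86400, 0)
```

Under a scheme like this, "promotion to long-term storage" would amount to moving any memory whose score stays above some threshold despite the passage of time.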

Input Schema

Name      Required
--------  --------
content   Yes
context   No
entities  No
meta      No
source    No
tags      No

(The schema provides no per-field descriptions or defaults.)
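For illustration, a well-formed set of arguments for this tool might look like the following. Only the parameter names come from the schema above; every value is invented:

```python
# Hypothetical save_memory arguments; the keys match the input schema,
# the values are made up for illustration.
args = {
    "content": "User prefers dark mode in all editors.",
    "tags": ["preferences", "ui"],
    "entities": ["dark mode"],
    "source": "conversation",
    "context": "Discussing editor setup",
    "meta": {"session": "example-session"},
}
```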

Output Schema

No fields are defined in the output schema.

Implementation Reference

  • The core handler for the 'save_memory' MCP tool, registered automatically via the @mcp.tool() decorator. It validates all inputs, optionally auto-enriches the memory through NLP preprocessing (entity extraction, strength scoring), detects secrets, optionally generates an embedding with SentenceTransformer, constructs a Memory object, persists it via db.save_memory(), and returns a structured result.
    @mcp.tool()
    @time_operation("save_memory")
    def save_memory(
        content: str,
        tags: list[str] | None = None,
        entities: list[str] | None = None,
        source: str | None = None,
        context: str | None = None,
        meta: dict[str, Any] | None = None,
        strength: float | None = None,
    ) -> dict[str, Any]:
        """
        Save a new memory to short-term storage with automatic preprocessing.
    
        The memory will have temporal decay applied and will be forgotten if not used
        regularly. Frequently accessed memories may be promoted to long-term storage
        automatically.
    
        **Auto-enrichment (v0.6.0)**: If entities or strength are not provided, they will
        be automatically extracted/calculated from the content using natural language
        preprocessing. This makes save_memory "just work" for conversational use.
    
        Args:
            content: The content to remember (max 50,000 chars).
            tags: Tags for categorization (max 50 tags, each max 100 chars).
            entities: Named entities in this memory (max 100 entities).
                      If None, automatically extracted from content.
            source: Source of the memory (max 500 chars).
            context: Context when memory was created (max 1,000 chars).
            meta: Additional custom metadata.
            strength: Base strength multiplier (1.0-2.0). If None, automatically
                      calculated based on content importance.
    
        Raises:
            ValueError: If any input fails validation.
        """
        # Input validation
        content = cast(
            str, validate_string_length(content, MAX_CONTENT_LENGTH, "content", allow_empty=False)
        )
    
        if tags is not None:
            tags = validate_list_length(tags, MAX_TAGS_COUNT, "tags")
            tags = [validate_tag(tag, f"tags[{i}]") for i, tag in enumerate(tags)]
    
        if entities is not None:
            entities = validate_list_length(entities, MAX_ENTITIES_COUNT, "entities")
            entities = [validate_entity(entity, f"entities[{i}]") for i, entity in enumerate(entities)]
    
        if source is not None:
            source = cast(str, validate_string_length(source, 500, "source", allow_none=True))
    
        if context is not None:
            context = cast(str, validate_string_length(context, 1000, "context", allow_none=True))
    
        # Auto-enrichment preprocessing (v0.6.0)
        config = get_config()
        enrichment_applied = False
    
        if config.enable_preprocessing:
            from ..preprocessing import EntityExtractor, ImportanceScorer, PhraseDetector
    
            # Initialize preprocessing components (cached at module level)
            phrase_detector = PhraseDetector()
            entity_extractor = EntityExtractor()
            importance_scorer = ImportanceScorer()
    
            # Detect importance signals
            phrase_signals = phrase_detector.detect(content)
    
            # Auto-extract entities if not provided
            if entities is None:
                entities = entity_extractor.extract(content)
                enrichment_applied = True
    
            # Auto-calculate strength if not provided
            if strength is None:
                strength = importance_scorer.score(
                    content, entities=entities, importance_marker=phrase_signals["importance_marker"]
                )
                enrichment_applied = True
        else:
            # Default strength if preprocessing disabled
            if strength is None:
                strength = 1.0
    
        # Validate strength (always set by this point, either by the caller,
        # the importance scorer, or the 1.0 default)
        if not 1.0 <= strength <= 2.0:
            raise ValueError("strength must be between 1.0 and 2.0")

        # Secrets detection (if enabled); reuses the config loaded above
        if config.detect_secrets:
            matches = detect_secrets(content)
            if should_warn_about_secrets(matches):
                warning = format_secret_warning(matches)
                logger.warning(f"Secrets detected in memory content:\n{warning}")
                # Note: We still save the memory but warn the user
    
        # Create metadata
        metadata = MemoryMetadata(
            tags=tags or [],
            source=source,
            context=context,
            extra=meta or {},
        )
    
        # Generate ID and embedding
        memory_id = str(uuid.uuid4())
        embed = _generate_embedding(content)
    
        # Create memory
        now = int(time.time())
        memory = Memory(
            id=memory_id,
            content=content,
            meta=metadata,
            created_at=now,
            last_used=now,
            use_count=0,
            embed=embed,
            entities=entities or [],
            strength=strength if strength is not None else 1.0,
        )
    
        # Save to database
        db.save_memory(memory)
    
        return {
            "success": True,
            "memory_id": memory_id,
            "message": f"Memory saved with ID: {memory_id}",
            "has_embedding": embed is not None,
            "enrichment_applied": enrichment_applied,
            "auto_entities": len(entities or []) if enrichment_applied else 0,
            "calculated_strength": strength,
        }
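A caller can branch on the structured result the handler returns. A minimal sketch, assuming only the return shape shown above (the helper name is hypothetical):

```python
def summarize_result(result: dict) -> str:
    """Build a short human-readable summary from save_memory's return dict.
    Assumes the keys produced by the handler above."""
    if not result.get("success"):
        return "save failed"
    parts = [f"saved {result['memory_id']}"]
    if result.get("enrichment_applied"):
        parts.append(f"{result['auto_entities']} auto-entities")
    if result.get("has_embedding"):
        parts.append("embedded")
    return ", ".join(parts)

# Example payload shaped like the handler's return value (values invented).
example = {
    "success": True,
    "memory_id": "abc-123",
    "message": "Memory saved with ID: abc-123",
    "has_embedding": True,
    "enrichment_applied": True,
    "auto_entities": 2,
    "calculated_strength": 1.3,
}
summary = summarize_result(example)  # "saved abc-123, 2 auto-entities, embedded"
```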
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the memory is stored in 'short-term storage' with 'temporal decay,' may be 'forgotten if not used regularly,' and 'frequently accessed memories may be promoted to long-term storage automatically.' This provides important context about persistence, lifecycle, and automatic promotion that isn't obvious from the tool name alone.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with three sentences followed by a structured parameter list. The first sentence states the core purpose, the next two explain behavioral context, and the Args section efficiently documents parameters. There's minimal waste, though the parameter explanations could be slightly more detailed without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, behavioral nuances) and the absence of annotations, the description does a reasonably complete job. It explains the tool's purpose, behavioral characteristics (decay, promotion), and documents all parameters. Since there's an output schema (mentioned in context signals), the description doesn't need to explain return values. The main gap is lack of explicit guidance on when to use versus sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description includes an 'Args:' section that lists all 6 parameters with brief explanations, adding meaning beyond the input schema which has 0% description coverage. However, the explanations are minimal (e.g., 'Tags for categorization') and don't provide detailed semantics like format examples, constraints, or relationships between parameters. Since schema coverage is 0%, the description compensates somewhat but not fully.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Save a new memory') and resource ('to short-term storage'), providing a specific verb+resource combination. It distinguishes this from sibling tools like 'promote_memory' or 'touch_memory' by focusing on initial creation rather than manipulation of existing memories. However, it doesn't explicitly contrast with 'create_relation' which might also create new data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through phrases like 'temporal decay applied' and 'forgotten if not used regularly,' suggesting this is for transient storage. However, it doesn't explicitly state when to use this tool versus alternatives like 'promote_memory' (for long-term storage) or 'cluster_memories' (for grouping). No explicit when-not-to-use guidance or named alternatives are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
