Glama

promote_memory

Move high-value or frequently used memories to permanent long-term storage like Obsidian vaults, with options for automatic detection and preview mode.

Instructions

Promote high-value memories to long-term storage.

Memories with high scores or frequent usage are promoted to the Obsidian
vault (or other long-term storage) where they become permanent.

Args:
    memory_id: Specific memory ID to promote.
    auto_detect: Automatically detect promotion candidates.
    dry_run: Preview what would be promoted without promoting.
    target: Target for promotion (default: "obsidian").
    force: Force promotion even if criteria not met.

Returns:
    List of promoted memories and promotion statistics.
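For illustration, a dry-run result can be inspected before committing the real promotion. The dict below is invented to match the documented Returns contract and the candidate fields visible in the implementation reference; it is not real tool output:

```python
# Illustrative dry-run result; values are made up to match the documented
# return shape, not captured from the tool.
result = {
    "success": True,
    "dry_run": True,
    "candidates_found": 2,
    "promoted_count": 0,
    "promoted_ids": [],
    "candidates": [
        {"id": "mem-1", "reason": "high score", "score": 0.91,
         "use_count": 7, "age_days": 3.2},
        {"id": "mem-2", "reason": "frequent use", "score": 0.58,
         "use_count": 12, "age_days": 9.5},
    ],
    "message": "Would promote 0 memories to obsidian",
}

# Review candidates before re-running without dry_run.
strong = [c["id"] for c in result["candidates"] if c["score"] >= 0.8]
print(strong)  # ['mem-1']
```

Because `dry_run=True` performs no writes, `promoted_ids` stays empty; the same call without `dry_run` would populate it.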

Input Schema

Name         Required  Description  Default
auto_detect  No
dry_run      No
force        No
memory_id    No
target       No                     obsidian
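Every field in the schema is optional, but the handler itself rejects calls that supply neither memory_id nor auto_detect. A minimal sketch of that precondition (check_args is an illustrative helper, not part of the server):

```python
def check_args(args: dict) -> None:
    """Mirror the handler's precondition: a call must either name a specific
    memory or opt in to auto-detection, otherwise it is rejected."""
    if not args.get("memory_id") and not args.get("auto_detect"):
        raise ValueError("Must specify memory_id or set auto_detect=true")

# Promote one specific memory (the ID shown is a placeholder, not a real UUID):
check_args({"memory_id": "00000000-0000-0000-0000-000000000000", "dry_run": True})
# Or scan for candidates automatically:
check_args({"auto_detect": True, "target": "obsidian"})
```

When both are given, the implementation below shows that memory_id takes precedence and auto_detect is ignored.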

Output Schema

No arguments

Implementation Reference

  • The promote_memory tool handler, registered with the @mcp.tool() decorator. It promotes high-value memories to long-term storage such as an Obsidian vault, supporting promotion of a specific memory ID, automatic candidate detection, dry-run preview, and forced promotion.
    @mcp.tool()
    def promote_memory(
        memory_id: str | None = None,
        auto_detect: bool = False,
        dry_run: bool = False,
        target: str = "obsidian",
        force: bool = False,
    ) -> dict[str, Any]:
        """
        Promote high-value memories to long-term storage.
    
        Memories with high scores or frequent usage are promoted to the Obsidian
        vault (or other long-term storage) where they become permanent.
    
        Args:
            memory_id: Specific memory ID to promote (valid UUID).
            auto_detect: Automatically detect promotion candidates.
            dry_run: Preview what would be promoted without promoting.
            target: Storage backend for promotion. Default: "obsidian" (Obsidian-compatible markdown).
                    Note: This is a storage format, not a file path. Path configured via LTM_VAULT_PATH.
            force: Force promotion even if criteria not met.
    
        Returns:
            List of promoted memories and promotion statistics.
    
        Raises:
            ValueError: If memory_id is invalid or target is not supported.
        """
        # Input validation
        if memory_id is not None:
            memory_id = validate_uuid(memory_id, "memory_id")
    
        target = validate_target(target, "target")
    
        now = int(time.time())
        promoted_ids = []
        candidates = []
    
        if memory_id:
            memory = db.get_memory(memory_id)
            if memory is None:
                return {"success": False, "message": f"Memory not found: {memory_id}"}
            if memory.status == MemoryStatus.PROMOTED:
                return {
                    "success": False,
                    "message": f"Memory already promoted: {memory_id}",
                    "promoted_to": memory.promoted_to,
                }
    
            promote_it, reason, score = should_promote(memory, now)
            if not promote_it and not force:
                return {
                    "success": False,
                    "message": f"Memory does not meet promotion criteria: {reason}",
                    "score": round(score, 4),
                }
    
            candidates = [
                PromotionCandidate(
                    memory=memory,
                    reason=reason,
                    score=score,
                    use_count=memory.use_count,
                    age_days=calculate_memory_age(memory, now),
                )
            ]
        elif auto_detect:
            memories = db.list_memories(status=MemoryStatus.ACTIVE)
            for memory in memories:
                promote_it, reason, score = should_promote(memory, now)
                if promote_it:
                    candidates.append(
                        PromotionCandidate(
                            memory=memory,
                            reason=reason,
                            score=score,
                            use_count=memory.use_count,
                            age_days=calculate_memory_age(memory, now),
                        )
                    )
            candidates.sort(key=lambda c: c.score, reverse=True)
        else:
            return {
                "success": False,
                "message": "Must specify memory_id or set auto_detect=true",
            }
    
        if not dry_run and candidates:
            integration = BasicMemoryIntegration()
            config = get_config()
    
            # Initialize LTM index if vault is configured
            ltm_index = None
            if config.ltm_vault_path and config.ltm_vault_path.exists():
                ltm_index = LTMIndex(vault_path=config.ltm_vault_path)
                # Load existing index if it exists
                if ltm_index.index_path.exists():
                    ltm_index.load_index()
    
            for candidate in candidates:
                if target == "obsidian":
                    result = integration.promote_to_obsidian(candidate.memory)
                else:
                    return {"success": False, "message": f"Unknown target: {target}"}
    
                if result["success"]:
                    db.update_memory(
                        memory_id=candidate.memory.id,
                        status=MemoryStatus.PROMOTED,
                        promoted_at=now,
                        promoted_to=result.get("path"),
                    )
                    promoted_ids.append(candidate.memory.id)
    
                    # Incrementally update LTM index with newly promoted file
                    if ltm_index and result.get("full_path"):
                        try:
                            file_path = Path(result["full_path"])
                            ltm_index.add_document(file_path)
                        except Exception as e:
                            print(f"Warning: Failed to update LTM index for {result['path']}: {e}")
    
        return {
            "success": True,
            "dry_run": dry_run,
            "candidates_found": len(candidates),
            "promoted_count": len(promoted_ids),
            "promoted_ids": promoted_ids,
            "candidates": [
                {
                    "id": c.memory.id,
                    "content_preview": c.memory.content[:100],
                    "reason": c.reason,
                    "score": round(c.score, 4),
                    "use_count": c.use_count,
                    "age_days": round(c.age_days, 1),
                }
                for c in candidates[:10]
            ],
            "message": (
                f"{'Would promote' if dry_run else 'Promoted'} {len(promoted_ids)} memories to {target}"
            ),
        }
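The handler delegates its promotion decision to should_promote, whose source is not shown on this page. The following is a hedged sketch of what such a scorer could look like; the Memory fields, weights, and PROMOTION_THRESHOLD below are illustrative assumptions, not the project's actual logic:

```python
import time
from dataclasses import dataclass

PROMOTION_THRESHOLD = 0.65  # assumed cutoff, not the project's actual value

@dataclass
class Memory:
    id: str
    use_count: int
    created_at: int  # unix seconds
    last_used: int   # unix seconds

def calculate_memory_age(memory: Memory, now: int) -> float:
    """Age of the memory in days."""
    return (now - memory.created_at) / 86400

def should_promote(memory: Memory, now: int) -> tuple[bool, str, float]:
    """Return (promote?, human-readable reason, score in [0, 1])."""
    age_days = max(calculate_memory_age(memory, now), 1e-9)
    # Usage frequency: uses per day, capped so a short burst cannot dominate.
    frequency = min(memory.use_count / age_days, 1.0)
    # Recency: decays linearly to zero over 30 days since last use.
    recency = max(0.0, 1.0 - (now - memory.last_used) / (30 * 86400))
    score = 0.6 * frequency + 0.4 * recency
    if score >= PROMOTION_THRESHOLD:
        return True, f"score {score:.2f} >= threshold", score
    return False, f"score {score:.2f} below threshold", score
```

The real implementation may weigh additional signals (for example, explicit importance scores); this sketch only captures the high-score/frequent-use intuition stated in the tool description.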
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that promotion makes memories 'permanent' and mentions a 'dry_run' option for previewing, which adds behavioral context. However, it doesn't cover critical aspects like permissions needed, rate limits, error handling, or what 'permanent' entails operationally (e.g., irreversible changes).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a purpose statement, elaboration, and clear sections for Args and Returns. It's appropriately sized with no redundant sentences, though the elaboration could be slightly more concise. Every sentence adds value, and it's front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters with 0% schema coverage and no annotations, the description does a good job explaining parameter semantics and the tool's purpose. The presence of an output schema means the description doesn't need to detail return values, which it correctly omits. However, for a tool that makes memories 'permanent', more behavioral context (e.g., side effects, prerequisites) would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides meaningful semantics for all 5 parameters: 'memory_id' for specific promotion, 'auto_detect' for automatic candidate detection, 'dry_run' for previewing, 'target' for destination (default 'obsidian'), and 'force' to override criteria. This adds substantial value beyond the bare schema, though it doesn't detail parameter interactions (e.g., 'memory_id' vs 'auto_detect').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Promote high-value memories to long-term storage' with specific criteria (high scores or frequent usage) and destination (Obsidian vault or other storage). It distinguishes from siblings like 'save_memory' or 'consolidate_memories' by focusing on promotion to permanent storage, though it doesn't explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when memories have high scores or frequent usage, but doesn't explicitly state when to use this tool versus alternatives like 'save_memory' or 'consolidate_memories'. It mentions 'auto_detect' for automatic candidate detection, providing some contextual guidance, but lacks clear exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
