
get_low_confidence_mementos

Identify and review potentially obsolete knowledge in the memory system by finding memories with low confidence scores for quality assurance and cleanup.

Instructions

Find memories with low confidence scores.

Use for:

  • Identifying potentially obsolete knowledge

  • Periodic cleanup and verification

  • Quality assurance of the knowledge base

  • Finding memories that need review

Features:

  • Filter by confidence threshold (default: < 0.3)

  • Shows relationships causing low confidence

  • Includes memory details and last access time

  • Sorted by confidence (lowest first)

Returns:

  • List of low confidence relationships with associated memories

  • Memory details for both ends of each relationship

  • Confidence scores and last access times
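The filter-sort-limit behavior described above can be sketched as follows. This is a minimal illustration, not the server's actual code; `Relationship` here is a hypothetical stand-in for the project's own model:

```python
from dataclasses import dataclass

@dataclass
class Relationship:
    from_memory_id: str
    to_memory_id: str
    confidence: float

def low_confidence(relationships, threshold=0.3, limit=20):
    """Keep relationships below the threshold, sorted lowest first, capped at limit."""
    hits = [r for r in relationships if r.confidence < threshold]
    hits.sort(key=lambda r: r.confidence)
    return hits[:limit]

rels = [
    Relationship("a1", "b2", 0.05),
    Relationship("c3", "d4", 0.45),  # above threshold, excluded
    Relationship("e5", "f6", 0.20),
]
print([r.confidence for r in low_confidence(rels)])  # → [0.05, 0.2]
```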

Input Schema

| Name      | Required | Description               | Default |
| --------- | -------- | ------------------------- | ------- |
| threshold | No       | Confidence threshold      | 0.3     |
| limit     | No       | Maximum number of results | 20      |
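Both parameters are optional; the handler coerces them with plain `float`/`int` casts and falls back to the documented defaults, roughly:

```python
def parse_arguments(arguments: dict) -> tuple:
    """Apply the documented defaults when a caller omits a parameter."""
    threshold = float(arguments.get("threshold", 0.3))
    limit = int(arguments.get("limit", 20))
    return threshold, limit

print(parse_arguments({}))                  # → (0.3, 20)
print(parse_arguments({"threshold": 0.5}))  # → (0.5, 20)
```

Note that the casts accept string values (e.g. `"0.5"`) but will raise `ValueError` on non-numeric input, which the handler does not catch.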

Implementation Reference

  • The implementation of the get_low_confidence_mementos tool handler.
    async def handle_get_low_confidence_mementos(
        memory_db: SQLiteMemoryDatabase, arguments: Dict[str, Any]
    ) -> CallToolResult:
        """Handle get_low_confidence_memories tool call.
    
        Args:
            memory_db: Database instance for memory operations
            arguments: Tool arguments from MCP call containing:
                - threshold: Confidence threshold (default: 0.3)
                - limit: Maximum number of results to return (default: 20)
    
        Returns:
            CallToolResult with list of low confidence memories or error message
        """
        threshold = float(arguments.get("threshold", 0.3))
        limit = int(arguments.get("limit", 20))
    
        # Get low confidence relationships
        relationships = await memory_db.get_low_confidence_relationships(
            threshold=threshold, limit=limit
        )
    
        if not relationships:
            return CallToolResult(
                content=[
                    TextContent(
                        type="text",
                        text=f"No relationships found with confidence below {threshold}",
                    )
                ]
            )
    
        # Get unique memory IDs from relationships
        memory_ids = set()
    
        for rel in relationships:
            memory_ids.add(rel.from_memory_id)
            memory_ids.add(rel.to_memory_id)
    
        # Fetch memory details
        memories: List[Memory] = []
    
        for memory_id in list(memory_ids)[:limit]:
            try:
                memory = await memory_db.get_memory_by_id(memory_id)
    
                if memory:
                    memories.append(memory)
            except Exception as e:
                logger.warning(f"Failed to fetch memory {memory_id}: {e}")
    
        if not memories:
            return CallToolResult(
                content=[
                    TextContent(
                        type="text",
                        text=f"Found {len(relationships)} low confidence relationships but could not fetch memory details",
                    )
                ]
            )
    
        # Format results
        result_text = f"**Low Confidence Memories (threshold: confidence < {threshold})**\n\n"
        result_text += (
            f"Found {len(memories)} memories with low confidence relationships:\n\n"
        )
    
        for i, memory in enumerate(memories, 1):
            # Find relationships for this memory
            mem_relationships = [
                rel
                for rel in relationships
                if rel.from_memory_id == memory.id or rel.to_memory_id == memory.id
            ]
    
            result_text += f"**{i}. {memory.title}** (ID: {memory.id})\n"
            result_text += f"Type: {memory.type.value} | Importance: {memory.importance:.2f}\n"
    
            if memory.summary:
                result_text += f"Summary: {memory.summary[:150]}...\n"
    
            if mem_relationships:
                result_text += "Low confidence relationships:\n"
    
                for rel in mem_relationships[:3]:
                    other_id = (
                        rel.to_memory_id
                        if rel.from_memory_id == memory.id
                        else rel.from_memory_id
                    )
                    # "Last accessed" refers to the relationship, not the memory
                    last_acc = rel.properties.last_accessed
                    last_acc_str = (
                        last_acc.strftime("%Y-%m-%d")
                        if last_acc
                        else "never accessed via search"
                    )
                    result_text += (
                        f"  - {rel.type.value} → {other_id[:8]}... "
                        f"(relationship confidence: {rel.properties.confidence:.2f}, "
                        f"last accessed: {last_acc_str})\n"
                    )
    
            result_text += "\n"
    
        result_text += "**💡 Suggestions:**\n"
        result_text += "- Review these memories for accuracy\n"
        result_text += "- Use `adjust_memento_confidence` to update if still valid\n"
        result_text += "- Consider deleting if obsolete\n"
        result_text += "- Use `boost_memento_confidence` if recently validated\n"
        result_text += "\n"
    result_text += "ℹ️ **Note:** 'relationship confidence' is the decay-tracked score on each "
        result_text += "edge — it decreases automatically when the relationship is not accessed. "
        result_text += "'last accessed' tracks when the relationship was last retrieved by a search, "
        result_text += "not when the memory was created.\n"
    
        return CallToolResult(content=[TextContent(type="text", text=result_text)])
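The closing note says the confidence score on each edge "decreases automatically when the relationship is not accessed," but the decay curve itself is not shown in this reference. An exponential half-life model is one common way such decay is implemented; the sketch below is an assumption for illustration, with a hypothetical `half_life_days` parameter, not the server's actual formula:

```python
def decayed_confidence(confidence: float, days_since_access: float,
                       half_life_days: float = 30.0) -> float:
    """Hypothetical exponential decay: confidence halves every half_life_days
    without access. The actual server may use a different curve."""
    return confidence * 0.5 ** (days_since_access / half_life_days)

print(round(decayed_confidence(0.8, 30), 3))  # → 0.4
print(round(decayed_confidence(0.8, 60), 3))  # → 0.2
```

Under a model like this, an edge that started at 0.8 would drop below the default 0.3 threshold after roughly 42 days without being retrieved by a search.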
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses key behaviors: default threshold < 0.3, sorting order (lowest first), returns relationships (not just mementos), includes last access time. Missing only minor details like rate limits or pagination.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with front-loaded purpose statement followed by organized bullet lists (Use for/Features/Returns). Every section adds value. Slightly verbose format compared to ultra-concise single-sentence descriptions, but appropriate given lack of output schema requiring behavioral description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates effectively for missing output schema by detailing return structure (relationships with associated memories, confidence scores, access times). Two simple optional parameters with complete schema coverage. No annotations but description adequately covers read-only safety.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (threshold and limit documented), establishing baseline 3. Description reinforces threshold semantics (filtering behavior, default < 0.3) but does not add syntax details, validation nuances, or usage guidance beyond schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb+resource: 'Find memories with low confidence scores.' Clearly distinguishes from sibling confidence tools (adjust, boost, apply decay) by focusing on retrieval/querying rather than mutation. Scope is precisely defined by the confidence threshold concept.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Use for:' section with 4 concrete scenarios (obsolete knowledge identification, cleanup, QA, review needs). Offers clear context for invocation. Lacks explicit 'when not to use' or named alternatives (e.g., vs search_mementos), preventing a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/x-hannibal/mcp-memento'
