explain_memory

Debug unexpected memory search results by analyzing retrieval paths, freshness, scope filters, and matched terms to understand why memories matched your query.

Instructions

Explain why memories matched a query: retrieval path, freshness, scope, and matched terms. Read-only. Use when search results seem unexpected and you need to debug ranking or scope filtering.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | Search query to explain — natural language or keywords, e.g. 'auth migration' | |
| limit | No | Maximum number of matched results to analyze and explain | 5 |
| scope | No | Restrict to a specific scope, e.g. 'project:myapp'. Omit to use default scope | |
| sessionId | No | Session identifier to infer session-scoped search, e.g. 'abc123' | |
| allScopes | No | Set to true to search across all scopes instead of the default scope | |
| category | No | Filter results by memory category, e.g. 'preference', 'decision', 'fact' | |
| profile | No | Retrieval profile that tunes ranking: 'debug' for technical, 'fact-check' for precision | |
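Built only from the parameter names and examples in the schema above, a plausible arguments payload looks like the following. The surrounding call transport (an MCP `tools/call` request) is assumed and not shown; the lightweight sanity checks are illustrative, not part of the tool.

```python
# A plausible arguments payload for explain_memory, using only the
# parameter names and example values given in the input schema.
arguments = {
    "query": "auth migration",  # required: natural language or keywords
    "limit": 5,                 # optional: max results to analyze (default 5)
    "scope": "project:myapp",   # optional: restrict to one scope
    "category": "decision",     # optional: filter by memory category
    "profile": "debug",         # optional: ranking profile for technical queries
}

# Minimal client-side sanity checks before sending the request.
assert isinstance(arguments["query"], str) and arguments["query"].strip()
assert arguments.get("limit", 5) >= 1

print("payload keys:", sorted(arguments))
# → payload keys: ['category', 'limit', 'profile', 'query', 'scope']
```

Note that `scope` and `allScopes` are alternatives: setting `allScopes` to true makes an explicit `scope` value redundant.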
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Declares 'Read-only' safety trait and clarifies diagnostic output dimensions. Could improve by describing output format (text vs structured) or whether explanations are logged/persisted.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tight clauses: purpose definition, safety declaration, and usage trigger. No filler words. Front-loaded with the core value proposition ('Explain why memories matched').

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 7-parameter debugging tool without an output schema. Mentions the key output components (retrieval path, freshness, etc.), which compensates for the missing output schema. Missing only an explicit mention of the contrasting tool (search_memory) and the detailed output format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with detailed enums and examples. Description focuses on behavior rather than parameters, which is appropriate given the schema completeness. Baseline 3 is correct as no additional param semantics are needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
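The three scope-related parameters (scope, sessionId, allScopes) overlap, and the schema does not say how they combine. The sketch below is one plausible precedence order, purely an assumption for illustration: allScopes wins, then an explicit scope, then a session inference, then the default.

```python
# Hypothetical resolution of the scope-related parameters. The server does
# not document this precedence; every rule here is an assumption.
def resolve_scope(scope=None, all_scopes=False, session_id=None,
                  default_scope="project:default"):
    """Pick the scope filter an explain_memory call would likely apply."""
    if all_scopes:
        return "*"                      # search every scope
    if scope is not None:
        return scope                    # explicit scope beats inference
    if session_id is not None:
        return f"session:{session_id}"  # infer a session-scoped search
    return default_scope                # fall back to the default scope

print(resolve_scope(all_scopes=True))        # → *
print(resolve_scope(scope="project:myapp"))  # → project:myapp
print(resolve_scope(session_id="abc123"))    # → session:abc123
```

A description clause like "allScopes overrides scope and sessionId" would settle this ambiguity without touching the schema.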

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Explain why memories matched') and details exactly what aspects get explained ('retrieval path, freshness, scope, and matched terms'). Distinguishes from sibling search_memory by emphasizing diagnostic explanation rather than retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use condition ('when search results seem unexpected and you need to debug ranking or scope filtering'). Implies search_memory is the regular alternative but does not explicitly name it, which would strengthen guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
