recall

Search persistent memory by semantic similarity to retrieve ranked and scored memories for technical context before answering questions, editing files, or making architectural decisions. Filter by memory type like facts, sessions, or workflows.

Instructions

Search persistent memory by semantic similarity. Returns ranked HolographicBlock memories. WHEN TO CALL: Before answering any technical question, before editing a file, before making an architectural decision — check memory first. OUTPUT: Each result shows concept name, score (0-1), crs (confidence), and text snippet. Score >0.80 = strong match. Score 0.65-0.80 = relevant context. Score <0.65 = weak. CRS in result tells you how reliable that memory is: >=0.74 is grounded fact. ZEDOS FILTER GUIDE: 'praxis'=crystallized solutions that worked | 'declarative'=facts and architecture | 'episodic'=session logs | 'operational'=procedures and workflows | 'relation'=concept graph edges. TIME DECAY: Only use when user asks about past work (e.g. 'last week'). Use mcp_engram_read_concept after recall to get the full un-truncated text.
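As a minimal sketch of how an agent might apply the score and CRS thresholds stated in the description (the result shape here is an assumption for illustration, not the tool's documented output schema):

```python
def interpret_result(result: dict) -> str:
    """Classify a recall result using the thresholds from the tool description.

    `result` is assumed to look like {"concept": str, "score": float,
    "crs": float, "text": str}; the actual wire format may differ.
    """
    score = result["score"]
    if score > 0.80:
        strength = "strong match"
    elif score >= 0.65:
        strength = "relevant context"
    else:
        strength = "weak"
    # Per the description, CRS >= 0.74 marks the memory as a grounded fact.
    reliability = "grounded fact" if result["crs"] >= 0.74 else "unverified"
    return f"{strength} ({reliability})"

print(interpret_result({"concept": "auth-flow", "score": 0.91, "crs": 0.80, "text": "..."}))
```

An agent could use such a classification to decide whether to trust a memory directly or to follow up with mcp_engram_read_concept for the full text.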

Input Schema

k (optional): Number of results to return (default: 5, max: 20)

query (required): Natural language query describing what you want to find

time_decay (optional): Use ONLY when the user asks a time-relative question like 'What did we work on last week?' or 'Find the old version of this file'. It applies a backwards unitary operator offset to traverse semantic age. Positive number = days in the past (e.g. 7.0 for a week ago).

zedos_filter (optional): Filter by memory type. One of: 'declarative', 'episodic', 'operational', 'praxis', 'relation'. Leave unset for all types.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description handles behavioral disclosure. It details output score interpretation, CRS confidence levels, and zedos_filter meanings. It does not explicitly state that the tool is read-only, but since it is a search, this is a minor omission.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured into labeled sections (WHEN TO CALL, OUTPUT, TIME DECAY, ZEDOS FILTER GUIDE). It is front-loaded with purpose and each sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is no output schema and the server exposes many sibling tools, the description is comprehensive: it covers usage, output details, parameter guidance, and even points to a complementary tool (mcp_engram_read_concept). The only minor gap is side-effect disclosure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers all four parameters, but the description adds significant value: it explains the condition for using time_decay, the zedos_filter types, and that query takes natural language. This goes beyond the schema's own descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches persistent memory by semantic similarity and returns ranked memories. It distinguishes itself from siblings like mcp_engram_query_with_momentum by being the basic semantic search tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to call the tool (before answering questions, editing files, or making decisions), when to use time_decay (only for time-relative queries), and gives a follow-up action (use mcp_engram_read_concept for the full text).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
