Glama

recall

Search saved memories using natural language to retrieve past decisions, facts, and context by semantic meaning. Returns ranked results by relevance.

Instructions

Search your memories by meaning, not keywords.

Uses semantic similarity to find the most relevant memories you've saved. Results are ranked by how closely they match your query.

Use this when:

  • Looking up a past decision: recall("what database did we choose?")

  • Checking what you know about a topic: recall("authentication setup")

  • Finding a specific fact: recall("API rate limits", category="fact")

  • Starting a new conversation: recall("current project status") to get context

Args:

  • query: What you're looking for, in natural language. Longer, more specific queries produce better results than single words.

  • limit: Maximum results to return (default 5, max 20). Use higher limits when exploring a broad topic.

  • category: Optional filter — only return memories tagged with this category. One of: preference, fact, decision, idea, project, person, general.

Returns: A ranked list of matching memories with their IDs, content, category, importance, and similarity score. Returns an empty list if no matches are found, and an error message if the server is unreachable.
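The behavior described above (semantic ranking, limit clamping, category filtering, empty results) can be sketched with a toy in-memory store. Bag-of-words cosine similarity stands in for a real embedding model here, and the memories, IDs, and scores are invented for illustration — not the server's actual implementation:

```python
import math
from collections import Counter

# Hypothetical in-memory store; the real server persists memories.
MEMORIES = [
    {"id": 1, "content": "we chose postgres as the primary database", "category": "decision"},
    {"id": 2, "content": "api rate limit is 100 requests per minute", "category": "fact"},
    {"id": 3, "content": "alice prefers dark mode in the editor", "category": "preference"},
]

def embed(text):
    # Toy "embedding": a word-count vector. Real semantic search
    # would use a learned embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(query, limit=5, category=None):
    q = embed(query)
    limit = max(1, min(limit, 20))  # default 5, max 20 per the docs
    scored = []
    for m in MEMORIES:
        if category is not None and m["category"] != category:
            continue
        s = cosine(q, embed(m["content"]))
        if s > 0:  # no overlap at all -> not a match
            scored.append({**m, "score": round(s, 3)})
    scored.sort(key=lambda m: m["score"], reverse=True)
    return scored[:limit]  # empty list when nothing matches
```

For example, `recall("what database did we choose")` ranks the decision memory first, while a query with no overlap returns an empty list, matching the documented empty-result behavior.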

Input Schema

Name      Required  Description                                   Default
query     Yes       Natural-language search query                 -
limit     No        Maximum results to return (1 to 20)           5
category  No        Restrict results to one category              -
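Given the table above and the constraints stated in the description, the input schema plausibly looks like the following. This is a hedged reconstruction: the bounds and enum are taken from the prose description, and the server's actual schema may omit or name them differently:

```python
# Hypothetical reconstruction of the recall tool's input schema.
# Constraints mirror the description (default 5, max 20, fixed categories);
# the actual published schema may differ.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "limit": {"type": "integer", "default": 5, "minimum": 1, "maximum": 20},
        "category": {
            "type": "string",
            "enum": ["preference", "fact", "decision", "idea",
                     "project", "person", "general"],
        },
    },
    "required": ["query"],
}
```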

Output Schema

Name      Required  Description                       Default
result    Yes       Ranked list of matching memories  -

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses ranking behavior (similarity score), return structure (IDs, content, category, etc.), empty result handling, and error conditions (server unreachable). Minor gap: doesn't explicitly confirm read-only nature, though implied by 'search'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (tagline, mechanism, usage cases, args, returns). Front-loaded with the key differentiator ('meaning, not keywords'). Information-dense though slightly verbose; the four example use cases could be condensed without loss.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage for a semantic search tool. Although an output schema is present, the description still helpfully previews the return structure. It explains parameter validation (max 20), category constraints, and result ranking. No significant gaps for this complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema coverage, description fully compensates via detailed Args section. Query explains natural language input and length guidance. Limit specifies bounds (default 5, max 20) and usage context. Category enumerates all valid enum values (preference, fact, decision, idea, project, person, general) and filter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
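The parameter semantics praised above (length guidance for query, bounded limit, closed category set) can be enforced client-side before a call. A minimal sketch, assuming the constraints stated in the description; the function and error messages are illustrative, not part of the tool:

```python
# Valid categories enumerated in the tool description.
CATEGORIES = {"preference", "fact", "decision", "idea",
              "project", "person", "general"}

def validate_recall_args(query, limit=5, category=None):
    # Enforce the documented constraints before calling the tool:
    # non-empty natural-language query, limit in [1, 20], known category.
    if not isinstance(query, str) or not query.strip():
        raise ValueError("query must be a non-empty string")
    if not (1 <= limit <= 20):
        raise ValueError("limit must be between 1 and 20 (default 5)")
    if category is not None and category not in CATEGORIES:
        raise ValueError(f"unknown category: {category!r}")
    return {"query": query, "limit": limit, "category": category}
```

Validating early like this surfaces a bad enum value or out-of-range limit as a clear client-side error instead of a round-trip to the server.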

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb+resource ('Search your memories') and immediately distinguishes the mechanism ('by meaning, not keywords'). Clearly differentiates from siblings like list_memories (which would scan) or remember (which creates) by emphasizing semantic similarity search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'Use this when:' section with four concrete scenarios including example query strings. Covers distinct use cases (decision lookup, topic check, fact finding, conversation startup) that clearly guide selection over alternatives like get_note or list_memories.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
