
recall

Search past conversations and memories using time filters and categories to retrieve previous decisions, preferences, or facts from your workspace.

Instructions

Search past memories from previous sessions. Call this when the user asks 'what did I say about...', 'do you remember...', or references past conversations.

Supports time filters (since/until as ISO date, e.g. '2026-03-01') and category filter (decision/preference/fact).

Input Schema

Name       Required    Description    Default
query      Yes
top_k      No
since      No
until      No
category   No
project    No
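As a sketch of what a call to this tool might look like (the argument names come from the schema above; the values, and the assumed meanings of the undescribed `top_k` and `project` fields, are illustrative guesses, not documented behavior):

```python
# Illustrative arguments for a "recall" call. Field names come from the
# input schema; the values are made up for demonstration.
recall_args = {
    "query": "database migration plan",   # required free-text search query
    "top_k": 5,                           # assumed: max number of results
    "since": "2026-03-01",                # ISO date lower bound (per description)
    "until": "2026-04-01",                # ISO date upper bound
    "category": "decision",               # one of: decision / preference / fact
    "project": "project-tessera",         # assumed: workspace/project scope
}

# Only "query" is required; a minimal call drops the optional filters.
minimal_args = {"query": "database migration plan"}
```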

Output Schema

Name       Required    Description    Default
result     Yes

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the search functionality and filtering capabilities but doesn't mention important behavioral aspects like whether this is a read-only operation, what permissions are needed, how results are ranked, or what happens when no matches are found. The description adds some context but leaves significant behavioral questions unanswered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with two sentences that each earn their place: the first establishes purpose and usage context, the second explains parameter capabilities. There's no wasted text, and the most important information (what the tool does and when to use it) comes first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, 1 required), no annotations, and the presence of an output schema, the description does a good job covering the essentials. It explains the core functionality, when to use it, and key parameter behaviors. The output schema means return values don't need explanation, but more behavioral context would improve completeness for a search tool with multiple filtering options.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining the semantics of key parameters: it clarifies that 'since/until' are time filters using ISO dates and that 'category' accepts specific values (decision/preference/fact). However, it doesn't explain the 'query', 'top_k', or 'project' parameters, leaving some parameter semantics undocumented.
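The documented parameter constraints (ISO dates for `since`/`until`, an enumerated `category`) could be checked client-side before a call. The sketch below is an assumption about the intended semantics, not something the schema itself enforces:

```python
from datetime import date

# Category values are taken from the tool description; nothing else is enforced
# by the schema, so this validation is a client-side assumption.
ALLOWED_CATEGORIES = {"decision", "preference", "fact"}

def validate_recall_args(args: dict) -> list[str]:
    """Return a list of problems with a proposed recall call (empty if OK)."""
    problems = []
    if not args.get("query"):
        problems.append("query is required")
    for key in ("since", "until"):
        if key in args:
            try:
                date.fromisoformat(args[key])  # e.g. '2026-03-01'
            except ValueError:
                problems.append(f"{key} must be an ISO date")
    if "category" in args and args["category"] not in ALLOWED_CATEGORIES:
        problems.append("category must be decision, preference, or fact")
    return problems
```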

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Search') and resource ('past memories from previous sessions'). It distinguishes from siblings by focusing on conversational memory recall rather than document search, analytics, or memory management operations present in the sibling list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance with concrete examples ('when the user asks 'what did I say about...', 'do you remember...', or references past conversations'). This gives clear context for when to invoke this tool versus alternatives like search_documents, deep_search, or unified_search from the sibling list.
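The cue phrases quoted in the description suggest how an agent might route between this tool and its document-search siblings. The sibling tool names come from the page; the matching logic below is purely an illustrative heuristic:

```python
# Phrases that signal a reference to past conversation (the first two are
# quoted from the tool description; the rest are illustrative additions).
MEMORY_CUES = ("what did i say", "do you remember", "last time we", "you mentioned")

def pick_tool(user_message: str) -> str:
    """Route to 'recall' when the request references past conversation,
    otherwise fall back to a document-search tool (illustrative heuristic)."""
    text = user_message.lower()
    if any(cue in text for cue in MEMORY_CUES):
        return "recall"
    return "search_documents"
```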

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/besslframework-stack/project-tessera'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.