Glama

auto_capture

Extract preferences, facts, and decisions from conversation text using zero-LLM heuristics. Store memory-worthy signals as durable memories to maintain persistent context for AI agents.

Instructions

Extract memory-worthy items from a conversation turn using lightweight heuristics (zero LLM calls). Detects preferences, identity facts, decisions, corrections, explicit memory instructions, and workflow patterns. Items that pass salience filtering are stored as durable memories. Use this when you want to analyze a block of conversation text and automatically capture any signals worth remembering.
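The "lightweight heuristics (zero LLM calls)" described above could be sketched roughly as pattern matching per detection category. This is an illustrative approximation only; the category names are taken from the description, but the regex patterns and function shape are hypothetical, not RecallNest's actual implementation:

```python
import re

# Hypothetical signal patterns, one list per category the description names.
# The real auto_capture heuristics are not published; this is a sketch.
SIGNAL_PATTERNS = {
    "preference": [r"\bI (?:prefer|like|always use)\b"],
    "identity_fact": [r"\bI am\b", r"\bmy name is\b"],
    "decision": [r"\bwe (?:decided|agreed) to\b"],
    "correction": [r"\bno, I meant\b", r"\bthat's wrong\b"],
    "explicit_instruction": [r"\bremember (?:that|this)\b"],
}

def extract_signals(text: str) -> list[dict]:
    """Return memory-worthy items found in a block of conversation text."""
    items = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for category, patterns in SIGNAL_PATTERNS.items():
            if any(re.search(p, sentence, re.IGNORECASE) for p in patterns):
                items.append({"category": category, "text": sentence.strip()})
                break  # one category per sentence is enough for this sketch
    return items

print(extract_signals("I prefer tabs over spaces. The weather is nice."))
```

A real implementation would also apply the salience filtering the description mentions before storing anything; this sketch only shows the detection step.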

Input Schema

| Name   | Required | Description                                                  | Default |
|--------|----------|--------------------------------------------------------------|---------|
| text   | Yes      | Conversation text to analyze for memory-worthy signals       |         |
| scope  | Yes      | Required scope such as project:recallnest or session:abc123  |         |
| source | No       | How this memory was captured                                 | agent   |
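A typical call shape, following the schema above (the `text` and `scope` values here are illustrative):

```json
{
  "name": "auto_capture",
  "arguments": {
    "text": "I prefer dark mode. We decided to ship on Friday.",
    "scope": "project:recallnest",
    "source": "agent"
  }
}
```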
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It adds valuable implementation context ('zero LLM calls', 'salience filtering', 'durable memories') but omits safety-critical behavioral traits like whether the operation is idempotent, what happens when no items pass filtering, or potential rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is optimally concise with four sentences that progress logically: mechanism (sentence 1), detection targets (sentence 2), storage behavior (sentence 3), and usage trigger (sentence 4). No redundancy or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 simple parameters and no output schema, the description adequately covers functional purpose and operational context. It could be improved by noting the return behavior (e.g., what the tool returns when memories are captured vs. when none are found), but this is not critical given the tool's straightforward nature.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'analyze a block of conversation text' which aligns with the text parameter, but does not add semantic nuance beyond what the schema already provides (e.g., no format guidance for scope or usage patterns for source).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool extracts 'memory-worthy items' using 'lightweight heuristics' and lists specific detection targets (preferences, identity facts, etc.). However, it does not explicitly differentiate from siblings like store_memory or batch_store, leaving the agent to infer when auto-extraction is preferred over manual storage.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance with 'Use this when you want to analyze a block of conversation text...' which clarifies the trigger condition. However, it lacks 'when-not-to-use' guidance or named alternatives (e.g., 'use store_memory for manual capture instead').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/AliceLJY/recallnest'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.