Glama

iranti_attend

Check whether to inject stored memory before LLM responses and after tool calls to avoid redundant lookups and maintain conversation continuity.

Instructions

Ask Iranti whether memory should be injected before the next LLM turn. REQUIRED CALL SEQUENCE — follow this every turn, regardless of host:

  1. Call with phase='pre-response' BEFORE replying to the user.

  2. Call BEFORE any lookup tool (Read, Grep, Glob, Bash, WebSearch, WebFetch) where Iranti might already hold the answer. When you do, pass the pendingToolCall field so Iranti can derive entity hints from the tool target (file, URL, query) and preempt the lookup with stored facts.

  3. If you just ran Edit/Write/Bash/WebSearch/WebFetch since your last iranti_write, call iranti_write FIRST — then attend.

  4. Call with phase='post-response' AFTER every reply, without exception.

If the user asks you to recall a remembered fact (a preference, decision, blocker, next step, or prior project detail), call this tool before answering instead of guessing or saying you do not know. It returns an injection decision plus any facts that should be added when relevant memory is missing from context. If no handshake has yet been performed for this agent in the current process, attend auto-bootstraps the session first and reports that in the result metadata; this makes it the minimum safe pre-reply call even when the host skipped the handshake. Omitting currentContext falls back to the latest message only, so pass the full visible context when available. For host compatibility, message is accepted as an alias for latestMessage. When phase='post-response', pass the full assistant response so Iranti can persist strict continuity facts and shared checkpoint state before closing the turn.
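The required call sequence above can be sketched as follows. This is a hedged illustration, not a real client: `call_tool` is a hypothetical stand-in for whatever MCP invocation the host provides, the shape of the `pendingToolCall` object is an assumption, and only the field names (`phase`, `latestMessage`, `pendingToolCall`) come from the tool's input schema.

```python
# Sketch of one full turn against iranti_attend. `call_tool` is a
# placeholder for the host's MCP client; a real host would send a
# tools/call request here instead of returning the payload.
def call_tool(name, arguments):
    return {"tool": name, "arguments": arguments}

user_msg = "What port does the staging API listen on?"

# 1. Pre-response attend, before replying to the user.
pre = call_tool("iranti_attend", {
    "phase": "pre-response",
    "latestMessage": user_msg,
})

# 2. Before a lookup tool, pass pendingToolCall so Iranti can try to
#    preempt the lookup with stored facts. The object shape below is
#    an assumption; the schema only says to "describe" the call.
guard = call_tool("iranti_attend", {
    "phase": "mid-turn",
    "pendingToolCall": {"tool": "Grep", "query": "staging API port"},
})

# 3. (If Edit/Write/Bash/WebSearch/WebFetch ran since the last
#    iranti_write, call iranti_write first, then attend.)

# 4. Post-response attend, with the full assistant reply.
post = call_tool("iranti_attend", {
    "phase": "post-response",
    "latestMessage": "The staging API listens on port 8443.",
})

print([c["arguments"]["phase"] for c in (pre, guard, post)])
# prints ['pre-response', 'mid-turn', 'post-response']
```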

Input Schema

All parameters are optional; no defaults are documented except where noted.

- latestMessage: The full text of the latest user or assistant message. Pass the complete text, not a summary. When phase='post-response', this must be the full assistant response so Iranti can extract and persist durable facts (drafts, decisions, findings) from it.
- message: Alias for latestMessage, accepted for host compatibility. Must be the full message text, not a summary.
- currentContext: Current visible context window.
- entityHints: Optional entity hints in entityType/entityId format.
- maxFacts: Maximum facts to inject.
- forceInject: Force a memory injection decision.
- phase: Call phase: 'pre-response' before replying, 'post-response' after replying, 'mid-turn' for discovery-triggered re-attends within the same turn (e.g. after reading a new file or hitting a new entity). Mid-turn attends dedupe facts already injected this turn, default to a smaller fact budget (3), and skip user-rule re-scans.
- pendingToolCall: Describes the read-only tool call the agent is about to make. Iranti derives entity hints from the tool target (file path, URL, query) and surfaces any stored facts BEFORE the tool runs, so redundant Read/Grep/Bash/WebFetch/WebSearch calls can be preempted with stored memory. The result includes a toolCallGuidance field summarising what was derived.
- toolResult: M2: the raw output of a read-only tool call the agent just completed (Read/Grep/Bash/WebFetch/WebSearch). Iranti auto-extracts durable facts from the output and writes them with source="attendant_autowrite" so the next session does not need to re-run the same tool call. All autowrites share an autowriteBatchId and can be reverted as a group via `iranti revert-autowrite`. The response includes a toolResultExtraction field summarising what was extracted and written.
- agent: Override the default agent id.
- agentId: Alias for agent. Override the default agent id.
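Because two of the fields are pure aliases (message for latestMessage, agentId for agent), a host adapter might normalise its payload before calling the tool. A minimal sketch of that normalisation, assuming the canonical names win when both forms are present; the helper itself is hypothetical and not part of Iranti:

```python
# Normalise host-side aliases before calling iranti_attend:
# `message` is an alias for `latestMessage`, `agentId` for `agent`.
def normalise_attend_args(args):
    out = dict(args)
    if "message" in out and "latestMessage" not in out:
        out["latestMessage"] = out.pop("message")
    if "agentId" in out and "agent" not in out:
        out["agent"] = out.pop("agentId")
    return out

payload = normalise_attend_args({
    "phase": "post-response",
    "message": "Decided: ship the v2 parser behind a flag.",
    "agentId": "claude-code",
})
print(sorted(payload))
# prints ['agent', 'latestMessage', 'phase']
```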
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively covers the key behaviors: auto-bootstrapping sessions when needed, parameter fallbacks, deduplication of facts in mid-turn calls, and fact-injection limits. However, it does not mention error handling or rate limits, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose but grows lengthy with detailed call sequences and parameter notes. While all of the information is relevant, it could be more streamlined: some sentences, such as those about host compatibility, add necessary detail but reduce conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (11 parameters, no annotations, no output schema), the description is largely complete. It covers purpose, usage, and key behaviors, but lacks details on return values or error handling, which would be helpful for an agent invoking this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 11 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, such as noting that 'message' is an alias for 'latestMessage' and explaining the purpose of 'pendingToolCall' and 'toolResult' in context. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask Iranti whether memory should be injected before the next LLM turn.' It specifies the verb ('ask') and resource ('Iranti'), and distinguishes it from siblings by focusing on memory injection decisions rather than other memory operations like querying or writing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit, detailed guidelines on when to use this tool: it lists a required call sequence with three specific scenarios (before replying, before lookup tools, after certain writes) and adds a rule for recalling facts. It also distinguishes usage from alternatives by specifying it's for memory injection decisions, unlike sibling tools for querying or writing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
