
iranti_history

Retrieve the complete version history of a specific fact to analyze how it evolved over time, including past values, changes, and resolutions.

Instructions

Retrieve the full version history of a fact for an exact entity+key pair. Returns all archived past values plus the current value, ordered oldest-first. Each entry includes value, summary, confidence, source, validFrom, validUntil, isCurrent, archivedReason, and resolutionState. REQUIRED: call iranti_attend before this discovery tool so Iranti can decide whether memory should be injected first. Use this to understand how a fact evolved over time — decisions that changed, blockers that were resolved, values that were contested or superseded.
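
The instructions enumerate the fields carried by each history entry. As a minimal sketch of what one entry might look like, here is a hypothetical TypeScript shape; the server publishes no output schema, so every type and comment below is an assumption inferred from the field names:

```typescript
// Hypothetical shape of one iranti_history entry, inferred from the field
// names in the tool description; exact types and allowed values are assumed.
interface FactHistoryEntry {
  value: unknown;             // the fact's value at this point in its history
  summary: string;            // short human-readable summary of the value
  confidence: number;         // confidence score (range assumed, e.g. 0 to 1)
  source: string;             // where the fact came from
  validFrom: string;          // timestamp the value became effective (format assumed)
  validUntil: string | null;  // assumed null while the entry is still current
  isCurrent: boolean;         // true only for the latest, non-archived value
  archivedReason?: string;    // why a past value was superseded, when archived
  resolutionState?: string;   // e.g. "contested" or "resolved" (values assumed)
}
```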

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| entity | Yes | Entity in entityType/entityId format. | |
| key | Yes | Fact key to retrieve history for. | |
| limit | No | Maximum number of entries to return (applied after sorting oldest-first). | |
| includeExpired | No | Include entries that expired without being superseded. | |
| includeContested | No | Include entries that were contested or escalated. | |
| agent | No | Override the default agent id for protocol tracking. | |
| agentId | No | Alias for agent. Override the default agent id for protocol tracking. | |
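
To make the entity/key syntax and the optional filters concrete, here is a hedged sketch of a call made through the MCP TypeScript SDK; the entity, key, limit, and agent values are illustrative placeholders, and the arguments accepted by iranti_attend are not documented on this page:

```typescript
// Sketch of calling iranti_history via the MCP TypeScript SDK.
// Assumes `client` is already connected to the Iranti server; all argument
// values below are examples, not values taken from the Iranti documentation.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

async function fetchFactHistory(client: Client) {
  // The tool description requires iranti_attend before any discovery tool,
  // so it runs first (its arguments are not covered on this page).
  await client.callTool({ name: "iranti_attend", arguments: {} });

  // Retrieve the full oldest-first history for one entity+key pair.
  return client.callTool({
    name: "iranti_history",
    arguments: {
      entity: "project/alpha",    // entityType/entityId format
      key: "deployment-blocker",  // hypothetical fact key
      limit: 20,                  // cap the number of entries returned
      includeExpired: true,       // also return entries that expired unsuperseded
      includeContested: true,     // also return contested or escalated entries
    },
  });
}
```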
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by describing the return format ('ordered oldest-first'), listing the fields in each entry, and specifying a prerequisite action ('call iranti_attend before this'). However, it says nothing about rate limits, authentication requirements, or error conditions, details that matter more when there are no annotations to carry them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It starts with the core purpose, then describes the return format, specifies a critical prerequisite, and ends with usage context. Every sentence adds value, with no redundancy or wasted words, and the most important details are front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 7 parameters, no annotations, and no output schema, the description does a good job covering purpose, usage guidelines, and return format. It provides the prerequisite information and context about what historical insights to expect. However, without annotations or output schema, it could benefit from more behavioral details like error handling or performance characteristics to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description adds no parameter-specific information beyond those schema descriptions. It mentions 'entity+key pair', which aligns with the required parameters, but provides no additional syntax, format, or usage details for any parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Retrieve the full version history') and resources ('fact for an exact entity+key pair'). It distinguishes from siblings by specifying this is for historical data retrieval rather than current state queries or write operations, and explicitly mentions the sibling tool 'iranti_attend' as a prerequisite.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: it states when to use the tool ('to understand how a fact evolved over time'), when not to (by implication, not for current-state queries), and names a specific alternative/prerequisite ('call iranti_attend before this discovery tool'). It also gives context about the kinds of historical changes to examine ('decisions that changed, blockers that were resolved, values that were contested or superseded').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
