Server Details

Tenzro Ledger MCP: wallet, identity, payments, inference, staking, bridges, verification, agents.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: tenzro/tenzro-network
GitHub Stars: 0

Tool Descriptions: Grade B

Average 3.7/5 across 199 of 199 tools scored. Lowest: 2.4/5.

Server Coherence: Grade B

Disambiguation: 3/5

Several tools have overlapping purposes: 'get_balance', 'get_token_balance', and 'token_balance' all retrieve TNZO balances with minor differences. Additionally, 'discover_agents' and 'list_providers' both deal with discovering network entities, which could cause confusion. Overall, most tools are distinct, but the overlaps lower disambiguation.

Naming Consistency: 4/5

Tools consistently use snake_case verb_noun patterns (e.g., 'create_token', 'list_models'). Some use protocol-specific prefixes like 'debridge_' or 'wormhole_', which is internally consistent. A few minor inconsistencies exist (e.g., 'debridge_create_tx' vs. 'create_escrow'), but overall naming is predictable.

Tool Count: 3/5

With 199 tools, the server is very large for an MCP server. While the broad domain coverage (ledger, AI, bridges, governance, etc.) partially justifies the count, there is bloat from redundant catalog/model listing pairs and multiple unload tools. The number feels excessive and could be streamlined.

Completeness: 3/5

The server covers a wide range of functionalities including tokens, NFTs, AI models, bridges, and governance. However, there are notable gaps: missing 'burn_nft', 'update_compliance', and delete operations for several entities (e.g., no 'delete_agent_template', 'delete_app'). Core workflows are present, but the surface is not fully complete.

Available Tools

209 tools
ap2_protocol_info: Grade A

Return AP2 protocol metadata: version, supported mandate types, supported VC formats, and issuer DID methods recognized by this node.

Parameters (JSON Schema): no parameters
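Since the tool takes no arguments, the call itself is trivial; the sketch below shows a hypothetical client-side check over the returned metadata. The key names (version, mandate_types, vc_formats, did_methods) are assumptions inferred from the description — the listing does not publish an output schema.

```python
# Hypothetical: pick out the fields the description says ap2_protocol_info returns.
# Key spellings are assumed; the listing does not publish an output schema.
EXPECTED_KEYS = ("version", "mandate_types", "vc_formats", "did_methods")

def missing_protocol_fields(resp: dict) -> list:
    """Return the expected metadata keys absent from a protocol-info response."""
    return [k for k in EXPECTED_KEYS if k not in resp]

sample = {"version": "0.1", "mandate_types": ["intent", "cart"]}
```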

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Description indicates a read-only operation and lists returned fields, but lacks details on authentication, performance, or side effects. Adequate for a simple query tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with verb 'Return', no extraneous words. Every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no parameters, no output schema), the description fully informs the agent of what the tool returns. No missing context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Tool has zero parameters; input schema coverage is 100%. Description correctly adds no parameter details as none exist. Baseline 4 per rules for 0 parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool returns AP2 protocol metadata, listing specific fields (version, mandate types, VC formats, issuer DID methods). Distinguishes from sibling list tools which list resources or entities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. Usage is implied from description (get protocol metadata), but no exclusions or alternative tool mentions are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ap2_sign_mandate: Grade A

Sign an AP2 mandate (Intent or Cart) with the auth-bound wallet's Ed25519 key, returning a verified Verifiable Digital Credential (VDC). Auth: DPoP+JWT mandatory. Wallet must be Ed25519. signer_did must match the wallet's controller DID.

Parameters (JSON Schema):
- mandate (required): The mandate object — IntentMandate or CartMandate, matching mandate_kind. Auth-bound wallet's Ed25519 key signs the canonical preimage.
- signer_did (required): Signer DID — must match the controller of the auth-bound wallet (principal for intent, agent for cart).
- mandate_kind (required): Mandate kind: 'intent' (principal-signed authorization) or 'cart' (agent-signed commit).
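As a rough sketch of client-side argument assembly (build_sign_mandate_args is a hypothetical helper, not part of the server; the server's own validation remains authoritative), the mandate_kind constraint from the schema can be checked before the call:

```python
# Hypothetical helper: assemble ap2_sign_mandate arguments and enforce the
# documented constraint that mandate_kind is either 'intent' or 'cart'.
def build_sign_mandate_args(mandate: dict, signer_did: str, mandate_kind: str) -> dict:
    if mandate_kind not in ("intent", "cart"):
        raise ValueError("mandate_kind must be 'intent' or 'cart'")
    return {
        "mandate": mandate,            # IntentMandate or CartMandate, matching mandate_kind
        "signer_did": signer_did,      # must match the auth-bound wallet's controller DID
        "mandate_kind": mandate_kind,
    }

args = build_sign_mandate_args({"type": "IntentMandate"}, "did:example:alice", "intent")
```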
Behavior: 4/5

With no annotations, the description effectively conveys key behavioral traits: signing produces a verified VDC, requires specific auth and key type, and enforces DID matching. It does not mention side effects or failure modes, but the constraints are well covered.

Conciseness: 5/5

The description is extremely concise with two sentences plus a terse constraint list. Every sentence is essential and front-loaded. No wasted words.

Completeness: 4/5

Given the tool's complexity (multiple params, no output schema, no annotations), the description covers the main behavioral aspects, auth, and constraints. It could briefly explain what a VDC is or how it is verified, but overall it is sufficient for an agent in context with sibling tools.

Parameters: 3/5

Schema coverage is 100% with good parameter descriptions. The tool description adds extra context like VDC output and auth requirements, but these are not parameter-specific. It does not significantly improve parameter understanding beyond the schema.

Purpose: 5/5

The description clearly states the action ('Sign'), the resource ('AP2 mandate'), the key type ('Ed25519'), and the output ('verified VDC'). It distinguishes itself from sibling tools like ap2_validate_mandate_pair and ap2_verify_mandate by focusing on signing.

Usage Guidelines: 4/5

The description provides explicit prerequisites: DPoP+JWT auth required, wallet must be Ed25519, and signer_did must match controller DID. However, it does not explicitly state when NOT to use this tool compared to alternatives, leaving slight ambiguity for the agent.

ap2_validate_mandate_pair: Grade A

Validate an AP2 Intent+Cart mandate pair for consistency: ensures the cart references the intent, amounts/items match the intent's constraints, and both VDCs verify. When enforce_delegation=true, additionally cross-checks the agent's TDIP DelegationScope against the cart total (TDIP identifies. AP2 authorizes. Tenzro settles).

Parameters (JSON Schema):
- cart_vdc (required): Cart mandate VDC (agent-to-merchant execution) as JSON-LD VC envelope.
- intent_vdc (required): Intent mandate VDC (user-to-agent authorization) as JSON-LD VC envelope.
- enforce_delegation (optional): If true, also enforce the agent's TDIP DelegationScope against the cart total via IdentityRegistry::enforce_operation. Default: false (AP2-only validation).
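A sketch of the argument shape, assuming the parameter names above; build_validate_pair_args is a hypothetical helper, and enforce_delegation defaults to false (AP2-only validation), so it can simply be omitted:

```python
# Hypothetical helper mirroring the ap2_validate_mandate_pair schema.
def build_validate_pair_args(intent_vdc: dict, cart_vdc: dict,
                             enforce_delegation: bool = False) -> dict:
    args = {"intent_vdc": intent_vdc, "cart_vdc": cart_vdc}
    if enforce_delegation:
        # Also cross-checks the agent's TDIP DelegationScope against the cart total.
        args["enforce_delegation"] = True
    return args

default_args = build_validate_pair_args({"proof": {}}, {"proof": {}})
strict_args = build_validate_pair_args({"proof": {}}, {"proof": {}},
                                       enforce_delegation=True)
```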
Behavior: 3/5

With no annotations, the description bears the full burden. It explains the validation steps and optional delegation cross-check, but it omits whether the tool is read-only or has side effects. It also does not disclose authentication needs or rate limits. The description is decent but could be more explicit about non-modification and other behavioral traits.

Conciseness: 5/5

The description is two sentences, front-loaded with the main purpose and action, then adding the optional behavior. Every sentence earns its place with no redundancy. Very concise and well-structured.

Completeness: 3/5

The description covers the protocol roles (TDIP, AP2, Tenzro) and the validation logic, but it lacks any mention of the output format (e.g., returns boolean, error details). Without an output schema, this omission leaves the agent uncertain about the result. Also, behavioral details like whether it modifies state are missing.

Parameters: 4/5

Schema coverage is 100%, so baseline is 3. The description adds value beyond the schema by explaining the purpose of the mandate pair (ensuring cart references intent, amounts match constraints) and the behavior of enforce_delegation in detail. This enriches the parameter meaning.

Purpose: 5/5

The description clearly states the tool validates an AP2 Intent+Cart mandate pair for consistency, specifying what it checks (cart references intent, amounts/items match, VDC verification) and the optional delegation enforcement. It distinguishes from sibling 'ap2_verify_mandate' by focusing on pair validation rather than single mandate verification.

Usage Guidelines: 4/5

The description provides clear context on when to use the tool (for validating a mandate pair) and explains the optional enforce_delegation parameter. However, it does not explicitly state when not to use it or mention alternatives like 'ap2_verify_mandate' for single mandates.

ap2_verify_mandate: Grade A

Verify a single AP2 mandate (Verifiable Digital Credential). Checks the VDC proof, issuer, and schema for Intent, Cart, or Payment mandates per Google's AP2 spec.

Parameters (JSON Schema):
- vdc (required): Verifiable Digital Credential (VDC) mandate object — the full JSON-LD VC envelope with proof.
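Since the vdc parameter must be a full JSON-LD VC envelope with proof, a client might run a cheap local precheck before invoking the tool. This is a hypothetical sketch; only the server performs real cryptographic verification.

```python
# Hypothetical precheck: a VDC envelope should at least carry a proof and a
# credential type before being sent to ap2_verify_mandate.
def looks_like_vdc(obj) -> bool:
    return isinstance(obj, dict) and "proof" in obj and "type" in obj

candidate = {"type": ["VerifiableCredential"],
             "proof": {"type": "Ed25519Signature2020"}}
```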
Behavior: 3/5

With no annotations provided, the description carries the full burden for behavioral disclosure. It states the verification checks (proof, issuer, schema) and references the AP2 spec, but does not explicitly confirm it is read-only, describe potential side effects, or outline error conditions. The term 'Verify' reasonably implies a read-only operation, but more explicit clarity would improve this score.

Conciseness: 5/5

The description is a single, front-loaded sentence of 25 words with no redundant information. Every word contributes to conveying the tool's purpose effectively.

Completeness: 4/5

For a simple tool with one parameter and no output schema, the description adequately explains what it verifies and the standard followed. Missing details like return type or prerequisite validation are minor gaps, but overall it is sufficiently complete for an AI agent to understand and invoke correctly.

Parameters: 3/5

Schema coverage is 100% with a single parameter 'vdc' described as 'the full JSON-LD VC envelope with proof'. The tool description adds no additional explanation or constraints beyond what the schema already provides. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the verb 'Verify', the resource 'single AP2 mandate (Verifiable Digital Credential)', and specifies what is checked (proof, issuer, schema). It also mentions the scope (Intent, Cart, or Payment mandates per Google's AP2 spec) and implicitly distinguishes from sibling tools like ap2_validate_mandate_pair by indicating it handles a single mandate.

Usage Guidelines: 4/5

The description implies use for single mandate verification via the word 'single', which contrasts with the sibling ap2_validate_mandate_pair likely used for pairs. However, it lacks explicit 'when to use' or 'when not to use' guidance, and does not mention alternatives or prerequisites.

assign_task: Grade A

Assign a task to a specific agent on the Tenzro Task Marketplace. Moves task from open to assigned state and notifies the agent. Returns the updated task record.

Parameters (JSON Schema):
- task_id (required): Task ID to assign.
- agent_did (required): DID of the agent being assigned the task.
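A minimal argument-assembly sketch, assuming the two parameters above; the DID-prefix check is a hypothetical client-side guard, not a documented server rule.

```python
def build_assign_task_args(task_id: str, agent_did: str) -> dict:
    # Hypothetical guard: agent_did is described as a DID, so require the prefix.
    if not agent_did.startswith("did:"):
        raise ValueError("agent_did should be a DID (did:...)")
    return {"task_id": task_id, "agent_did": agent_did}

args = build_assign_task_args("task-42", "did:tenzro:agent-7")
```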
Behavior: 3/5

With no annotations, the description must disclose side effects and behavior. It states state change (open to assigned) and notification, which is adequate but minimal. Missing details on required permissions, idempotency, error conditions, or whether the task must be in the open state.

Conciseness: 5/5

Three sentences with no fluff. First sentence states the action and purpose, second explains the state change and side effect, third describes the return value. Front-loaded and efficient.

Completeness: 4/5

For a simple mutation with two parameters and no output schema, the description covers the core purpose, effect, and return. Lacks preconditions or error handling, but overall sufficient for selection and invocation.

Parameters: 3/5

Schema coverage is 100% with clear parameter descriptions. The description does not add meaningful extra meaning beyond the schema; it only repeats 'task' and 'agent'. The mention of 'notifies the agent' weakly ties to agent_did.

Purpose: 5/5

The description clearly states the action (assign), the resource (task), the target (specific agent), the marketplace context, the state change (open to assigned), and the side effect (notifies the agent). This distinguishes it from sibling tools like post_task, complete_task, or delegate_task.

Usage Guidelines: 3/5

The description implies usage context by mentioning the state transition from open to assigned, but it does not explicitly state when to use this tool versus alternatives (e.g., delegate_task) or provide any prerequisites or exclusions.

authorize_crosschain_bridge: Grade A

Authorize a bridge address for ERC-7802 crosschain mint and burn operations. Only authorized bridges can mint/burn tokens for cross-chain transfers. Sets daily mint and burn limits for rate limiting.

Parameters (JSON Schema):
- name (required): Human-readable bridge name (e.g. 'LayerZero V2', 'Chainlink CCIP').
- bridge (required): Hex-encoded bridge address to authorize.
- daily_burn_limit (required): Daily burn limit in base units.
- daily_mint_limit (required): Daily mint limit in base units (e.g. 1000000000000000000000 for 1000 TNZO).
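The daily_mint_limit example (1000000000000000000000 base units for 1000 TNZO) implies 18 decimal places. A hypothetical sketch of converting whole-token limits to base units, assuming that decimal count:

```python
# Assumes 18 decimals for TNZO, inferred from the schema's example
# (1000000000000000000000 base units == 1000 TNZO).
def to_base_units(whole_tokens: int, decimals: int = 18) -> str:
    return str(whole_tokens * 10 ** decimals)

args = {
    "name": "LayerZero V2",
    "bridge": "0x" + "ab" * 20,            # placeholder hex address
    "daily_mint_limit": to_base_units(1000),
    "daily_burn_limit": to_base_units(500),
}
```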
Behavior: 4/5

With no annotations, the description discloses key behavioral aspects: it sets daily limits for rate limiting and emphasizes only authorized bridges can mint/burn. However, it does not mention reversibility or permission requirements.

Conciseness: 5/5

Two sentences, front-loaded with purpose and clear explanation of significance. No redundant information.

Completeness: 4/5

Given no output schema, the description lacks return value information but sufficiently explains the tool's impact. It could mention that this is a one-time setup or that limits are updateable.

Parameters: 3/5

Schema coverage is 100%, so baseline is 3. The description adds minor value by giving an example for daily_mint_limit, but otherwise repeats schema descriptions.

Purpose: 5/5

The description clearly states the tool authorizes a bridge address for ERC-7802 crosschain mint and burn operations, which is specific and distinguishes it from sibling bridge tools that perform transfers or quotes.

Usage Guidelines: 3/5

It implies that authorization must occur before using crosschain_mint or crosschain_burn, but does not explicitly state when to use or provide exclusions or alternative tools. No mention of prerequisites like admin privileges.

authorize_session: Grade A

Authorize a temporary session for a wallet with specific allowed operations and a time limit. Returns a session ID and expiry timestamp.

Parameters (JSON Schema):
- wallet_id (required): Wallet ID (UUID) to create a session for.
- operations (required): Allowed operations during the session (e.g. ['transfer', 'stake', 'inference']).
- duration_secs (required): Session duration in seconds (e.g. 3600 for 1 hour).
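Because duration_secs is plain seconds, a client converting from hours must multiply by 3600. A hypothetical sketch (build_session_args is not part of the server):

```python
# Hypothetical helper for authorize_session arguments; duration is given in
# seconds per the schema (e.g. 3600 for a 1-hour session).
def build_session_args(wallet_id: str, operations: list, hours: float = 1.0) -> dict:
    return {
        "wallet_id": wallet_id,
        "operations": operations,           # e.g. ['transfer', 'stake', 'inference']
        "duration_secs": int(hours * 3600),
    }

args = build_session_args("123e4567-e89b-12d3-a456-426614174000",
                          ["transfer"], hours=2)
```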
Behavior: 2/5

No annotations provided, so description carries full burden. It discloses creation of a session (a state change) but lacks details on side effects (e.g., invalidating previous sessions), authentication requirements, or idempotency. Return format of session ID/expiry is vague.

Conciseness: 5/5

Description is concise: two sentences. First sentence states purpose and scope, second sentence states return value. No fluff or redundancy.

Completeness: 3/5

Given no output schema, description mentions session ID and expiry but lacks details on formats or usage. No prerequisites or limits stated. Sibling revoke_session is complementary but not referenced. Adequate but incomplete for a security-sensitive action.

Parameters: 3/5

Schema coverage is 100% with clear descriptions for each parameter. The tool description adds minimal extra value (e.g., 'specific allowed operations' and 'time limit'), but since schema already documents parameters well, baseline 3 is appropriate.

Purpose: 5/5

Description clearly states verb 'Authorize', resource 'temporary session for a wallet', and specifics: allowed operations and time limit. It also mentions return value (session ID and expiry), distinguishing it from sibling revoke_session.

Usage Guidelines: 3/5

Usage is implied by the description (create session with restricted ops), but there is no explicit guidance on when to use versus alternatives like revoke_session or create_mpc_wallet. No when-not-to-use conditions provided.

bridge_quote: Grade A

Get a bridge quote without executing the transfer. Returns estimated output amount, fees, estimated time, and recommended route. Useful for previewing costs before committing to a bridge transfer.

Parameters (JSON Schema):
- token (required): Token to bridge (e.g. 'TNZO', 'USDC', 'ETH').
- amount (required): Amount in base units.
- protocol (optional): Preferred bridge protocol: 'layerzero', 'ccip', 'debridge' (auto-selects best route if omitted).
- to_chain (required): Destination chain.
- from_chain (required): Source chain (e.g. 'tenzro', 'ethereum', 'solana', 'base').
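Since protocol is optional and the server auto-selects a route when it is absent, a client can leave the key out entirely. A hypothetical argument builder (the allowed-protocol list comes from the schema above):

```python
ALLOWED_PROTOCOLS = ("layerzero", "ccip", "debridge")

def build_quote_args(token, amount, from_chain, to_chain, protocol=None):
    args = {"token": token, "amount": amount,
            "from_chain": from_chain, "to_chain": to_chain}
    if protocol is not None:
        if protocol not in ALLOWED_PROTOCOLS:
            raise ValueError(f"protocol must be one of {ALLOWED_PROTOCOLS}")
        args["protocol"] = protocol  # otherwise the server auto-selects the best route
    return args

auto = build_quote_args("TNZO", "1000000000000000000", "tenzro", "ethereum")
pinned = build_quote_args("USDC", "1000000", "ethereum", "base", protocol="ccip")
```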
Behavior: 3/5

With no annotations, the description must carry full burden. It states the tool returns estimates and does not execute, but lacks explicit mention of read-only nature, auth requirements, or any side effects. Partially transparent but could be more comprehensive.

Conciseness: 5/5

Two sentences, front-loaded with the verb 'Get', and no wasted words. Every sentence adds value.

Completeness: 4/5

For a moderate-complexity tool with no output schema, the description covers purpose, return values, and usage guidance. Missing some edge cases but adequate for an AI agent.

Parameters: 3/5

Input schema has 100% coverage with descriptions for all parameters. The tool description adds no extra meaning beyond what schema provides, so baseline of 3 is appropriate.

Purpose: 4/5

Clearly states it gets a bridge quote without executing transfer, and lists return fields. Differentiates from sibling tools like bridge_tokens by emphasizing 'without executing', but does not explicitly name the sibling tool for contrast.

Usage Guidelines: 4/5

Explicitly says 'useful for previewing costs before committing to a bridge transfer', providing clear when-to-use context. Though it does not list when not to use, the phrase 'without executing' implies it is not for actual transfer.

bridge_tokens: Grade B

Bridge tokens between blockchains. Supports routes between Tenzro, Ethereum, Solana, and Base via LayerZero, Chainlink CCIP, and deBridge adapters.

Parameters (JSON Schema):
- asset (required): Asset to bridge (e.g. 'TNZO', 'USDC', 'ETH').
- amount (required): Amount to bridge in base units.
- sender (required): Hex-encoded sender address on source chain.
- recipient (required): Hex-encoded recipient address on destination chain.
- dest_chain (required): Destination chain.
- source_chain (required): Source chain (e.g. 'tenzro', 'ethereum', 'solana', 'base').
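A hypothetical sketch of assembling the six required arguments; the same-chain and hex-prefix guards are client-side assumptions, since the listing does not document server-side validation:

```python
# Chains named in the description; membership is not enforced here because
# the server may support additional routes.
KNOWN_CHAINS = {"tenzro", "ethereum", "solana", "base"}

def build_bridge_args(asset, amount, sender, recipient, source_chain, dest_chain):
    if source_chain == dest_chain:
        raise ValueError("source and destination chains must differ")
    if not (sender.startswith("0x") and recipient.startswith("0x")):
        raise ValueError("sender and recipient must be hex-encoded (0x...)")
    return {
        "asset": asset, "amount": amount,
        "sender": sender, "recipient": recipient,
        "source_chain": source_chain, "dest_chain": dest_chain,
    }

args = build_bridge_args("USDC", "1000000", "0x" + "11" * 20,
                         "0x" + "22" * 20, "ethereum", "base")
```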
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must convey behavioral traits. Mentions adapters but lacks disclosure on fees, slippage, finality, failure modes, or destructive potential. Agent cannot assess risks or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: first states core action, second lists supported chains and protocols. No fluff, every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Missing output schema and no description of return value (transaction hash, status, error). Complex bridging operation with no guidance on expected response or failure handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema descriptions cover all 6 parameters (100% coverage). Description does not add extra meaning beyond what schema already provides, so baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'Bridge', resource 'tokens', and specifies supported chains (Tenzro, Ethereum, Solana, Base) and adapters (LayerZero, Chainlink CCIP, deBridge). Distinguishes from sibling bridge tools like bridge_quote and bridge_with_hook.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage for cross-chain token transfers, but no explicit guidance on when to use this tool over siblings like wormhole_bridge, bridge_with_hook, or debridge_create_tx. Does not provide when-not or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bridge_with_hook (grade A)

Execute a bridge transfer with a deBridge post-fulfillment hook. After the tokens arrive on the destination chain, the hook_target contract is called with hook_calldata. Enables composable cross-chain operations (e.g., bridge + swap, bridge + stake).

Parameters (JSON Schema)

token (required): Token to bridge (e.g. 'USDC', 'ETH')
amount (required): Amount in base units
sender (required): Hex-encoded sender address on source chain
to_chain (required): Destination chain
from_chain (required): Source chain (e.g. 'ethereum', 'solana')
hook_target (required): Hex-encoded target contract address for the post-fulfillment hook on destination chain
hook_calldata (required): Hex-encoded calldata to execute on hook_target after bridge fulfillment
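A hypothetical client-side validation sketch for bridge_with_hook, using the seven required parameters listed above. The checker itself is illustrative, not part of the server; the hex-prefix rule simply restates the "Hex-encoded" wording from the schema descriptions.

```python
# Required parameters per the bridge_with_hook schema above.
REQUIRED = {"token", "amount", "sender", "to_chain", "from_chain",
            "hook_target", "hook_calldata"}

def validate_bridge_with_hook(args):
    """Reject calls that are missing required fields or non-hex hook data."""
    missing = sorted(REQUIRED - args.keys())
    if missing:
        raise ValueError(f"missing required parameters: {missing}")
    for key in ("sender", "hook_target", "hook_calldata"):
        if not args[key].startswith("0x"):
            raise ValueError(f"{key} must be hex-encoded")
    return args

args = validate_bridge_with_hook({
    "token": "USDC",
    "amount": "1000000",             # base units (USDC has 6 decimals)
    "sender": "0x" + "ab" * 20,
    "from_chain": "ethereum",
    "to_chain": "base",
    "hook_target": "0x" + "cd" * 20, # contract invoked after fulfillment
    "hook_calldata": "0x00",         # calldata executed on hook_target
})
```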
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears the burden of behavioral disclosure. It explains the hook execution sequence (after arrival, hook_target called with hook_calldata) and notes composability. No contradictions, but missing details on potential failures or prerequisites.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two succinct sentences, each adding value: first states the core action, second explains the hook mechanism and purpose. No wasted words, front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the tool's purpose and hook behavior well but lacks details on return values, error handling, or preconditions. Given the 7 parameters and no output schema, it is reasonably complete but could be richer.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds context about the workflow but does not significantly enhance parameter understanding beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Execute a bridge transfer' and the resource 'deBridge post-fulfillment hook', distinguishing it from sibling tools like bridge_tokens by emphasizing the hook mechanism and composability.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a post-fulfillment hook is needed, with examples of composable operations. However, it does not explicitly state when not to use this tool or suggest alternatives like simple bridging tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cancel_task (grade A)

Cancel a pending or active task on the Tenzro Task Marketplace. Only the original requester can cancel. Refunds any escrowed TNZO. Returns cancellation confirmation.

Parameters (JSON Schema)

task_id (required): Task ID to cancel
requester_address (required): Hex-encoded address of the task requester (for authorization)
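A small illustrative guard mirroring the rule in the description above: only the original requester may cancel, and only pending or active tasks qualify. The task record and its field names here are made up for the example.

```python
def can_cancel(task, requester_address):
    """Client-side pre-flight check before calling cancel_task."""
    return (task["status"] in {"pending", "active"}
            and task["requester"] == requester_address)

task = {"id": "task-1", "status": "pending", "requester": "0x" + "aa" * 20}
assert can_cancel(task, "0x" + "aa" * 20)
assert not can_cancel(task, "0x" + "bb" * 20)  # non-requesters cannot cancel
```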
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses cancellation scope, refund, and confirmation but omits details on idempotency, error states, or side effects. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, front-loaded with action and resource, no extraneous information. Each sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple cancellation with 2 parameters and no output schema, the description covers core aspects. Could improve by mentioning error conditions or state transitions, but overall sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers both parameters with clear descriptions. The description adds value by explaining requester_address is for authorization, reinforcing the 'original requester' constraint.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool cancels a pending/active task on a specific marketplace, with details on who can cancel (original requester) and what happens (refund, confirmation). This distinguishes it from siblings like complete_task or assign_task.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Specifies that only the original requester can cancel, providing a key usage condition. However, it does not explicitly mention alternatives or when not to use it, missing some guidance relative to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cct_get_pool (grade A)

Get a single TNZO CCT pool by chain name. Returns chain_id, chain_selector, pool_address, token_address, pool_type, contract_name, capacities, refill_rate.

Parameters (JSON Schema)

chain (required): Chain name (e.g. ethereum, base, arbitrum, optimism, solana)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Straightforward read operation; no annotations but description discloses return fields. No side effects or prerequisites needed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with essential information and return fields. No waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Simple get tool; description lists all output fields, sufficient for usage without output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage 100% with parameter description; tool description does not add extra meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Get' and resource 'single TNZO CCT pool by chain name'. Distinguishes from 'cct_list_pools' sibling by specifying 'single'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when/when-not/alternatives. Implied by sibling name but not stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cct_list_pools (grade A)

List all TNZO CCT pools in the canonical mainnet registry (Ethereum LockRelease; Base/Arbitrum/Optimism/Solana BurnMint).

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It indicates a read-only operation but does not disclose response format, pagination, or rate limits. Basic purpose is transparent but lacks details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, front-loaded sentence with zero waste. Every word adds meaning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and 0 parameters, description is adequate for a simple list tool. Specifying the chains is helpful. Could mention return format, but not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters, so baseline is 4. Description adds value by specifying the chains and registry context, which is not in the schema. Schema coverage is 100% but irrelevant.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it lists all TNZO CCT pools, specifying verb 'List' and resource 'pools' with scope (canonical mainnet registry, specific chains). It distinguishes from sibling tool 'cct_get_pool', which returns a single pool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies use when needing a full list of pools. Although it doesn't explicitly exclude alternatives, the sibling name 'cct_get_pool' suggests differentiation. No explicit when-not guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

chat_completion (grade A)

Send a chat completion request to a served AI model on the Tenzro network. Use list_models or list_model_endpoints to discover available models.

Parameters (JSON Schema)

model (required): Model ID or service instance UUID (alias: model_id)
message (required): The user message to send
max_tokens (optional): Maximum tokens to generate (default 512)
temperature (optional): Temperature (0.0-2.0, default 0.7)
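A sketch of how a client might apply the documented defaults before calling chat_completion (max_tokens defaults to 512, temperature to 0.7 within 0.0-2.0). The helper and the model ID are illustrative assumptions, not part of the server API.

```python
def chat_completion_args(model, message, max_tokens=512, temperature=0.7):
    """Build chat_completion arguments with the schema's documented defaults."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be in [0.0, 2.0]")
    return {"model": model, "message": message,
            "max_tokens": max_tokens, "temperature": temperature}

# "example-model" is a placeholder; discover real IDs via list_models.
args = chat_completion_args("example-model", "Summarize the ledger state.")
```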
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries full burden. The description is minimal, lacking details on behavioral traits such as idempotency, response handling, error cases, rate limits, or authentication requirements. Essential context for an API call is missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (one sentence plus a suggestion) and front-loaded with the core action. Every word serves a purpose, though it sacrifices completeness. It's efficient but slightly underspecified.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having 4 parameters and no output schema, the description does not explain return values, synchronous vs streaming behavior, or error scenarios. For an AI completion tool, key operational context is missing, making it incomplete for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with each parameter already well-documented in the schema. The description adds no additional parameter information beyond 'model' and 'message' context. Baseline 3 is appropriate as schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('send a chat completion request') and the resource ('served AI model on the Tenzro network'). It also points to sibling tools (list_models, list_model_endpoints) for discovery, distinguishing it from other tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises using list_models or list_model_endpoints to discover available models, providing clear guidance on prerequisite steps. However, it does not mention when to avoid this tool or compare with alternatives like cortex_reason.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_compliance (grade A)

Check whether a token transfer would be compliant with the registered ERC-3643 compliance rules. Returns compliant/non-compliant status with specific violation details. Does not execute the transfer.

Parameters (JSON Schema)

to (required): Hex-encoded recipient address
from (required): Hex-encoded sender address
amount (required): Amount in base units
token_id (required): Token ID (hex, 32 bytes) or symbol
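Since the tool is an explicit pre-check ("does not execute the transfer"), the natural usage pattern is check-then-send. The sketch below assumes a generic `call_tool` client helper and invents the `compliant`/`violations` response fields, since no output schema is documented.

```python
def guarded_transfer(call_tool, transfer):
    """Only proceed with a transfer if check_compliance approves it."""
    result = call_tool("check_compliance", transfer)
    if not result.get("compliant"):
        return {"sent": False, "violations": result.get("violations", [])}
    return {"sent": True}

# Stub client for demonstration: flags transfers above an arbitrary limit.
def stub_call_tool(name, args):
    ok = int(args["amount"]) <= 10_000
    return {"compliant": ok,
            "violations": [] if ok else ["max transfer exceeded"]}

outcome = guarded_transfer(stub_call_tool, {
    "from": "0x" + "01" * 20, "to": "0x" + "02" * 20,
    "amount": "50000", "token_id": "RWA1",
})
```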
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It states the tool is read-only (does not execute) and what it returns. Could be enhanced by mentioning prerequisites (e.g., registered compliance rules).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, front-loaded with purpose. No unnecessary words. Efficiently conveys key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description covers purpose, behavior, and return type. Lacks error conditions or prerequisites, but sufficient for a straightforward compliance check tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with clear descriptions. Description adds context about ERC-3643 compliance, which ties parameters to that domain. Slightly above baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the action (check compliance), the specific context (ERC-3643 token transfer), and the return value (status with violation details). It distinguishes from related tools like register_compliance and send_transaction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'does not execute the transfer', clarifying it is a pre-check tool. Provides clear context for when to use, but does not explicitly state when not to use or list alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

close_payment_channel (grade A)

Close a micropayment channel with the final balance. Requires sender signature on the final state. Settles remaining balance on-chain and returns any unused deposit.

Parameters (JSON Schema)

channel_id (required): Payment channel ID to close
final_balance_tnzo (required): Final balance owed to recipient in TNZO
sender_signature_hex (required): Hex-encoded signature from the channel sender authorizing the final balance
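A hypothetical construction of the close_payment_channel arguments. The signature value below is a placeholder; in practice it would be produced by the sender's key over the channel's final state, a process this excerpt does not document.

```python
def close_channel_args(channel_id, final_balance_tnzo, sender_signature_hex):
    """Assemble close_payment_channel arguments, enforcing hex encoding."""
    if not sender_signature_hex.startswith("0x"):
        raise ValueError("sender_signature_hex must be hex-encoded")
    return {"channel_id": channel_id,
            "final_balance_tnzo": final_balance_tnzo,
            "sender_signature_hex": sender_signature_hex}

# Placeholder 65-byte signature; real signing is out of scope here.
args = close_channel_args("chan-42", 1500, "0x" + "00" * 65)
```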
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses key behaviors: it closes a channel, settles balance on-chain, and returns unused deposit. However, it misses potential side effects like irreversibility or what happens if the channel is already closed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise—two sentences that convey all essential information without any fluff. It is front-loaded with the primary action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 3 parameters and no output schema, the description covers the action but lacks return value details (e.g., what exactly 'returns any unused deposit' means) and does not mention preconditions like the channel must be open. Could be more complete for a mutation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds minimal extra meaning beyond the parameter descriptions; it mentions 'final balance' and 'sender signature' but these are already clear from the schema. No additional semantics are provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action (closing a micropayment channel), the resource, and key details like requiring sender signature and settling on-chain. However, it does not explicitly differentiate from sibling tools like 'open_payment_channel' or 'settle_payment', which would enhance clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a prerequisite (requires sender signature) but lacks guidance on when to use this tool versus alternatives. No explicit when-to-use or when-not-to-use conditions are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

complete_task (grade A)

Mark a task as completed with a result payload. Optionally attach a proof of completion. Triggers settlement payment to the completing agent. Returns the completion receipt.

Parameters (JSON Schema)

result (required): JSON result payload from task completion
task_id (required): Task ID to mark as complete
agent_did (required): DID of the agent that completed the task
proof_hex (optional): Optional proof of completion (hex-encoded)
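A sketch showing how the optional proof_hex parameter might be included only when available. The parameter names come from the table above; the helper and the DID value are illustrative placeholders.

```python
def complete_task_args(task_id, agent_did, result, proof_hex=None):
    """Build complete_task arguments, attaching the optional proof if given."""
    args = {"task_id": task_id, "agent_did": agent_did, "result": result}
    if proof_hex is not None:  # proof of completion is optional per the schema
        args["proof_hex"] = proof_hex
    return args

# "did:tenzro:agent1" is a made-up DID for illustration.
without_proof = complete_task_args("task-7", "did:tenzro:agent1", {"ok": True})
with_proof = complete_task_args("task-7", "did:tenzro:agent1", {"ok": True},
                                proof_hex="0xdeadbeef")
```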
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description carries full burden. Discloses side effect (triggers settlement payment), optional proof, return of receipt. Doesn't cover error conditions or security implications but is adequate for basic usage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences with no fluff. Each sentence adds distinct value: purpose, optional parameter, side effect with return. Efficiently communicates core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description mentions 'completion receipt'. Lacks error scenarios, authentication requirements, or idempotency. Good coverage of essential aspects for a mutation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions. Description adds minimal semantic value beyond schema (e.g., 'JSON result payload', 'hex-encoded'). No deeper explanation of result structure or proof format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'mark as completed' and resource 'task', includes result payload, optional proof, payment trigger, and receipt. Distinguishes from sibling task tools like assign_task, cancel_task, delegate_task.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage context (completing a task) but lacks explicit when-to-use vs alternatives, prerequisites, or when not to use. No guidance on required state of task.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cortex_reason (grade A)

Invoke Tenzro Cortex recurrent-depth reasoning. Executes a recurrent-depth transformer (OpenMythos-style) through a registered Cortex worker, charging TNZO based on tokens_in, tokens_out, and loops_used. Returns the reasoning output along with a signed CortexReceipt binding input/output commitments, weights hash, runtime hash, loops_used, and worker DID. Use tier='fast|standard|deep|institutional' to select the reasoning depth budget. Positioning: Cortex reasons. Praecise governs. Tenzro settles. (Praecise is an open AI governance framework by Ipnops — integrated with, but not owned by, Tenzro.)

Parameters (JSON Schema)

tier (optional): Reasoning tier: 'fast' (2-4 loops), 'standard' (8), 'deep' (16-32), 'institutional' (TEE + optional ZK). Default 'standard'
input (required): Input prompt / problem statement
model_id (required): Cortex model ID (e.g. 'mythos-3b'). Must be registered via tenzro_registerCortexWorker
max_loops (optional): Maximum recurrent loops (overrides tier)
min_loops (optional): Minimum recurrent loops (overrides tier)
requester (optional): Caller address (20- or 32-byte hex)
attestation (optional): Attestation requirement: 'none', 'tee', 'tee_and_zk'. Default inferred from tier
max_cost_tnzo (optional): Maximum cost in TNZO base units. Rejects if pricing exceeds
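The tier-to-loop budgets below restate the schema's own description ('fast' 2-4 loops, 'standard' 8, 'deep' 16-32), with max_loops/min_loops overriding the tier as documented. The resolver is a hypothetical client-side helper; 'institutional' is omitted because the excerpt gives it no loop count, only TEE/ZK attestation.

```python
# Loop budgets quoted from the tier parameter description above.
TIER_LOOPS = {
    "fast": (2, 4),
    "standard": (8, 8),
    "deep": (16, 32),
}

def resolve_loop_budget(tier="standard", min_loops=None, max_loops=None):
    """Return (min, max) recurrent loops; explicit bounds override the tier."""
    lo, hi = TIER_LOOPS.get(tier, TIER_LOOPS["standard"])
    if min_loops is not None:
        lo = min_loops
    if max_loops is not None:
        hi = max_loops
    return lo, hi

assert resolve_loop_budget() == (8, 8)
assert resolve_loop_budget("deep", max_loops=24) == (16, 24)
```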
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description explains execution behavior: it charges TNZO based on token counts and loops, returns a signed receipt with commitments and hashes. However, it does not disclose whether the operation is read-only or destructive, nor any authorization or rate limit details. The cost and return format are well-covered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (4 sentences plus a positioning line) and front-loaded with the primary action. The positioning line is somewhat meta but non-essential. Every sentence adds value, though the structure could be slightly improved.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (8 parameters, no output schema), the description explains the return value (CortexReceipt) but not its full structure. It lacks mention of prerequisites like model registration (only in schema), error conditions, or lifecycle details. This leaves some gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with descriptions for all 8 parameters. The tool description adds extra context beyond schema: it defines the tier options with loop estimates and states that max_loops/min_loops override tier. This enriches the semantic understanding of the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'Invoke' and resource 'Tenzro Cortex recurrent-depth reasoning', specifying it executes a recurrent-depth transformer through a registered worker. It differentiates from siblings by emphasizing the unique positioning of Cortex versus Praecise and Tenzro, and notes the charging mechanism and receipt return.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides guidance on tier selection ("Use tier='fast|standard|deep|institutional' to select the reasoning depth budget") but does not explicitly state when to use this tool versus alternatives like chat_completion or other reasoning tools. No when-not-to-use or exclusion criteria are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_escrow (grade B)

Create an on-chain escrow via a signed CreateEscrow transaction. Funds are locked at a deterministic vault address derived from the escrow_id; only the original payer can later release or refund.

Parameters (JSON Schema)

payee (required): Hex-encoded payee address (receives funds on release)
payer (required): Hex-encoded payer address (must match the signing key)
amount_tnzo (required): Amount in TNZO to hold in escrow
timeout_secs (optional): Timeout duration in seconds (for timeout-based release)
release_condition (required): Release condition: 'provider_signature' | 'consumer_signature' | 'both_signatures' | 'verifier_signature' | 'timeout' | 'custom'
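The allowed release_condition values are quoted from the schema row above; the validator is an illustrative client-side guard, not server behavior. The rule that a 'timeout' condition needs timeout_secs is an assumption (the schema marks timeout_secs optional but describes it as "for timeout-based release").

```python
# Enum values quoted from the release_condition schema description.
RELEASE_CONDITIONS = {
    "provider_signature", "consumer_signature", "both_signatures",
    "verifier_signature", "timeout", "custom",
}

def create_escrow_args(payer, payee, amount_tnzo, release_condition,
                       timeout_secs=None):
    """Build create_escrow arguments with basic enum and pairing checks."""
    if release_condition not in RELEASE_CONDITIONS:
        raise ValueError(f"invalid release_condition: {release_condition}")
    if release_condition == "timeout" and timeout_secs is None:
        # Assumed pairing: a timeout-based release needs a duration.
        raise ValueError("timeout release requires timeout_secs")
    args = {"payer": payer, "payee": payee, "amount_tnzo": amount_tnzo,
            "release_condition": release_condition}
    if timeout_secs is not None:
        args["timeout_secs"] = timeout_secs
    return args

args = create_escrow_args("0x" + "aa" * 20, "0x" + "bb" * 20, 1000,
                          "timeout", timeout_secs=3600)
```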
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses that funds are locked and only the payer can release/refund, but omits important details like signing requirements, blockchain fees, or irreversibility. The mention of a signed transaction implies external signing but doesn't explain the process.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long with no superfluous words. It front-loads the main action and adds key behavioral details in the second sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description should explain return values (e.g., transaction hash), side effects, and prerequisites. It only mentions a signed transaction but does not clarify what the tool returns or how signing is handled, leaving significant gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, and the description adds minimal extra meaning beyond the schema. It references 'escrow_id', which is not in the schema, potentially causing confusion. A baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the verb 'create' and specifies the resource 'on-chain escrow' with clear details about the deterministic vault address and payer control. This distinguishes it from sibling tools like refund_escrow and release_escrow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives (e.g., when to use release_escrow or refund_escrow). The description lacks context about prerequisites or conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_mpc_wallet (A)

Create a new MPC threshold wallet with configurable threshold and share count. Default is 2-of-3. Returns the wallet ID, address, and key share metadata.

Parameters (JSON Schema)
Name | Required | Description
key_type | No | Key type: 'ed25519' (default) or 'secp256k1'
threshold | No | Number of shares required to sign (e.g. 2)
total_shares | No | Total number of key shares (e.g. 3)
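A sketch of the arguments for the documented default configuration, with a plausible client-side constraint check. The m <= n constraint is an assumption about threshold schemes in general, not a documented server rule.

```python
# Hypothetical create_mpc_wallet arguments for a 2-of-3 wallet (the documented default).
mpc_args = {"key_type": "ed25519", "threshold": 2, "total_shares": 3}

# Assumed sanity check: an m-of-n threshold scheme needs 1 <= m <= n.
assert 1 <= mpc_args["threshold"] <= mpc_args["total_shares"]
```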
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must bear the full burden. It states it creates a wallet and returns metadata, but does not disclose side effects (e.g., whether it overwrites existing wallets), required permissions, or error conditions. This is a notable gap for a creation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no extraneous words. Essential information (purpose, default, return values) is front-loaded, achieving high information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description mentions return values (wallet ID, address, key share metadata) but lacks details on error handling, idempotency, or prerequisites. For a creation tool with 3 optional parameters and no output schema, additional context would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds context by mentioning the default 2-of-3 configuration, which relates to 'threshold' and 'total_shares'. However, it does not elaborate on 'key_type' or any constraints beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates an MPC threshold wallet with configurable threshold and share count, defaulting to 2-of-3, and lists return values (wallet ID, address, key share metadata). This differentiates it from sibling tools like 'create_wallet' (likely single-key) and other creation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies this tool is for creating an MPC wallet, it does not explicitly state when to use it versus alternatives ('create_wallet', 'create_user_wallet'). No when-not-to-use or prerequisite conditions are mentioned, leaving the agent to infer usage from context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_nft_collection (A)

Create a new NFT collection on the Tenzro ledger. Supports ERC-721 (unique tokens) and ERC-1155 (semi-fungible tokens). Returns the collection ID and deployed address.

Parameters (JSON Schema)
Name | Required | Description
name | Yes | Collection name (e.g. 'Tenzro Validators')
symbol | Yes | Collection symbol (e.g. 'TVAL')
creator | Yes | Hex-encoded creator address
base_uri | No | Optional base URI for token metadata (e.g. 'https://metadata.tenzro.com/collection/')
standard | Yes | NFT standard: 'erc721' or 'erc1155'
description | No | Optional collection description
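The table above maps onto an argument payload like the following sketch; the creator address and names are hypothetical, and the values for 'standard' come from the schema's enum.

```python
# Hypothetical create_nft_collection arguments (names, address, and URI are placeholders).
collection_args = {
    "name": "Tenzro Validators",
    "symbol": "TVAL",
    "creator": "ef" * 20,   # hex-encoded creator address (placeholder)
    "standard": "erc721",   # 'erc721' for unique tokens, 'erc1155' for semi-fungible
    "base_uri": "https://metadata.tenzro.com/collection/",  # optional metadata root
}
assert collection_args["standard"] in {"erc721", "erc1155"}
```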
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It confirms a write operation and return values, but lacks details on irreversibility, required permissions, failure modes, or effects on the ledger. Minimal behavioral context beyond the obvious.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no filler, all essential information (action, standards, return values) included efficiently. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 6-parameter write tool with no output schema, the description mentions the return values but lacks information on error handling, validation rules (e.g., unique name), or how the standard affects behavior. Adequate but could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters well. The description only adds 'Supports ERC-721 and ERC-1155', which is implicit in the 'standard' parameter. Baseline 3 is appropriate as the description adds little beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (create NFT collection), the platform (Tenzro ledger), supported standards (ERC-721/1155), and return value (collection ID and deployed address). This distinguishes it from sibling tools like list_nft_collections or mint_nft.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for creating collections but does not provide explicit guidance on when to use vs alternatives, prerequisites (e.g., need a creator address), or constraints like duplicate names. No exclusions or contrasts with similar tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_payment_challenge (A)

Create a payment challenge for a protected resource. Supports five protocols:

  • 'mpp' (Machine Payments Protocol): Session-based streaming payments, ideal for per-token AI inference billing

  • 'x402' (Coinbase HTTP 402): Stateless one-shot payments, ideal for API calls and data downloads

  • 'visa-tap' (Visa Trusted Agent Protocol): RFC 9421 agent-verified payments for agentic commerce

  • 'mastercard-agent-pay' (Mastercard Agent Pay): KYA-verified payments with agentic tokens

  • 'native': Direct TNZO transfer on the Tenzro ledger

Parameters (JSON Schema)
Name | Required | Description
asset | Yes | Payment asset: 'USDC', 'USDT', or 'TNZO'
amount | Yes | Payment amount in smallest unit (e.g. 100 = 0.000100 USDC for x402, or TNZO base units for native)
protocol | Yes | Payment protocol: 'mpp' (Machine Payments Protocol, session-based streaming), 'x402' (Coinbase HTTP 402, stateless one-shot), or 'native' (direct TNZO transfer)
resource | Yes | Resource URL or identifier being paid for (e.g. /api/inference, /api/data/query)
recipient | Yes | Hex-encoded recipient address
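The "ideal for" hints in the description suggest a simple protocol-selection rule. The mapping below is a sketch inferred from those hints, not server logic, and the use-case labels and recipient address are hypothetical.

```python
# Assumed mapping from coarse use case to payment protocol, per the description's hints.
def pick_protocol(use_case: str) -> str:
    """Map a use-case label to a create_payment_challenge protocol name."""
    mapping = {
        "streaming_inference": "mpp",   # session-based, per-token billing
        "one_shot_api_call": "x402",    # stateless HTTP 402 payment
        "direct_transfer": "native",    # plain TNZO transfer on the ledger
    }
    return mapping[use_case]

challenge_args = {
    "protocol": pick_protocol("one_shot_api_call"),
    "asset": "USDC",
    "amount": 100,             # smallest unit: 100 = 0.000100 USDC for x402
    "resource": "/api/inference",
    "recipient": "12" * 20,    # hex-encoded recipient (placeholder)
}
```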
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden of behavioral disclosure. The description only names the action and protocols; it does not mention side effects (e.g., whether funds are committed, what a challenge entails), authentication requirements, idempotency, or error conditions. A creation tool should disclose more behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is a single, focused paragraph with a front-loaded purpose statement and a bullet-like list of protocols. Every sentence provides essential information without waste. It is appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 5 required parameters and no output schema. The description adequately explains the protocols and their use cases, which is critical given the variety. However, it lacks information about return values (the challenge), error conditions, prerequisites (e.g., must the recipient be registered?), and any side effects. For a creation tool, more completeness is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds value by elaborating on the 'protocol' parameter, explaining the meaning and ideal use for each of the five protocols, which helps the agent select the correct value. This extra context justifies a 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

First sentence states specific verb+resource ('Create a payment challenge for a protected resource'), and the list of five protocols with use cases clearly distinguishes the tool from siblings like open_payment_channel or verify_payment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context for each protocol's ideal use (e.g., 'ideal for per-token AI inference billing', 'ideal for API calls and data downloads'), which implies when to use each variant. However, it does not explicitly state when to use this tool versus other payment-related tools (e.g., open_payment_channel, verify_payment), nor does it include 'when not to use' guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_proposal (A)

Create a new governance proposal on the Tenzro Network. Requires a minimum staked balance to propose. Returns the new proposal ID and initial status.

Parameters (JSON Schema)
Name | Required | Description
title | Yes | Proposal title
payload | No | Optional JSON payload for the proposal (e.g. parameter changes)
description | Yes | Detailed description of the proposal
proposal_type | Yes | Proposal type: 'parameter_change', 'treasury', 'upgrade', 'text'
proposer_address | Yes | Hex-encoded proposer address
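A sketch of a 'parameter_change' proposal payload. Whether 'payload' is passed as a JSON string or a nested object is not specified; a JSON string is assumed here, and the title, parameter name, and address are hypothetical.

```python
import json

# Hypothetical create_proposal arguments for a parameter-change proposal.
proposal_args = {
    "title": "Raise minimum validator stake",
    "description": "Increase min_stake to harden the validator set.",
    "proposal_type": "parameter_change",            # or 'treasury', 'upgrade', 'text'
    "proposer_address": "34" * 20,                  # hex-encoded proposer (placeholder)
    "payload": json.dumps({"min_stake": "50000"}),  # optional JSON payload (assumed string form)
}
assert proposal_args["proposal_type"] in {"parameter_change", "treasury", "upgrade", "text"}
```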
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It mentions the pre-stake requirement and return value, but lacks details on side effects, permissions, error handling, or transaction costs. This is adequate but not comprehensive for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief (two sentences) and front-loaded with the primary purpose and key condition. Every word adds value without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description appropriately mentions the return value (proposal ID and status). It also notes the staked balance requirement. However, it could be more complete by describing error conditions or how to meet the stake requirement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are already well-documented. The description adds no additional semantic detail about individual parameters, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (create) and the resource (governance proposal on Tenzro Network), and implies a specific outcome (returns ID and status). It distinguishes from sibling tools like list_proposals and vote_on_proposal by specifying creation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes a key prerequisite (minimum staked balance), guiding the agent when to use the tool. However, it does not explicitly state when not to use it or name alternatives, leaving some ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_swarm (B)

Create a swarm of coordinated agents under an orchestrator on the Tenzro Network. Each member spec spawns one child agent. Tasks can be dispatched to all members in parallel or sequentially.

Parameters (JSON Schema)
Name | Required | Description
members | Yes | JSON array of member specs: [{"name":"analyst","capabilities":["data"]}]
parallel | No | Dispatch tasks in parallel (default true)
max_members | No | Max number of swarm members (default 10)
orchestrator_id | Yes | Orchestrator agent ID
task_timeout_secs | No | Task timeout in seconds (default 300)
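The 'members' value mirrors the schema's own example: a JSON array of {name, capabilities} specs, one child agent each. The orchestrator ID and second member below are hypothetical, and serializing 'members' with json.dumps is an assumption based on the schema describing it as a JSON array.

```python
import json

# Hypothetical create_swarm arguments; one spawned child agent per member spec.
members = [
    {"name": "analyst", "capabilities": ["data"]},       # from the schema's example
    {"name": "writer", "capabilities": ["summarize"]},   # additional placeholder member
]
swarm_args = {
    "orchestrator_id": "agent-orch-001",  # placeholder orchestrator agent ID
    "members": json.dumps(members),       # assumed: passed as a JSON string
    "parallel": True,                     # schema default
    "max_members": 10,                    # schema default
    "task_timeout_secs": 300,             # schema default
}
assert len(members) <= swarm_args["max_members"]
```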
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses parallelism/sequentiality and member-spec spawning, but omits behavioral details like permissions, idempotency, side effects on existing agents, or resource implications. It provides some transparency but not enough for a full understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two focused sentences with no redundancy. Directly states purpose, key behavior (spawning, parallelism), and network context. Efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite 5 parameters and no output schema, the description is brief. It omits details on the return value, error handling, auth requirements, lifecycle management, and integration with siblings like 'terminate_swarm' or 'get_swarm_status'. The agent lacks key context for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All parameters have schema descriptions (100% coverage), so the description adds marginal value. The 'members' param is clarified by the phrase 'Each member spec spawns one child agent', and 'parallel' is already described in the schema. No additional semantics beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create a swarm of coordinated agents') and identifies the resource ('under an orchestrator on the Tenzro Network'). It distinguishes from sibling tools like 'spawn_agent' by emphasizing coordination and orchestration, but does not explicitly contrast with alternatives such as 'spawn_agent_from_template'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like 'spawn_agent' or 'terminate_swarm'. The description implies usage for creating a multi-agent swarm but offers no when-not or context cues.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_token (A)

Create a new ERC-20 token via the Tenzro token factory. Returns the deployed token address and token ID. The token is registered in the unified token registry and discoverable across all VMs.

Parameters (JSON Schema)
Name | Required | Description
name | Yes | Token name (e.g. 'My Token')
symbol | Yes | Token symbol (e.g. 'MTK')
creator | Yes | Creator address (hex, 20 or 32 bytes)
vm_type | No | Target VM type: 'evm', 'svm', or 'daml' (default: 'evm')
decimals | No | Token decimals (default: 18)
signature | No | Optional hex-encoded signature over 'tenzro:create_token:{name}:{symbol}:{supply}' proving creator ownership (min 64 bytes)
description | No | Optional token description
permissions | No | Token permissions: 'mintable', 'burnable', 'pausable', 'freezable'
initial_supply | Yes | Initial token supply as a string (e.g. '1000000000000000000000')
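The string-typed initial_supply is denominated in base units, so the whole-token amount has to be scaled by 10**decimals client-side. The sketch below illustrates that arithmetic; the token name, symbol, and creator address are hypothetical.

```python
# Hypothetical create_token arguments. initial_supply is a base-unit string:
# with 18 decimals, 1000 whole tokens = 1000 * 10**18 base units.
decimals = 18
whole_tokens = 1000
token_args = {
    "name": "My Token",
    "symbol": "MTK",
    "creator": "56" * 20,        # placeholder creator address (hex, 20 bytes)
    "vm_type": "evm",            # default; also 'svm' or 'daml'
    "decimals": decimals,
    "initial_supply": str(whole_tokens * 10**decimals),
    "permissions": "mintable",   # optional capability flag
}
assert int(token_args["initial_supply"]) == 10**21
```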
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description partially discloses behavior: it returns deployed address and ID, and the token is registered. However, it does not mention mutability, permission requirements, or side effects beyond registration. The destructive nature is implied but not explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no wasted words. It is front-loaded with the primary action and efficiently covers the outcome.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the core action and returns, but lacks detail on return format (no output schema) and behavioral constraints like signatures or VM-specific behavior. Given the tool has 9 parameters and no annotations, the description is adequate but not fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 9 parameters have schema descriptions (100% coverage). The tool description does not add extra parameter-level context beyond what the schema provides. Therefore, it meets the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'create' and the resource 'ERC-20 token' via the Tenzro token factory. It also specifies key outcomes (returns address and ID, registration). This differentiates it from sibling tools like 'create_nft_collection' and 'mint_nft'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no explicit guidance on when to use this tool vs alternatives. No comparisons to other token creation methods or prerequisites are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_user_wallet (B)

Create a new user wallet under an application. Optionally fund it with an initial TNZO amount from the app's master wallet.

Parameters (JSON Schema)
Name | Required | Description
label | Yes | Human-readable label for the user wallet
app_id | Yes | Application ID (UUID) the wallet belongs to
initial_funding | No | Initial TNZO funding amount (e.g. 10.0)
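A sketch of the arguments, with a freshly generated placeholder UUID standing in for a real application ID and an assumed wallet label.

```python
import uuid

# Hypothetical create_user_wallet arguments; the app_id is a placeholder UUID.
wallet_args = {
    "app_id": str(uuid.uuid4()),   # application the wallet belongs to (UUID format)
    "label": "alice-primary",      # human-readable label (placeholder)
    "initial_funding": 10.0,       # optional TNZO top-up from the app's master wallet
}
# UUIDs serialize to the canonical 36-character form.
assert len(wallet_args["app_id"]) == 36
```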
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions optional funding from the app's master wallet, which is a useful behavioral detail. However, it omits side effects (e.g., on-chain impact, permissions required, or whether the wallet is immediately active).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no redundancy. It efficiently conveys the core purpose and option. However, it lacks structured sections (e.g., parameter details, examples) that could aid comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and three parameters, the description is too minimal. It does not explain the return value, error conditions, or prerequisites (e.g., app existence). The optional funding behavior is mentioned but not elaborated.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers all three parameters with descriptions (100% coverage). The tool description adds no additional meaning beyond the schema. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a user wallet under an application, with optional initial funding. It distinguishes itself from siblings like 'create_wallet' and 'fund_user_wallet' by specifying the app context and funding source.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives (e.g., 'create_wallet' or 'fund_user_wallet'). It implies usage for new user wallets under an app but lacks exclusion criteria or alternative suggestions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_wallet (C)

Create a new cryptographic keypair for use as a Tenzro wallet. Supports Ed25519 (Tenzro native) and Secp256k1 (EVM-compatible).

Parameters (JSON Schema)
Name | Required | Description
key_type | No | Key type: 'ed25519' (default, Tenzro native) or 'secp256k1' (EVM-compatible)
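Since the only parameter is optional, the two call shapes can be sketched as below; the default-resolution logic is an assumption based on the schema's "(default)" note.

```python
# Hypothetical create_wallet call arguments. Omitting key_type should yield
# the documented default, 'ed25519'.
wallet_args = {}                               # default: ed25519 (Tenzro native)
evm_wallet_args = {"key_type": "secp256k1"}    # EVM-compatible keypair

# Assumed client-side view of how the default resolves.
key_type = wallet_args.get("key_type", "ed25519")
assert key_type == "ed25519"
```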
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must bear the full burden. It states that a keypair is created but does not disclose whether the keypair is returned or stored, or whether additional steps are needed. Side-effect information is missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence with no fluff. It could be improved by front-loading the key action, but is acceptable for brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description should clarify the return value (e.g., keypair data). It mentions neither the result nor prerequisites like authentication or storage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description repeats the same information about the key_type options. No new semantic value is added beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a cryptographic keypair for a Tenzro wallet and specifies supported algorithms. However, it does not differentiate from sibling tools like generate_keypair or import_keystore, though the wallet context provides some distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as import_keystore or create_mpc_wallet. The description lacks explicit when/when-not context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_zk_proof (A)

Create a Plonky3 STARK proof for one of the three Tenzro AIRs (inference, settlement, identity) over the KoalaBear field. Returns the hex-encoded bincode-serialized p3_uni_stark::Proof and public inputs (4-byte LE KoalaBear chunks).

Parameters (JSON Schema)
Name | Required | Description
witness | Yes | Witness fields as a JSON object of u64 field-element values. Per circuit: inference={model_checksum,input_checksum,computed_output}; settlement={service_proof,amount}; identity={private_key,capabilities,capability_blinding}
circuit_id | Yes | Circuit identifier: one of 'inference', 'settlement', 'identity'
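The per-circuit witness fields listed in the schema can be sketched as plain dicts; the numeric witness values below are arbitrary placeholders, not meaningful field elements.

```python
# Hypothetical create_zk_proof arguments, one witness shape per circuit,
# following the per-circuit field lists in the schema. Values are arbitrary u64s.
witnesses = {
    "inference": {"model_checksum": 1, "input_checksum": 2, "computed_output": 3},
    "settlement": {"service_proof": 4, "amount": 5},
    "identity": {"private_key": 6, "capabilities": 7, "capability_blinding": 8},
}

circuit_id = "inference"
proof_args = {"circuit_id": circuit_id, "witness": witnesses[circuit_id]}

# The witness keys must match the chosen circuit exactly.
assert set(proof_args["witness"]) == {"model_checksum", "input_checksum", "computed_output"}
```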
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full responsibility. It discloses the output format (hex-encoded bincode-serialized Proof and public inputs with specific encoding) and the field type (KoalaBear), providing useful behavioral context beyond the input schema. However, it does not mention potential computational cost, latency, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences: the first states the core action and scope, the second specifies the output format. It is front-loaded, concise, and contains no redundant information. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter tool with no output schema, the description covers the input details and output format well. However, it could be more complete by mentioning whether the proof generation is synchronous or asynchronous, any performance expectations, or prerequisites like circuit existence. Overall, it is fairly complete but leaves some gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by enumerating the exact witness fields per circuit (e.g., inference requires model_checksum, input_checksum, computed_output), which is more detailed than the schema's generic description. This helps distinguish parameter usage across circuits.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's action ('Create a Plonky3 STARK proof') and specifies the resource ('one of three Tenzro AIRs'). It clearly distinguishes itself from sibling tools like 'verify_zk_proof' by focusing on creation, and provides distinct allowed circuit values ('inference', 'settlement', 'identity').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for proving but does not explicitly state when to use this tool versus alternatives (e.g., verify_zk_proof for verification). It also does not mention prerequisites, such as circuit registration or field configuration. While the circuit options are listed, no guidance on selecting between them is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

crosschain_burn (A)

Burn tokens for a crosschain transfer (ERC-7802 crosschainBurn). Only pre-authorized bridges can call this. Tokens are burned on the local chain and will be minted on the destination chain. Returns the burn event and nonce.

Parameters (JSON Schema)
- from (required): Hex-encoded address to burn from
- amount (required): Amount to burn in base units
- bridge (required): Hex-encoded authorized bridge address
- destination (required): Destination chain identifier (e.g. 'ethereum', 'solana')
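The 'amount' parameter is denominated in base units. A sketch of the conversion from a human-readable amount, assuming an 18-decimal token (check the actual token's decimals before use); the helper is illustrative, not part of the server:

```python
from decimal import Decimal

def to_base_units(amount: str, decimals: int = 18) -> int:
    """Convert a human-readable token amount into integer base units,
    as crosschain_burn's 'amount' parameter expects."""
    scaled = Decimal(amount) * (Decimal(10) ** decimals)
    if scaled != scaled.to_integral_value():
        raise ValueError("amount has more precision than the token supports")
    return int(scaled)

print(to_base_units("1.5"))  # -> 1500000000000000000
```

Using Decimal instead of float avoids silent precision loss on fractional amounts.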
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses authorization requirement and return (burn event and nonce). No annotations provided, so description carries full burden; could mention idempotency, failure modes, or gas costs.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with action. Each sentence adds critical info: purpose, restriction, process, and return. No unnecessary words.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Describes core behavior and return, but lacks error conditions, prerequisites beyond authorization, or post-conditions. Adequate for a simple mutation, but could be more thorough.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and description adds no new details beyond what the schema already provides for each parameter. Baseline 3 applies.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it burns tokens for crosschain transfer (ERC-7802 crosschainBurn). References the return value, distinguishing it from siblings like crosschain_mint and bridge_tokens.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Specifies that only pre-authorized bridges can call, and describes the burn-mint process. Does not explicitly list when to use vs alternatives, but context is sufficient.

crosschain_mint (A)

Mint tokens via an authorized crosschain bridge (ERC-7802 crosschainMint). Only pre-authorized bridges can call this. Tokens are minted on the local chain after being burned on the source chain. Returns the mint event and nonce.

Parameters (JSON Schema)
- to (required): Hex-encoded recipient address
- amount (required): Amount to mint in base units
- bridge (required): Hex-encoded authorized bridge address
- sender (required): Hex-encoded sender address on the source chain (for event attribution)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the authorization requirement, the mint-after-burn process, and that it returns a mint event and nonce. It does not detail error cases or gas costs, but the core behavioral traits are clearly stated.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, front-loaded with the main action. Every sentence adds essential information (what, restrictions, flow, output). No wasted words.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description mentions the return type (mint event and nonce). It does not explain event structure or error handling (e.g., what happens if bridge is unauthorized). Minor gaps, but overall sufficient for a mint function with clear parameters.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds no new parameter details beyond what the schema already provides (e.g., 'Hex-encoded', 'base units'). The additional context about authorization is general, not parameter-specific.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'mint', the resource 'tokens via an authorized crosschain bridge', and references the ERC-7802 standard. It distinguishes from sibling tools like crosschain_burn and bridge_tokens by specifying the crosschain mint flow.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly notes that only pre-authorized bridges can call this, and describes the flow (burn on source, mint on local). However, it could be improved by mentioning prerequisites like authorization via authorize_crosschain_bridge or when not to use this tool versus alternatives.

cross_vm_transfer (A)

Transfer tokens between VMs (e.g., EVM to SVM). Uses the cross-VM bridge precompile for atomic transfers. Only TNZO is currently supported for cross-VM transfers.

Parameters (JSON Schema)
- to_vm (required): Destination VM: 'evm', 'svm', 'daml', 'native'
- token (required): Token symbol or 'TNZO'
- amount (required): Amount as string (in native decimals)
- from_vm (required): Source VM: 'evm', 'svm', 'daml', 'native'
- to_address (required): Destination address (hex)
- from_address (required): Source address (hex)
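The parameter table can be made concrete with a hypothetical argument set; the addresses are placeholders, and the VM identifiers and TNZO-only restriction come from the schema and description above:

```python
# Hypothetical arguments for cross_vm_transfer; addresses are placeholders.
transfer_args = {
    "from_vm": "evm",
    "to_vm": "svm",
    "token": "TNZO",               # only TNZO is currently supported
    "amount": "2500000000",        # passed as a string, in native decimals
    "from_address": "0x" + "aa" * 20,
    "to_address": "0x" + "bb" * 20,
}

VALID_VMS = {"evm", "svm", "daml", "native"}
# Basic client-side checks before calling the tool.
assert {transfer_args["from_vm"], transfer_args["to_vm"]} <= VALID_VMS
assert transfer_args["token"] == "TNZO"
assert isinstance(transfer_args["amount"], str)
```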
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description alone must disclose behavior. It mentions atomic transfers via a precompile, indicating reliability. However, it omits details like gas costs, error handling, address format requirements (hex implied by schema), and whether the transfer is synchronous. Provides minimal but essential behavioral insight.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two succinct sentences: purpose first, then technical detail and constraint. No redundant words. Front-loaded with the core action, making it easy for the agent to parse quickly.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 params, no output schema, many siblings), the description covers purpose, mechanism, and token limitation. However, it lacks details on return value, error cases, and address format, which are important for execution. Adequate but not thorough.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so baseline is 3. The description adds that only TNZO is supported, which clarifies the 'token' parameter beyond the schema's 'Token symbol or 'TNZO''. This adds marginal value, but overall the schema already sufficiently describes parameters.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it transfers tokens between VMs (specific verb and resource), with an example (EVM to SVM). It distinguishes from sibling bridge tools by specifying cross-VM and using the cross-VM bridge precompile. The token limitation (only TNZO) adds precision, making the purpose unambiguous and differentiated.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs. alternatives like bridge_tokens or wormhole_bridge. It does not state prerequisites, when-not-to-use, or success/failure scenarios. The constraint 'Only TNZO is currently supported' hints at a limitation but does not guide the agent in tool selection.

debridge_create_tx (B)

Create a cross-chain transaction via deBridge DLN. Returns transaction data ready for signing and submission.

Parameters (JSON Schema)
- amount (required): Amount in smallest unit (wei/lamports)
- sender (optional): Sender address on source chain
- dst_token (required): Destination token address
- recipient (required): Recipient address on destination chain
- src_token (required): Source token address
- dst_chain_id (required): Destination chain ID
- src_chain_id (required): Source chain ID (e.g. 1 for Ethereum)
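To illustrate the seven parameters, a hypothetical argument set for an Ethereum (chain ID 1) to BSC (chain ID 56) transfer; the token and recipient addresses are placeholders, not real deployments:

```python
# Hypothetical debridge_create_tx arguments; addresses are placeholders.
tx_args = {
    "src_chain_id": 1,                 # Ethereum mainnet
    "dst_chain_id": 56,                # BSC
    "src_token": "0x" + "11" * 20,
    "dst_token": "0x" + "22" * 20,
    "amount": "1000000",               # smallest unit (wei/lamports)
    "recipient": "0x" + "33" * 20,
    # "sender" is optional and omitted here
}

REQUIRED = {"amount", "dst_token", "recipient",
            "src_token", "dst_chain_id", "src_chain_id"}
assert REQUIRED <= tx_args.keys()
```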
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It states 'Returns transaction data ready for signing and submission,' but does not explain auth needs, rate limits, side effects, or whether the tool is idempotent.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core purpose. No wasted words, though an additional sentence on usage would improve it.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, 6 required) and lack of output schema or annotations, the description is minimal. It does not explain return format, error handling, or prerequisites like token approval.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with descriptions for all 7 parameters. The description does not add additional meaning beyond the schema, so a baseline of 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Create', resource 'cross-chain transaction', and context 'via deBridge DLN', distinguishing it from sibling tools like debridge_get_chains or debridge_same_chain_swap.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives, nor does it mention prerequisites or exclusions. The context is implied but not stated.

debridge_get_chains (A)

Get all blockchain networks supported by deBridge DLN for cross-chain transfers.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description carries full burden. It implicitly indicates a read-only operation ('Get') but does not explicitly declare idempotency or safety. For a simple data retrieval tool, this is acceptable.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that front-loads the verb and resource. No wasted words; efficient and clear.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool has no parameters and no output schema; the description sufficiently explains what it does (list supported chains). No missing information for this simple tool.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has zero parameters with 100% coverage; description adds no further parameter info. Baseline of 3 is appropriate as the schema already conveys all relevant details.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' and identifies resource 'blockchain networks supported by deBridge DLN'. Clearly distinguishes from sibling tools like `debridge_get_instructions` or `debridge_create_tx`.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. While the purpose is clear, it does not mention exclusions or compare to other debridge tools, leaving the agent to infer usage context.

debridge_get_instructions (B)

Get deBridge operational instructions and guidance for cross-chain transfers.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description only indicates a read-like operation. It does not disclose side effects, permissions, rate limits, or what constitutes 'instructions'.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, concise and front-loaded. Could benefit from slightly more structure (e.g., bullet points for guidance types), but remains efficient.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple, parameterless tool. However, it lacks detail on what the instructions contain, limiting comprehensiveness for an agent.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, so schema coverage is 100%. Baseline of 3 is appropriate as the description adds no parameter context, but none is needed.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it retrieves deBridge operational instructions and guidance for cross-chain transfers, distinguishing it from sibling tools like debridge_create_tx or debridge_get_chains.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description implies its utility for getting instructions but does not specify prerequisites or exclusions.

debridge_same_chain_swap (B)

Execute a same-chain token swap via deBridge. Swaps tokens on the same blockchain without cross-chain bridging.

Parameters (JSON Schema)
- amount (required): Amount of input token in smallest unit
- sender (optional): Sender address
- chain_id (required): Chain ID for the swap
- token_in (required): Input token address
- token_out (required): Output token address
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It lacks details on swap side effects, slippage, fees, or prerequisites like approvals. Only states basic action.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste, front-loaded. Every word serves a purpose.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool is a financial swap with no output schema or annotations. Description fails to mention return values, error conditions, or safety measures like slippage, making it incomplete for safe usage.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% parameters with descriptions, so baseline is 3. Description adds no extra meaning beyond what schema already provides.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Execute a same-chain token swap via deBridge' with specific verb and resource. It distinguishes from cross-chain bridging tools like bridge_tokens by highlighting 'same blockchain without cross-chain bridging'.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage for same-chain swaps but does not explicitly state when to use or avoid this tool compared to siblings. No alternatives or exclusions mentioned.

debridge_search_tokens (A)

Search for tokens available on deBridge DLN. Returns token addresses, symbols, and supported chains.

Parameters (JSON Schema)
- query (required): Token name, symbol, or address to search for
- chain_id (optional): Optional chain ID to filter results (e.g. 1 for Ethereum, 56 for BSC)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so the description must cover behavioral traits. It does not mention idempotency, rate limits, authentication requirements, or whether it's a read-only operation. The only behavior disclosed is 'returns' data, which is already obvious from the purpose.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 12 words, yet conveys the core purpose and return information. No fluff or redundancy. It is front-loaded with the verb and resource.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple search tool with two parameters, the description covers the basics. However, it lacks details about pagination, maximum results, search fuzziness, error scenarios, or full return structure. Given no output schema, more completeness would be beneficial for an agent to know exactly what to expect.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters have descriptions in the schema (100% coverage). The description adds value by stating the tool returns 'token addresses, symbols, and supported chains', which clarifies the output schema and helps the agent understand the return format, even though no output schema is provided.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for tokens on deBridge DLN and what it returns (addresses, symbols, chains). The verb 'search' and specific platform 'deBridge DLN' differentiate it from sibling tools like 'bridge_tokens' or 'debridge_create_tx'.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. With over 100 sibling tools, including other token-related tools like 'list_tokens' or 'get_token_balance', the description does not specify when searching on deBridge is appropriate or when to use other tools.

decode_result (A)

ABI-decode contract call return data. Takes hex-encoded return data and output type signatures, returns decoded values. Useful for interpreting EVM contract responses.

Parameters (JSON Schema)
- data_hex (required): Hex-encoded return data from a contract call
- output_types (required): Output type signatures (e.g. ['uint256', 'bool', 'address'])
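What the decoding involves for static types can be sketched in pure Python; this is an illustration, not the tool's implementation, and it deliberately handles only 32-byte static words. Dynamic types (strings, arrays) need a real ABI library:

```python
def decode_words(data_hex: str, output_types: list[str]) -> list:
    """Minimal sketch of ABI-decoding static return values, mirroring
    decode_result's inputs. Each static value occupies one 32-byte word."""
    data = bytes.fromhex(data_hex.removeprefix("0x"))
    out = []
    for i, t in enumerate(output_types):
        word = data[i * 32:(i + 1) * 32]
        if t == "uint256":
            out.append(int.from_bytes(word, "big"))
        elif t == "bool":
            out.append(word[-1] == 1)
        elif t == "address":
            # Addresses are the low 20 bytes of the word.
            out.append("0x" + word[-20:].hex())
        else:
            raise ValueError(f"unsupported type: {t}")
    return out

# A uint256 of 42 followed by a bool true:
blob = (42).to_bytes(32, "big").hex() + (1).to_bytes(32, "big").hex()
print(decode_words(blob, ["uint256", "bool"]))  # -> [42, True]
```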
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must carry burden. It discloses that it decodes and returns values, but does not mention side effects, error handling, or limitations (e.g., only EVM-based). Adequate but not detailed.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, no redundant words. Efficient and clear.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, so description should explain return format. Only says 'returns decoded values' without structure. For a simple tool, this may suffice but lacks detail on return type, especially given no output schema.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions. Description essentially restates schema: 'hex-encoded return data and output type signatures'. Adds minimal extra value beyond schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb (ABI-decode), resource (contract call return data), and useful for interpreting EVM responses. Distinguishes from siblings like encode_function or decrypt_data.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Says 'useful for interpreting EVM contract responses' but does not explicitly state when to use vs. alternatives or when not to use. No mention of prerequisites or edge cases.

decrypt_data (A)

Decrypt AES-256-GCM encrypted data using the key and nonce. Returns the plaintext in hex.

Parameters (JSON Schema)
- key_hex (required): Hex-encoded 32-byte AES-256-GCM key
- nonce_hex (required): Hex-encoded 12-byte nonce used during encryption
- ciphertext_hex (required): Hex-encoded ciphertext
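Client-side validation of the three arguments can catch malformed input before the call. A sketch assuming the common convention that the 16-byte GCM authentication tag is appended to the ciphertext (the tool's description does not confirm this); the helper name is hypothetical:

```python
def check_decrypt_args(key_hex: str, nonce_hex: str, ciphertext_hex: str) -> None:
    """Validate decrypt_data arguments before calling the tool.
    Sizes follow the parameter table: 32-byte key, 12-byte nonce."""
    if len(bytes.fromhex(key_hex)) != 32:
        raise ValueError("key must be 32 bytes (64 hex chars)")
    if len(bytes.fromhex(nonce_hex)) != 12:
        raise ValueError("nonce must be 12 bytes (24 hex chars)")
    # Assumes the GCM tag (16 bytes) is carried inside the ciphertext.
    if len(bytes.fromhex(ciphertext_hex)) < 16:
        raise ValueError("ciphertext shorter than the 16-byte GCM tag")

check_decrypt_args("00" * 32, "00" * 12, "00" * 16)  # passes silently
```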
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must bear the burden. It discloses the algorithm and output format but omits error conditions, security implications, or whether the operation is idempotent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, concise sentence (16 words) that front-loads the main action with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the return format (hex plaintext) but lacks depth on error handling, authentication tag verification, or relationships to sibling tools like encrypt_data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema parameter descriptions have 100% coverage, specifying hex encoding and byte sizes. The description adds minimal value beyond the schema, only mentioning 'key and nonce' generically.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Decrypt', specifies the algorithm 'AES-256-GCM', lists required inputs (key, nonce), and defines the output format 'plaintext in hex'. This is specific and distinguishes it from sibling tools like 'encrypt_data' or 'seal_data'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. With many sibling tools involving cryptography, explicit usage context is lacking.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delegate_task (Grade: B)

Delegate a task from one agent to another on the Tenzro Network. Optionally set a TNZO budget cap for the delegated task. Returns the delegation record and task ID.

Parameters

- task (required): Task description or task ID to delegate
- delegate_did (required): DID of the agent to delegate to
- delegator_did (required): DID of the delegating agent
- max_budget_tnzo (optional): Optional maximum TNZO budget for the delegated task
Behavior: 2/5

With no annotations, the description carries the full burden of behavioral disclosure. It mentions the return value ('delegation record and task ID') but fails to disclose authentication needs, side effects (e.g., task reassignment), or whether the delegation is reversible. For a mutation tool, this is insufficient.

Conciseness: 5/5

Two concise sentences front-load the core action and options, with no redundant information. Every word earns its place.

Completeness: 2/5

Despite high schema coverage, the description omits behavioral context (e.g., required permissions, effect on original task) and does not address the complexity of delegation. The return format is vague ('delegation record'). With no output schema, more detail is needed.

Parameters: 3/5

The input schema already describes all 4 parameters (100% coverage), so the description adds limited value. It notes the budget parameter is optional and that the return includes a record and task ID, but the schema already covers types and descriptions.

Purpose: 5/5

The description clearly states the action ('Delegate a task'), the resource ('from one agent to another'), and the context ('on the Tenzro Network'). It also mentions the optional budget cap and return value, making the purpose specific and distinguishable from siblings like assign_task.

Usage Guidelines: 2/5

The description provides no explicit guidance on when to use this tool versus alternatives like assign_task or post_task. It does not specify prerequisites, scenarios, or exclusions, leaving the agent to infer usage from the name alone.

delegate_voting_power (Grade: A)

Delegate governance voting power from one address to another. Delegated TNZO stake will count toward the delegate's votes. Returns the delegation record.

Parameters

- to_address (required): Hex-encoded delegate address to receive voting power
- amount_tnzo (optional): Amount of TNZO to delegate (e.g. 100.0). If omitted, delegates all staked balance.
- from_address (required): Hex-encoded delegator address
Behavior: 3/5

No annotations are provided, so the description must carry the full burden. It mentions the effect on TNZO stake and return of delegation record, but does not disclose whether delegation replaces existing delegations or what permissions are required. Gaps remain.

Conciseness: 5/5

The description is only two sentences, front-loaded with the primary action, and contains no unnecessary words. Every sentence adds value.

Completeness: 4/5

Given no output schema, the description states the return value. It explains the basic operation but lacks edge cases (e.g., whether delegation is additive or replaces an existing delegation) and prerequisites. Still, it is largely complete for a straightforward delegation tool.

Parameters: 3/5

Schema coverage is 100% and parameter descriptions in the schema are already clear. The tool description adds no additional meaning beyond the schema, so the baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states the verb 'delegate' and the resource 'governance voting power', and distinguishes from siblings like 'set_delegation_scope' and 'get_voting_power' by specifying the action of transferring power between addresses.

Usage Guidelines: 4/5

The description indicates when to use (to delegate voting power) but does not explicitly mention when not to use or provide alternatives. It provides clear context for the tool's purpose.

delete_model_mcp (Grade: A)

Delete a model from this node's local storage. The model must not be currently serving. Frees disk space. Returns deletion confirmation.

Parameters

- model_id (required): Model ID to delete from local storage
Behavior: 3/5

No annotations are provided, so the description carries the burden. It discloses that the operation frees disk space and returns confirmation, but does not detail error conditions (e.g., model not found) or reversibility.

Conciseness: 5/5

Three concise sentences front-loaded with the core action. No wasted words; every sentence adds value.

Completeness: 4/5

For a simple 1-parameter tool with no output schema, the description covers the action, precondition, side effect, and return value. Missing only potential error handling or confirmation details.

Parameters: 3/5

Schema coverage is 100% and the parameter description is self-explanatory. The tool description reinforces the meaning of model_id but adds no additional nuance beyond the schema.

Purpose: 5/5

The description clearly identifies the action (delete), resource (model from local storage), and constraints (must not be serving). It distinguishes the tool from siblings like list_models, download_model, serve_model_mcp, and stop_model.

Usage Guidelines: 4/5

The description explicitly states the prerequisite that the model must not be currently serving, guiding the agent on when it is appropriate to call. It does not mention alternatives, but the context is clear.

deploy_contract (Grade: A)

Deploy a smart contract to the Tenzro ledger. Supports EVM (Solidity bytecode), SVM (BPF programs), and DAML (DAR packages). Returns the deployed contract address.

Parameters

- vm_type (required): Target VM: 'evm', 'svm', or 'daml'
- bytecode (required): Contract bytecode (hex-encoded, with optional 0x prefix)
- deployer (required): Deployer address (hex)
- gas_limit (optional): Gas limit for deployment (default: 3000000)
- signature (optional): Optional hex-encoded signature over 'tenzro:deploy_contract:{vm_type}:{deployer}' proving deployer ownership (min 64 bytes)
- constructor_args (optional): ABI-encoded constructor arguments (hex, optional)
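The signature parameter's schema quotes an exact message format, which a client must reproduce byte-for-byte before signing. A minimal sketch of that string construction, with the vm_type whitelist taken from the schema:

```python
def deploy_signing_message(vm_type: str, deployer: str) -> str:
    """Build the message the deploy_contract signature is computed over,
    per the format quoted in the schema description."""
    if vm_type not in ("evm", "svm", "daml"):
        raise ValueError("vm_type must be 'evm', 'svm', or 'daml'")
    return f"tenzro:deploy_contract:{vm_type}:{deployer}"
```

How this string is hashed and which signature scheme the server verifies against are not stated in the description, so those steps are left out here.
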
Behavior: 2/5

No annotations are provided, so the description must fully disclose behavioral traits. It only mentions that it returns the deployed address, but omits critical details like gas costs, state mutability, authorization requirements, or potential failure modes for a write operation.

Conciseness: 5/5

Two sentences, 20 words, front-loaded with the verb 'Deploy'. Every word adds value with no filler. Excellent conciseness.

Completeness: 3/5

The description minimally covers purpose, supported VMs, and return value. Given the complexity of multi-VM contract deployment and no output schema, it lacks details on prerequisites, bytecode format expectations, or deployment results beyond the address. Adequate but not thorough.

Parameters: 3/5

Schema description coverage is 100% with each parameter well-documented. The tool description does not add additional meaning beyond the schema, merely restating VM support. The baseline of 3 is appropriate.

Purpose: 5/5

The description uses the specific verb 'Deploy' and names the resource 'smart contract to the Tenzro ledger'. It explicitly lists supported VMs (EVM, SVM, DAML), clearly distinguishing it from sibling tools like send_transaction or submit_daml_command.

Usage Guidelines: 3/5

The description states what the tool does but provides no guidance on when to use it versus alternatives or when not to use it. There are no exclusions or comparison to similar tools, leaving the agent to infer usage context.

derive_key (Grade: A)

Derive a 256-bit encryption key from a password using Argon2id KDF. Returns the derived key in hex.

Parameters

- password (required): Password to derive a 256-bit key from using Argon2id
Behavior: 3/5

With no annotations, the description carries the burden. It states the algorithm and output format but omits details like salt usage, memory/time parameters, or deterministic behavior. It does, however, accurately reflect the core operation.

Conciseness: 5/5

The description is two concise sentences (total 18 words) that instantly communicate the tool's action and result. No extraneous details.

Completeness: 3/5

For a cryptographic tool, the description lacks important context about salt handling, parameter defaults, and return format specifics (e.g., length of hex string). However, with a single parameter and clear output statement, it is minimally adequate.

Parameters: 3/5

Schema coverage is 100% and the schema description already explains the password parameter. The description adds no further parameter insights, so the baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states the tool derives a 256-bit encryption key from a password using Argon2id KDF and returns it in hex. It distinguishes itself from sibling tools like hash_sha256 by specifying the algorithm and output purpose.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives (e.g., hash functions). It does not mention security considerations such as salt handling or Argon2id cost parameters, nor when key derivation is preferable to plain hashing.

detect (Grade: A)

Run object detection on an image. Returns an array of detections with bounding box (x0,y0,x1,y1), label_id, and confidence score. NMS-free for DETR-family models — just sigmoid + score threshold.

Parameters

- model_id (required): Registered detector id (e.g. 'rf-detr-base', 'rf-detr-nano', 'd-fine-l')
- image_base64 (required): Base64-encoded image bytes
- score_threshold (optional): Score threshold (0.0–1.0). Detections below this are dropped. Default 0.25.
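The description's "sigmoid + score threshold" post-processing can be sketched in a few lines. This is a client-side model of the documented behavior, not the server's code; the keep-on-equal boundary at the threshold and the output field order are my assumptions.

```python
import math

def postprocess(logits, labels, boxes, score_threshold=0.25):
    """NMS-free post-processing as described: sigmoid each logit, drop low scores."""
    detections = []
    for logit, label_id, box in zip(logits, labels, boxes):
        confidence = 1.0 / (1.0 + math.exp(-logit))  # sigmoid
        if confidence >= score_threshold:
            detections.append({"box": box, "label_id": label_id,
                               "confidence": confidence})
    return detections
```

Skipping non-maximum suppression is what makes this valid only for DETR-family models, which predict one box per query rather than overlapping anchor boxes.
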
Behavior: 4/5

With no annotations, the description adds key behavioral info: it returns detections with specific fields, and specifies that it is NMS-free for DETR-family models (sigmoid + score threshold). This goes beyond the input schema and helps the agent understand behavior, though it omits potential side effects or error conditions.

Conciseness: 5/5

The description is two sentences with the main action front-loaded. Every sentence adds value: purpose, output format, and a key algorithmic detail. No redundant or extraneous words.

Completeness: 3/5

The tool is moderately complex (3 params, no output schema). The description explains the output structure but lacks context on possible errors, image format requirements, or performance implications. Without an output schema, the return format description is helpful, but completeness is only adequate.

Parameters: 3/5

Schema coverage is 100%, so the baseline is 3. The description does not add detail beyond the schema for model_id, image_base64, or score_threshold. It mentions score_threshold indirectly in the NMS context but does not elaborate on parameter usage.

Purpose: 5/5

The description clearly states it runs object detection on an image and describes the output format (array of detections with bounding box, label_id, confidence). This distinguishes it from sibling tools like segment or list_detection_catalog, which have different purposes.

Usage Guidelines: 2/5

The description does not provide guidance on when to use this tool versus alternatives (e.g., segment, transcribe). It lacks context on prerequisites (e.g., the model must be available) and does not mention when not to use it. An agent receives no explicit decision support.

detect_tee (Grade: A)

Detect available Trusted Execution Environment (TEE) hardware on this node. Returns detected TEE type (Intel TDX, AMD SEV-SNP, AWS Nitro, NVIDIA GPU CC) or simulation mode.

Parameters

No parameters

Behavior: 3/5

With no annotations, the description reveals that the tool returns a TEE type or simulation mode, but does not disclose side effects, permissions, or that it is read-only.

Conciseness: 5/5

Two sentences, front-loaded with the key action, no fluff. Efficiently conveys purpose and return value.

Completeness: 4/5

The description adequately covers what the tool does and returns for a simple detection tool, though it lacks error and edge-case info; no output schema is needed.

Parameters: 4/5

The schema has zero parameters (100% coverage), so the baseline is 4; the description adds no parameter info, which is acceptable.

Purpose: 5/5

The description clearly states the verb 'detect' and the resource 'TEE hardware on this node', specifies the return values, and distinguishes the tool from siblings like 'get_tee_attestation' by focusing on detection.

Usage Guidelines: 3/5

No explicit when-to-use or alternatives guidance is given; usage is implied by the description but not compared to siblings like 'detect' or 'get_tee_attestation'.

discover_agents (Grade: A)

Discover registered AI agents on the Tenzro Network. Filter by capability or agent type. Returns agent DIDs, capabilities, endpoints, and reputation scores.

Parameters

- limit (optional): Maximum number of agents to return (default 20)
- agent_type (optional): Filter by agent type: 'autonomous', 'assistant', 'validator', 'oracle'
- capability (optional): Filter by capability (e.g. 'inference', 'settlement', 'bridge')
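Nothing in the description says how the two filters combine. The most likely reading is that they AND together, which a client-side model would look like the sketch below; the AND semantics and the registry field names are assumptions, not documented behavior.

```python
def filter_agents(registry, agent_type=None, capability=None, limit=20):
    """Model of the assumed discover_agents filter semantics: filters AND together."""
    matches = []
    for agent in registry:
        if agent_type is not None and agent["agent_type"] != agent_type:
            continue
        if capability is not None and capability not in agent["capabilities"]:
            continue
        matches.append(agent)
        if len(matches) >= limit:  # assumed: limit truncates, no pagination cursor
            break
    return matches
```
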
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It states the return fields (DIDs, capabilities, endpoints, reputation scores) but does not explicitly disclose behavioral traits like idempotency, safety, or side effects (though it is likely read-only).

Conciseness: 5/5

The description is two sentences: the first clearly states the core function, the second adds filtering and output details. It is concise, front-loaded, and every sentence contributes.

Completeness: 4/5

Though no output schema exists, the description lists what the tool returns (DIDs, capabilities, endpoints, reputation scores). It mentions filtering but omits details on pagination or limit defaults. For a simple discovery tool, it is largely complete.

Parameters: 3/5

Schema coverage is 100% with descriptions for all parameters. The description reinforces the filtering purpose but does not add new meaning beyond the schema. The baseline of 3 applies because the description does not compensate for any gaps.

Purpose: 5/5

The description uses a specific verb 'Discover' and resource 'registered AI agents', clearly stating the tool's function. It also mentions filtering and return fields, distinguishing it from sibling tools like 'discover_models' and 'list_agent_templates'.

Usage Guidelines: 3/5

The description implies usage for discovering agents with optional filters, but does not explicitly state when to use or avoid this tool, nor mention alternatives. Given the large sibling set, some guidance would help.

discover_models (Grade: A)

Discover available AI models on the Tenzro Network. Filter by category, serving status, or max price. Returns model IDs, providers, pricing, and endpoints.

Parameters

- category (optional): Optional model category filter: 'text', 'image', 'audio', 'multimodal'
- serving_only (optional): Only return models currently being served
- max_price_tnzo (optional): Maximum price per 1k tokens in TNZO
Behavior: 2/5

No annotations are provided; the description lacks detail on the tool's read-only nature, side effects, auth needs, or rate limits.

Conciseness: 5/5

Two concise sentences, front-loaded with the action and resource, no wasted words.

Completeness: 4/5

The description mentions return fields but omits pagination, sorting, and result limits; adequate for a simple discovery tool.

Parameters: 4/5

The description adds semantic grouping ('serving status', 'max price') beyond the schema descriptions, with 100% schema coverage.

Purpose: 4/5

Clearly states that it discovers AI models with filtering. However, it does not explicitly differentiate itself from the sibling list_models tools.

Usage Guidelines: 3/5

Implies use when filtering is needed, but gives no when-not-to-use guidance or alternative comparisons.

download_agent_template (Grade: A)

Download and instantiate an agent template. Creates a new agent identity from the template with optional configuration overrides. Returns the new agent DID and wallet.

Parameters

- template_id (required): Agent template ID to download and instantiate
- controller_did (required): DID of the controller that will own the instantiated agent
- config_overrides (optional): Optional JSON configuration overrides for the template
Behavior: 2/5

No annotations are provided, so the description carries the full burden. It discloses that it creates a new agent identity and returns DID/wallet, but lacks behavioral details such as idempotency, side effects on the template, permissions required, or error scenarios.

Conciseness: 5/5

Two sentences convey the core action, outcome, and parameter hint. No unnecessary words. Front-loaded with the primary action.

Completeness: 3/5

The description covers the basic functionality and mentions return values (DID and wallet), which is helpful given no output schema. However, it omits details like error conditions, whether the template must exist locally, or relationship to other agent lifecycle tools. For a tool with no annotations and no output schema, some gaps remain.

Parameters: 3/5

Schema coverage is 100%, so the baseline is 3. The description adds little beyond the schema: it confirms config_overrides are optional and used to override template config, but the schema already specifies this. No additional meaning is provided.

Purpose: 5/5

The description clearly states it downloads and instantiates an agent template, creating a new agent identity. It specifies the action, resource, and outcome (returns DID and wallet), distinguishing it from siblings like get_agent_template or run_agent_template.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for creating agents from templates but does not provide explicit guidance on when to use this tool versus alternatives like spawn_agent_from_template or run_agent_template. No prerequisites or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

download_model (A)

Download a model from the Tenzro model registry or HuggingFace Hub to this node's local storage. Performs SHA-256 integrity verification. Returns download status and progress.
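The SHA-256 integrity verification the description mentions can be sketched with the standard library; the helper name and the sample data here are hypothetical, not the server's actual implementation.

```python
import hashlib

def verify_sha256(data: bytes, expected_hex: str) -> bool:
    """Return True if the SHA-256 digest of `data` matches `expected_hex`."""
    return hashlib.sha256(data).hexdigest() == expected_hex.lower()

# Check sample bytes against the well-known SHA-256 of b"hello".
ok = verify_sha256(
    b"hello",
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
)
```

A real downloader would hash the file in chunks rather than loading it whole, but the comparison step is the same.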

Parameters (JSON Schema):
- model_id (required): Model ID to download (e.g. 'gemma3-270m', 'qwen3.5-0.8b')

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses SHA-256 verification and return of status and progress, but lacks details on side effects (e.g., overwriting existing models), authentication requirements, or whether the operation is asynchronous. No annotations provided to compensate.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short sentences, front-loaded with the action, no wasted words. Efficiently conveys core functionality.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, so description should clarify return format. 'Returns download status and progress' is vague. Misses details on async behavior, error handling, and file locations, which are important for a potentially long-running download operation.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with clear parameter description. The description adds minimal value beyond the schema, only providing an example. Baseline score of 3 applies.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool downloads a model from specific registries to local storage with integrity verification. It distinguishes itself from sibling tools like 'list_models' and 'get_download_progress' by its core action.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies when to use (when a model needs to be local), but lacks explicit guidance on prerequisites or when not to use. No mention of alternatives like 'serve_model_mcp' or 'download_agent_template'.

encode_function (A)

ABI-encode a function call. Takes a Solidity-style function signature and arguments, returns hex-encoded calldata. Useful for preparing EVM contract interactions.
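The encoding this tool performs can be approximated by hand for a static-argument signature. A stdlib-only sketch, using the widely known selector 0xa9059cbb (the first four bytes of keccak256 of "transfer(address,uint256)", hardcoded because the stdlib has no keccak256); the address and amount are made up:

```python
def encode_transfer(to_addr: str, amount: int) -> str:
    """Hand-roll ABI calldata for transfer(address,uint256).

    Selector 0xa9059cbb is hardcoded: it is the first 4 bytes of
    keccak256("transfer(address,uint256)"), which the stdlib cannot compute.
    """
    selector = bytes.fromhex("a9059cbb")
    # address: 20 bytes, left-padded to a 32-byte word
    addr_word = bytes.fromhex(to_addr.removeprefix("0x")).rjust(32, b"\x00")
    # uint256: big-endian, 32 bytes
    amount_word = amount.to_bytes(32, "big")
    return "0x" + (selector + addr_word + amount_word).hex()

calldata = encode_transfer("0x1111111111111111111111111111111111111111", 1_000_000)
```

For anything beyond simple static types, a proper ABI library is the right tool; this only illustrates the 32-byte word layout the description's hex output follows.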

Parameters (JSON Schema):
- args (required): Function arguments as JSON array (e.g. ['0xabc...', '1000000'])
- function_sig (required): Function signature (e.g. 'transfer(address,uint256)')

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It only states the input/output behavior but does not disclose whether the operation is read-only, has side effects, or requires permissions. Minimal behavioral details.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences with no redundancy. Each sentence adds value: purpose, input/output, usage context. Efficiently front-loaded.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description explains that output is hex-encoded calldata, which is helpful. Missing error cases or formatting details, but for a low-complexity tool it is fairly complete.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds 'Solidity-style' context and mentions JSON array format, but largely restates schema info. Adequate but not exceptional.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it ABI-encodes a function call for EVM, with specific verb 'ABI-encode' and description of inputs and outputs. Distinguishes from sibling encode/decode tools by focusing on function call encoding.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage for preparing EVM contract interactions but does not explicitly mention when not to use or provide alternatives. Could be improved by contrasting with other encoding tools.

encrypt_data (B)

Encrypt data using AES-256-GCM symmetric encryption. Returns the ciphertext and nonce in hex.

Parameters (JSON Schema):
- key_hex (required): Hex-encoded 32-byte AES-256-GCM key
- plaintext_hex (required): Hex-encoded plaintext data

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden for behavioral disclosure. It does not mention side effects (e.g., non-determinism due to random nonce), auth requirements, rate limits, or error behavior. It only states the operation and return values.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences with no wasted words. However, it could be slightly more structured (e.g., separate lines for purpose and output) to improve readability.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description mentions the return values, which is necessary. However, it does not explain the nonce role, support for additional authenticated data (AAD), or possible error cases. For a simple encrypt tool, this is minimally adequate but lacks depth.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with clear descriptions for both parameters. The tool description adds context about the return format (ciphertext and nonce in hex), which is helpful but does not go beyond what the schema already provides in meaning.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'encrypt', the resource 'data', and the algorithm 'AES-256-GCM'. It also specifies the output format (ciphertext and nonce in hex). This distinguishes it from siblings like decrypt_data, seal_data, and hash functions.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as seal_data or other encryption tools. There is no mention of prerequisites, when not to use it, or typical use cases.

erc8004_decode_get_agent (A)

Decode the return data of an ERC-8004 IdentityRegistry.getAgent() eth_call into { registrationDataURI, owner }.
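The (string, address) return layout being decoded here can be illustrated with a stdlib round trip. The URI and owner values below are invented, and the helpers are a sketch of standard ABI dynamic-type layout, not the server's code:

```python
def encode_get_agent_return(uri: str, owner: str) -> str:
    """Build sample ABI return data for (string registrationDataURI, address owner)."""
    uri_bytes = uri.encode()
    padded = uri_bytes.ljust((len(uri_bytes) + 31) // 32 * 32, b"\x00")
    head = (0x40).to_bytes(32, "big")  # offset of the string tail (2 head words)
    head += bytes.fromhex(owner.removeprefix("0x")).rjust(32, b"\x00")
    tail = len(uri_bytes).to_bytes(32, "big") + padded
    return "0x" + (head + tail).hex()

def decode_get_agent_return(returndata: str) -> dict:
    """Decode { registrationDataURI, owner } from hex eth_call return data."""
    raw = bytes.fromhex(returndata.removeprefix("0x"))
    offset = int.from_bytes(raw[0:32], "big")
    owner = "0x" + raw[32:64][-20:].hex()          # address: last 20 bytes of the word
    str_len = int.from_bytes(raw[offset:offset + 32], "big")
    uri = raw[offset + 32:offset + 32 + str_len].decode()
    return {"registrationDataURI": uri, "owner": owner}

sample = encode_get_agent_return(
    "ipfs://example-agent-metadata", "0x2222222222222222222222222222222222222222"
)
decoded = decode_get_agent_return(sample)
```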

Parameters (JSON Schema):
- returndata (required): Hex-encoded return data from a getAgent(bytes32) eth_call

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses output structure ({ registrationDataURI, owner }) and input format (hex-encoded), though no error conditions mentioned.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words, front-loaded with key action.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple decode tool with one parameter and no output schema, the description adequately covers purpose and result structure.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no new detail beyond the input schema, which already describes the parameter. Schema coverage is 100%.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it decodes return data from ERC-8004 getAgent() into specific fields, distinguishing it from encoding siblings.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance, but the context implies it's for decoding getAgent() return data.

erc8004_derive_agent_id (A)

Derive a deterministic ERC-8004 agent id from an owner address and salt (keccak256 domain-separated). Matches the on-chain IdentityRegistry computation.
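The deterministic, domain-separated derivation described above can be sketched with the standard library, with two loud caveats: Python's hashlib.sha3_256 is NIST SHA-3, not Ethereum's keccak256 (the padding differs), and the domain tag below is invented. The sketch only illustrates the shape of the computation, not the actual on-chain result:

```python
import hashlib

def derive_agent_id(owner: str, salt_hex: str) -> str:
    """Illustrative domain-separated derivation of a 32-byte agent id.

    NOTE: sha3_256 stands in for keccak256 (they are NOT the same hash),
    and "TENZRO_AGENT_V1" is a hypothetical domain separator.
    """
    domain = b"TENZRO_AGENT_V1"
    owner_bytes = bytes.fromhex(owner.removeprefix("0x"))
    salt = bytes.fromhex(salt_hex.removeprefix("0x"))
    return "0x" + hashlib.sha3_256(domain + owner_bytes + salt).hexdigest()

owner = "0x3333333333333333333333333333333333333333"
id_a = derive_agent_id(owner, "0x" + "00" * 32)
id_b = derive_agent_id(owner, "0x" + "01" * 32)
```

Same inputs always produce the same id; changing the salt produces a different one, which is the property the description is promising.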

Parameters (JSON Schema):
- salt (required): 32-byte salt as hex (0x-prefixed, 64 hex chars)
- owner (required): Agent owner EVM address (0x-prefixed hex)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully relies on itself. It discloses deterministic behavior, the hashing algorithm, and domain separation, which gives good insight into the computation. However, it does not mention any side effects or response characteristics (e.g., error handling for invalid input).

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two concise sentences with no redundancy. It is front-loaded with the main action and efficiently adds the key detail about on-chain matching.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description does not specify the format or type of the derived agent id (e.g., hex string, bytes32). For a complete understanding, an agent would need to infer the output from the tool name and context. Lacks clarity on return value.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema coverage is 100% with clear descriptions for both parameters (owner address, salt format). The description adds no additional semantic value beyond what the schema already provides, so baseline 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'derive', the resource 'ERC-8004 agent id', and specifies inputs (owner address and salt) along with the hashing algorithm (keccak256 domain-separated). This distinguishes it from sibling encode tools and provides precise purpose.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for on-chain computation by stating it matches the IdentityRegistry, but it lacks explicit guidance on when to use this tool versus alternatives like erc8004_encode_get_agent or other derive/encode tools. No direct comparison or conditionals provided.

erc8004_encode_feedback (A)

ABI-encode an ERC-8004 ReputationRegistry.submitFeedback(agentId, score, feedbackAuthId, feedbackURI) call. Score is 0-100.

Parameters (JSON Schema):
- score (required): Feedback score (0-100)
- agent_id (required): Agent ID (bytes32 hex, 0x-prefixed)
- feedback_uri (required): Feedback URI (e.g. ipfs:// or https:// link to full feedback JSON)
- feedback_auth_id (required): Feedback auth ID (bytes32 hex, 0x-prefixed) — binds the feedback to a task/response pair

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the function signature and score range but omits the output type (encoded hex string) and any side effects. The description is factual but incomplete for a pure encoding tool.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two efficient sentences that front-load the purpose and include a critical detail (the score range). No extraneous words or redundant information.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having no output schema and no annotations, the description does not explain what the tool returns (e.g., ABI-encoded hex string). For a 4-parameter encoding tool, this omission significantly reduces completeness.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with each parameter having a description that includes format and constraints (e.g., bytes32 hex, score 0-100). The description adds the function signature but no new meaning beyond the schema. Baseline 3 applies.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool ABI-encodes an ERC-8004 ReputationRegistry.submitFeedback call with specific parameters, distinguishing it from siblings like erc8004_encode_register. The verb 'encode' and resource 'ERC-8004 call' are clear.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not specify when to use this tool versus alternatives (e.g., other encoders or submitting feedback directly). Usage context is implied but not explicit; no 'when not to use' guidance is provided.

erc8004_encode_get_agent (A)

ABI-encode an ERC-8004 IdentityRegistry.getAgent(agentId) call. Returns hex calldata for an eth_call lookup.

Parameters (JSON Schema):
- agent_id (required): Agent ID (bytes32 hex, 0x-prefixed)

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fully carries the burden. It accurately describes the pure encoding operation and the return type, and no side effects or special behaviors are expected for this type of tool.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences, front-loaded with the action and resource, containing no waste.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description is nearly complete. It could explicitly state the output format (0x-prefixed hex), but the schema and overall context make it clear enough.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage and already documents the agent_id parameter as 'Agent ID (bytes32 hex, 0x-prefixed)'. The description adds no additional meaning beyond mentioning the function name, so it meets the baseline but does not exceed it.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'encode', the specific resource 'ERC-8004 IdentityRegistry.getAgent(agentId) call', and the output 'hex calldata'. It distinguishes itself from the sibling 'erc8004_decode_get_agent' by specifying encoding.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains that the output is for 'an eth_call lookup', indicating off-chain use. It does not explicitly state when not to use it or list alternatives, but the purpose is clear enough for a specialized encoding tool.

erc8004_encode_register (A)

ABI-encode an ERC-8004 IdentityRegistry.register(agentId, registrationDataURI, owner) call. Returns hex calldata for signing/broadcasting on any EVM chain.
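Unlike the static-only case, register(bytes32,string,address) mixes static and dynamic arguments, so the encoding needs an offset/tail layout. A hedged stdlib sketch of the argument encoding only (the 4-byte selector is omitted because it requires keccak256, which the stdlib lacks; all values are invented):

```python
def encode_register_args(agent_id: str, uri: str, owner: str) -> str:
    """ABI-encode the arguments of register(bytes32,string,address).

    Head: three 32-byte words (agentId, offset to string tail, address).
    Tail: string length word followed by the UTF-8 bytes padded to 32.
    The function selector is deliberately omitted.
    """
    uri_bytes = uri.encode()
    padded = uri_bytes.ljust((len(uri_bytes) + 31) // 32 * 32, b"\x00")
    head = bytes.fromhex(agent_id.removeprefix("0x"))           # bytes32
    head += (0x60).to_bytes(32, "big")                          # tail offset (3 head words)
    head += bytes.fromhex(owner.removeprefix("0x")).rjust(32, b"\x00")
    tail = len(uri_bytes).to_bytes(32, "big") + padded
    return "0x" + (head + tail).hex()

args = encode_register_args(
    "0x" + "ab" * 32,
    "ipfs://agent.json",
    "0x4444444444444444444444444444444444444444",
)
```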

Parameters (JSON Schema):
- owner (required): Agent owner EVM address (0x-prefixed hex)
- agent_id (required): Agent ID (bytes32 hex, 0x-prefixed)
- registration_data_uri (required): Registration data URI (e.g. ipfs:// or https:// link to agent metadata JSON)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must supply behavioral context. It reveals the tool is a pure encoding operation (returns calldata, no side effects), but does not mention error handling, prerequisites (e.g., need for contract address), or any constraints. The output format is stated, but depth is limited.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two concise sentences that immediately convey the tool's purpose, parameters, and output. No redundant information; each sentence adds value. Front-loaded with key action and resource.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description explains the return value (hex calldata) adequately. It could be more complete by linking the output to downstream tools (e.g., sign_message or send_transaction) or mentioning that the calldata is ready for broadcast. However, the essence is sufficiently conveyed for an agent familiar with EVM interactions.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers all three required parameters with descriptions, achieving 100% schema coverage. The description merely lists the same parameter names without adding extra meaning or usage nuances (e.g., format of registrationDataURI). Baseline score of 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (ABI-encode), the specific contract function (ERC-8004 IdentityRegistry.register), and the output (hex calldata). It explicitly lists the three parameters, which distinguishes it from sibling encode tools that target different functions.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates the output is for signing/broadcasting on any EVM chain, implying use cases like preparing transactions. However, it does not compare against other encode tools (e.g., erc8004_encode_feedback) or provide when-not-to-use guidance, leaving the agent to infer context from the name alone.

erc8004_encode_request_validation (A)

ABI-encode an ERC-8004 ValidationRegistry.requestValidation(agentId, validatorId, requestURI, dataHash) call. Returns hex calldata.

Parameters (JSON Schema):
- agent_id (required): Agent ID being validated (bytes32 hex, 0x-prefixed)
- data_hash (required): Data hash being validated (bytes32 hex, 0x-prefixed)
- request_uri (required): Validation request URI
- validator_id (required): Validator agent ID (bytes32 hex, 0x-prefixed)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description states that the tool returns hex calldata, which implies a pure computation with no side effects. However, with no annotations, explicitly noting that the operation is read-only and stateless would aid agent understanding.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (two short sentences) and immediately states the purpose and output. No unnecessary words or repetition.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For an encoding tool with no output schema and no annotations, the description covers the essential aspects: what function is encoded, parameters, and output format. Could mention that the output is Ethereum-compatible calldata, but the current text is sufficient.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema provides 100% coverage with descriptions for all parameters. The description adds value by linking the parameters to the function signature and clarifying their order, which is not explicit in the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action: ABI-encode a specific ERC-8004 function call. It names the function (requestValidation) and lists its parameters, distinguishing it from other encode tools in the sibling list.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus other encoding tools like erc8004_encode_register or erc8004_encode_submit_validation. The context of usage is only implied by the function name.

erc8004_encode_submit_validation (C)

ABI-encode an ERC-8004 ValidationRegistry.submitValidation(dataHash, response, responseURI, tag) call. Response is 0-100 quality score.

Parameters (JSON Schema):
- tag (required): Tag (bytes32 hex, 0x-prefixed) — domain separator
- response (required): Validation response code (0-255, domain-specific)
- data_hash (required): Data hash that was validated (bytes32 hex, 0x-prefixed)
- response_uri (required): Response URI (link to full validation report)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description should disclose behavioral traits. It does not mention that this is a pure encoding function (no state mutation), nor any security or prerequisites. The description is insufficient for an agent to understand side effects or requirements.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences, no redundant information. It is appropriately front-loaded and easy to read, though it could be slightly more structured.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It does not explain the return value (ABI-encoded bytes) or any necessary context like contract address invocation. An agent would lack critical information to use the output effectively.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds some context by stating that response is a quality score (0-100), but this partly contradicts the schema's 0-255 range. It does not significantly enhance understanding beyond the schema.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies that the tool ABI-encodes a ValidationRegistry.submitValidation call and lists the parameters. However, the mention 'Response is 0-100 quality score' slightly contradicts the schema which allows 0-255, causing minor confusion. It distinguishes from sibling encode tools by naming the specific function.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus other erc8004_encode_* siblings, nor does it mention exclusions or prerequisites. It only states its function without usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

exchange_tokenAInspect

RFC 8693 OAuth 2.0 Token Exchange. Mint a narrower child JWT from a parent JWT, bound to a different DPoP key with a strict subset of the parent's RAR grants and AAP capabilities. The child token's controller_did is set to the parent's sub, extending the act-chain by one hop. Subset enforcement is performed by the AS; over-scoped requests are rejected.

ParametersJSON Schema
NameRequiredDescriptionDefault
requested_rarYesRFC 9396 typed scope envelope the child should carry. Must be a strict subset of the parent's authorization_details. JSON object with `authorization_details: [...]` field.
subject_tokenYesParent JWT (the `subject_token` per RFC 8693). The AS validates its signature, exp, and revocation state before issuing the child token.
child_dpop_jktYesRFC 7638 JWK thumbprint of the child holder's Ed25519 public key. The child token is DPoP-bound to this key.
child_bearer_didYesDID that will be the `sub` of the new child JWT — typically a delegated agent or sub-agent in the act-chain
requested_ttl_secsNoOptional TTL override for the child token in seconds. Clamped to the engine's max_ttl_secs and the parent's remaining lifetime.
requested_aap_capabilitiesYesAAP `aap_capabilities` claim list — per-action constraints layered over RAR. Must be a subset of the parent's capabilities.
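The `child_dpop_jkt` parameter cites RFC 7638, which for an Ed25519 (OKP) key hashes the canonical JSON of the required members `crv`, `kty`, and `x` in lexicographic order. A minimal sketch of that computation, using the public key from RFC 8037's appendix test vector as the `x` value:

```python
import base64
import hashlib
import json

def jwk_thumbprint_ed25519(x_b64url: str) -> str:
    """RFC 7638 thumbprint of an OKP/Ed25519 JWK: SHA-256 over the
    canonical JSON of the required members, base64url without padding."""
    canonical = json.dumps(
        {"crv": "Ed25519", "kty": "OKP", "x": x_b64url},
        separators=(",", ":"),
        sort_keys=True,
    )
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Public key `x` value from the RFC 8037 appendix test vector
jkt = jwk_thumbprint_ed25519("11qYAYKxCrfVS_7TyWQHOg7hcvPapiMlrwIaaPcHURo")
```

The 32-byte SHA-256 digest base64url-encodes to 43 characters once padding is stripped, which is the shape the AS would expect in this field.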
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses key behaviors: subset enforcement by AS, rejection of over-scoped requests, act-chain extension, and child token's controller_did. This goes beyond typical descriptions, though it lacks error details and side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (a few sentences) and front-loaded with the RFC reference. Each sentence adds value, though it could be slightly more structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema is provided, and the description does not mention the return value (the child JWT). This is a missing piece for an agent to know what to expect. However, the tool's complexity is well covered otherwise.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description provides context about subset relationships and enforcement, but does not add significant meaning beyond the schema's parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly names 'RFC 8693 OAuth 2.0 Token Exchange' and states the specific action: mint a narrower child JWT from a parent JWT with a different DPoP key and subset of grants. This distinguishes it from sibling tools like introspect_token and others.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use (to create a child token with subset of permissions) and implies when not (over-scoped requests are rejected). It provides clear context but does not explicitly mention alternative sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

export_keystoreAInspect

Export a wallet's keystore as an encrypted JSON file. Uses Argon2id KDF for key derivation. The exported keystore can be imported on another node.

ParametersJSON Schema
NameRequiredDescriptionDefault
passwordYesPassword to encrypt the keystore file
wallet_idYesWallet ID (UUID) to export
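The exported keystore's exact format is not documented; the following sketch only illustrates the password-to-key derivation shape and a plausible JSON envelope. Argon2id is not in the Python standard library, so scrypt stands in for it here, and every field name in the envelope is an assumption, not the server's actual schema:

```python
import hashlib
import json
import os

def derive_keystore_key(password: str, salt: bytes) -> bytes:
    # Stand-in KDF: the server advertises Argon2id, which is not in the
    # Python stdlib; scrypt shows the same password -> key derivation shape.
    return hashlib.scrypt(password.encode("utf-8"), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

# Hypothetical keystore envelope -- real field names are unspecified.
salt = os.urandom(16)
keystore = {
    "wallet_id": "00000000-0000-0000-0000-000000000000",  # example UUID
    "kdf": "argon2id",       # what the server says it uses
    "kdf_salt": salt.hex(),
    "ciphertext": "...",     # encrypted private key material would go here
}
key = derive_keystore_key("correct horse battery staple", salt)
blob = json.dumps(keystore)
```

Importing on another node would reverse the process: re-derive the key from the password and the stored salt, then decrypt the ciphertext.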
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, but the description discloses the use of the Argon2id KDF, adding some behavioral context. However, it lacks details on side effects, permissions, and error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no filler; key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

A simple, low-complexity tool; the description covers the purpose, the KDF detail, and the reimport use case, missing only minor behavioral details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and descriptions are present; the tool description adds minimal extra meaning beyond the schema for parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Export a wallet's keystore as an encrypted JSON file', including specific verb and resource. Distinguishes from sibling 'import_keystore' tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage for backup/migration by noting the keystore can be imported on another node, but no explicit when-to-use or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

file_insurance_claimAInspect

File an insurance claim against a bonded agent (Agent-Swarm Spec 9). The claim enters Open status awaiting governance adjudication; payout (if approved) is settled by a separate PayInsuranceClaim transaction. Returns the full ClaimRecord including the deterministic claim_id.

ParametersJSON Schema
NameRequiredDescriptionDefault
nonceYesNonce used to derive a deterministic claim_id
narrativeNoOptional narrative describing the harm (capped to 1024 bytes)
claimant_didYesClaimant DID (the harmed party)
receipt_refsNoReceipt references: tx hashes, settlement ids, log refs
amount_requestedYesRequested payout amount in wei (decimal string)
claimant_addressYesClaimant wallet address (hex). Receives payout if approved.
against_agent_didYesDID of the bonded agent the claim is filed against
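The spec's actual claim_id derivation is not shown here, but a deterministic id from a nonce typically means hashing the claim's identifying fields together with the nonce, so identical inputs always produce the same id. A purely illustrative construction (not the server's real scheme):

```python
import hashlib

def derive_claim_id(claimant_did: str, against_agent_did: str,
                    nonce: int) -> str:
    # Hypothetical derivation: a fixed serialization of the identifying
    # fields plus the nonce yields a deterministic, collision-resistant id.
    preimage = f"{claimant_did}|{against_agent_did}|{nonce}".encode("utf-8")
    return hashlib.sha256(preimage).hexdigest()

a = derive_claim_id("did:tenzro:human:alice", "did:tenzro:machine:bot", 7)
b = derive_claim_id("did:tenzro:human:alice", "did:tenzro:machine:bot", 7)
c = derive_claim_id("did:tenzro:human:alice", "did:tenzro:machine:bot", 8)
```

Determinism is what lets a client safely retry a filing: resubmitting the same fields and nonce maps to the same claim_id instead of opening a duplicate claim.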
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the burden. It discloses the claim lifecycle (Open status, separate payout) and the deterministic claim_id. However, it does not mention authorization requirements, rate limits, or any destructive potential beyond creating a new entity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, each serving a distinct purpose: core action, process detail, return value. It is front-loaded with the key verb and resource and contains no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description correctly notes the return of a 'full ClaimRecord' with a deterministic claim_id. It explains the initial status and the need for a separate payout transaction. However, it could provide more detail on post-filing steps or edge cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters. The description adds no extra parameter-level information beyond what is in the schema. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('File an insurance claim'), the resource ('against a bonded agent'), and the protocol reference ('Agent-Swarm Spec 9'). It distinguishes the tool from siblings by specifying the exact domain and context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains what happens when the tool is used (status becomes Open, separate payout transaction), but it does not explicitly state when to use it vs. alternatives or provide prerequisites. No sibling tool appears to be an obvious alternative, lowering the need for explicit contrast.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forecastAInspect

Run a univariate timeseries forecast on a registered model. Pass history (most-recent-last context series), horizon (steps ahead), and optional quantiles (e.g. [0.1, 0.5, 0.9]) and frequency_seconds. Returns point forecasts and (when supported) quantile bands.

ParametersJSON Schema
NameRequiredDescriptionDefault
historyYesUnivariate context series (most-recent-last). Non-empty.
horizonYesForecast horizon (steps ahead). Must be > 0.
model_idYesRegistered forecast model id (e.g. 'chronos-bolt-base', 'chronos-2', 'timesfm-2.5', 'granite-ttm-r2')
quantilesNoOptional output quantile levels in (0,1) (e.g. [0.1, 0.5, 0.9]). Defaults to model-native quantiles.
frequency_secondsNoOptional sampling frequency in seconds (used by frequency-aware models like Granite-TTM)
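As a mental model of the response shape (point forecasts plus optional quantile bands), here is a naive persistence baseline in plain Python. The real models (Chronos, TimesFM, Granite-TTM) are far more sophisticated, and the response field names below are assumptions rather than the server's documented schema:

```python
import statistics

def naive_forecast(history, horizon, quantiles=None):
    # Persistence baseline: repeat the last observation `horizon` times,
    # with quantile bands sized by the spread of one-step changes.
    last = history[-1]
    result = {"point_forecast": [last] * horizon}
    if quantiles:
        diffs = [b - a for a, b in zip(history, history[1:])] or [0.0]
        spread = statistics.pstdev(diffs)
        result["quantile_forecast"] = {
            str(q): [last + spread * (q - 0.5) * 2] * horizon
            for q in quantiles
        }
    return result

out = naive_forecast([10.0, 11.0, 10.5, 11.5], horizon=3,
                     quantiles=[0.1, 0.5, 0.9])
```

Note the history is most-recent-last, matching the tool's contract, and the 0.5 quantile coincides with the point forecast in this toy model.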
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully convey behavioral traits. It explains that the tool returns point forecasts and quantile bands, and that quantile support may vary ('when supported'). This informs the agent about conditional behavior. However, it does not discuss auth requirements, rate limits, or potential errors like model unavailability. The mention of 'frequency-aware models' adds useful context for model-specific behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is only two sentences, immediately stating the main purpose. Every clause earns its place: the first sentence states the action and lists key parameters; the second describes the output. No redundant or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given there is no output schema, the description adequately covers return values (point forecasts and quantile bands). All five parameters are mentioned, and required vs. optional status is clear. The description could mention that 'model_id' should be selected from available models (via sibling tools), but this is implied. Overall, it provides sufficient information for an agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds value by explaining the ordering of 'history' (most-recent-last), the role of 'horizon' (steps ahead), and that 'quantiles' are optional and default to model-native quantiles. It also clarifies that 'frequency_seconds' is 'used by frequency-aware models like Granite-TTM', which is not in the schema. This extra context helps an agent choose appropriate values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Run a univariate timeseries forecast') and identifies the resource ('registered model'), clearly stating the tool's function. It lists required and optional parameters, leaving no ambiguity about what the tool does. In a server with many forecast-related siblings (e.g., list_forecast_models, unload_forecast_model), the description uniquely identifies this as the execution tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates when to use the tool (after selecting a model) by referencing 'registered model' and the optional frequency_seconds for frequency-aware models. However, it does not explicitly state when not to use it or mention alternatives like 'list_forecast_models' for model selection. Usage context is implied but not fully articulated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget_identityAInspect

TDIP/GDPR Article 17 right-to-erasure. Hard-deletes a previously revoked identity from the registry and persistent storage. The DID must already be in Revoked status — call revoke_did (RPC) first, allow cascading revocation to propagate, then call this. Distinct from revoke (logical delete).

ParametersJSON Schema
NameRequiredDescriptionDefault
didYesDID to resolve (e.g. did:tenzro:human:uuid or did:tenzro:machine:controller:uuid)
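The precondition the description insists on (Revoked status before hard deletion) amounts to a state guard on the registry. The states, error type, and registry shape below are illustrative assumptions, not the server's actual implementation:

```python
class NotRevokedError(Exception):
    pass

# Toy registry: DID -> lifecycle status (illustrative states)
registry = {"did:tenzro:human:abc": "Active"}

def revoke_did(did: str) -> None:
    registry[did] = "Revoked"          # logical delete

def forget_identity(did: str) -> None:
    if registry.get(did) != "Revoked":
        raise NotRevokedError(f"{did} must be Revoked before erasure")
    del registry[did]                  # hard delete (GDPR Art. 17)

# The ordering the description mandates: revoke first, then forget.
revoke_did("did:tenzro:human:abc")
forget_identity("did:tenzro:human:abc")
```

Calling forget_identity on an Active DID would raise, which mirrors the documented requirement to call revoke_did first and let cascading revocation propagate.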
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses hard-deletion and requirement for prior revocation. No annotations provided, so description carries full burden. Missing details on authentication, error outcomes, or effect on related data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Short and dense, with all essential information front-loaded. No unnecessary words. Efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple deletion tool with one parameter. Covers purpose, precondition, and differentiation. Lacks details on return value or side effects, which would be helpful for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers the 'did' parameter fully. The description adds critical context that the DID must be in 'Revoked' status, which is not in the schema. Baseline of 3, plus 1 for the added meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool performs hard-deletion of a revoked identity, referencing GDPR Article 17. It distinguishes from logical delete ('revoke'), making the purpose unmistakable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly instructs that the DID must be in 'Revoked' status and to call 'revoke_did' first. Contrasts with 'revoke' tool. Lacks explicit 'when not to use', but the precondition effectively guides usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

freeze_addressAInspect

Freeze an address for a specific token under ERC-3643 compliance. A frozen address cannot send or receive the specified token. Returns the freeze record.

ParametersJSON Schema
NameRequiredDescriptionDefault
reasonYesReason for freezing the address
addressYesHex-encoded address to freeze
token_idYesToken ID (hex, 32 bytes) or symbol
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It discloses the core behavioral effect (the address cannot send or receive the token) and that it returns a freeze record, but it does not state whether the action is reversible, what permissions are required, or what side effects exist beyond the stated constraint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the action and compliance context. Every word contributes meaning: action, resource, effect, and return value. No extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description mentions a return record but does not detail its structure. Lacks prerequisites (e.g., token existence, not already frozen) and cross-references to related tools. Adequate but not comprehensive for a mutation tool with no annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with each parameter adequately described. The description adds no further parameter semantics beyond hinting at the return value. Baseline score of 3 applies as schema already provides sufficient detail.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Freeze'), the resource (address for a specific token), the compliance context (ERC-3643), and the effect (cannot send/receive). It distinctly defines what the tool does without ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for compliance freezing but does not provide explicit when-to-use or when-not-to-use guidelines. No alternatives or exclusions are mentioned, relying on the agent to infer context from the compliance framing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fund_user_walletBInspect

Fund a user wallet from the app's master wallet. Transfers TNZO from the master address to the user address.

ParametersJSON Schema
NameRequiredDescriptionDefault
amountYesAmount of TNZO to transfer (e.g. 5.0)
user_addressYesHex-encoded user wallet address (destination)
master_addressYesHex-encoded master wallet address (source of funds)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should detail behavioral traits. It only states the transfer action without revealing prerequisites (e.g., sufficient master balance), outcome (e.g., transaction hash), or error cases. This leaves significant gaps for an AI agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, immediately states the purpose, and has no unnecessary words. It is efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite low complexity, the description lacks completeness for a financial transfer tool. It omits expected behavior like balance checks, fees, return value, and error handling, which are crucial for agent decision-making with no output schema or annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds no extra meaning beyond the schema's parameter descriptions, which are already adequate. No unique semantic value from the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'fund' and identifies the resource (user wallet from master wallet) and token (TNZO). It distinguishes itself from sibling tools like 'transfer_nft' or 'send_transaction' by specifying the source and destination roles.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for funding a user wallet from the master wallet but provides no explicit when-to-use or when-not-to-use instructions, nor mentions alternatives like 'send_transaction' or 'transfer_nft'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_keypairCInspect

Generate a new cryptographic keypair. Returns the public key, private key, and derived address in hex.

ParametersJSON Schema
NameRequiredDescriptionDefault
key_typeYesKey type: 'ed25519' or 'secp256k1'
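The server does not document how the hex address is derived from the public key. One common pattern, shown here purely as an assumption, is to hash the public key and keep the trailing 20 bytes:

```python
import hashlib
import secrets

def derive_address(public_key: bytes) -> str:
    # Hypothetical derivation: last 20 bytes of SHA-256(pubkey), hex-encoded.
    # The actual scheme used by generate_keypair is not documented.
    return hashlib.sha256(public_key).digest()[-20:].hex()

# Stand-in for a freshly generated 32-byte Ed25519 public key
public_key = secrets.token_bytes(32)
address = derive_address(public_key)
```

Whatever the real scheme is, an agent should treat the returned address as opaque and use it as-is rather than re-deriving it.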
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses return values but lacks information about side effects, determinism, security implications, or required permissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Brief and efficient. No extraneous information. Could benefit from a structured format, but acceptable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description covers the main purpose and return values. However, it omits details such as the key encoding and the address derivation algorithm. Adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (one parameter documented). The description adds no additional meaning beyond the schema. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the action 'Generate' and the resource 'cryptographic keypair', and specifies the output. It does not explicitly differentiate from sibling tools like 'derive_key', but the purpose is unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., 'derive_key'). No context on prerequisites or preferred scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_vrf_proofAInspect

Generate a Tenzro VRF proof (RFC 9381 ECVRF-EDWARDS25519-SHA512-TAI). Returns the public key, 80-byte proof, and 64-byte deterministic output. Do not use with secret inputs — the try-and-increment encoding is data-dependent.

ParametersJSON Schema
NameRequiredDescriptionDefault
alphaYesHex-encoded input message (alpha). Use public data: block hash, request ID, NFT mint nonce
secret_keyYesHex-encoded 32-byte VRF secret key (Ed25519-compatible seed)
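The warning about secret inputs stems from try-and-increment (TAI) hash-to-curve: the encoder hashes alpha with an incrementing counter until the digest decodes to a valid curve point, so the iteration count, and therefore timing, depends on the input. A toy model with a stand-in acceptance predicate (real ECVRF checks point validity on edwards25519, which this deliberately does not implement):

```python
import hashlib

def tai_iterations(alpha: bytes, max_tries: int = 256) -> int:
    # Toy model of try-and-increment: accept a digest when its first byte
    # is even, a stand-in for "decodes to a valid curve point" (true for
    # roughly half of candidate encodings on edwards25519).
    for ctr in range(max_tries):
        digest = hashlib.sha512(alpha + bytes([ctr])).digest()
        if digest[0] % 2 == 0:
            return ctr + 1  # iteration count varies with alpha
    raise RuntimeError("no acceptable encoding found")

counts = {a: tai_iterations(a) for a in (b"block-hash-1", b"req-42", b"nonce")}
```

Because the loop length leaks information about alpha through timing, the description's advice to use only public data (block hashes, request IDs, mint nonces) is sound.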
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the algorithm uses data-dependent encoding and warns against secret inputs, which is good. However, it does not mention whether the tool is read-only or has side effects, nor does it address authentication or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences packed with essential information: first sentence covers purpose, algorithm, and returns; second sentence provides critical usage warning. No wasted words; front-loaded with key details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations or output schema, the description explains the return format (public key, proof, output) and includes algorithm specifics and a data-dependent encoding warning. It is relatively complete for a two-parameter tool, though it could mention that the operation is a pure computation without state changes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with both parameters described. The description adds extra context for 'alpha' by suggesting valid public data types (block hash, request ID, NFT mint nonce). This goes beyond the schema, providing actionable guidance for parameter selection.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it generates a Tenzro VRF proof, specifies the exact algorithm (RFC 9381 ECVRF-EDWARDS25519-SHA512-TAI), and lists the return components (public key, 80-byte proof, 64-byte deterministic output). This is highly specific and distinguishes the tool from siblings like verify_vrf_proof.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides an important usage warning ('Do not use with secret inputs — the try-and-increment encoding is data-dependent'), but does not explicitly state when to use this tool versus alternatives (e.g., verify_vrf_proof or generate_keypair). The guidance is useful for safety but lacks comparative context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_bondAInspect

Inspect an AgentBond by agent DID. Returns lifecycle state (Active / Cooldown / Frozen / Slashed / Returned), amount, controller, cooldown_until, last_modified_block, and promotion eligibility. Returns null if no bond exists.

Parameters (JSON Schema)
agent_did (required): Agent DID to look up

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It lists return fields and mentions the null return, but fails to disclose whether the operation is read-only, what permissions are required, or whether it has side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, no unnecessary words, and critical information is front-loaded—immediately states the purpose and lists return values.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple lookup tool with one parameter, the description adequately covers what is returned and the null case, but lacks any usage context or prerequisites like authentication.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter described as 'Agent DID to look up'. The description adds no extra meaning beyond the schema, meriting the baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Inspect') and resource ('AgentBond') and explicitly distinguishes from sibling tools like post_agent_bond by stating the lookup by agent DID.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this tool is for reading bond information, but does not provide explicit guidance on when to use it versus alternatives like post_agent_bond or other get tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_jwk (Grade A)

Resolve a single agent JWK by RFC 9421 keyid. Accepts did:tenzro:... (first compatible key) or did:tenzro:...#fragment (specific key). Mirrors GET /.well-known/jwks.json/:keyid.

Parameters (JSON Schema)
keyid (required): RFC 9421 keyid — `did:tenzro:...` (first compatible key) or `did:tenzro:...#fragment` (specific key)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It implies a read-only operation by 'Resolve' and 'Mirrors GET', but does not explicitly confirm idempotency, side effects, or authentication requirements. The description is adequate but minimal for behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the purpose and following with input format and reference. Every sentence adds value, no filler, and it is efficiently structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description is complete. It explains what it does, how to specify the keyid, and references the standard endpoint. No critical information is missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, with the parameter description repeating the tool description. The tool description adds no new semantic value beyond the schema, as both explain the two forms of keyid. Baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it resolves a single agent JWK by keyid, specifying accepted formats (did:tenzro... or with fragment). However, it does not explicitly differentiate from the sibling tool 'list_agent_jwks', which likely lists all JWKs, so the distinction is implied but not stated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the input format (keyid as did:tenzro or with fragment) and references a standard GET endpoint. It does not provide explicit guidance on when to use this tool versus alternatives like list_agent_jwks, nor does it mention any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_template (Grade A)

Get details about a specific agent template in the Tenzro Agent Marketplace. Returns template configuration, capabilities, pricing, and usage statistics.

Parameters (JSON Schema)
template_id (required): Agent template ID to retrieve

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description implies read-only operation but does not detail any behavioral traits (e.g., auth requirements, rate limits). Adequate for a simple getter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, front-loaded with purpose, no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (1 param, no output schema), the description is complete and sufficient for an agent to understand and use the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter (template_id) is fully described in the schema; the tool description adds no additional meaning. Baseline 3 due to high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves details of a specific agent template, listing what it returns (configuration, capabilities, pricing, statistics). It distinguishes from siblings like list/search/download.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this vs alternatives, but the context implies it's for fetching specific template details rather than browsing lists.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_template_stats (Grade A)

Get statistics for an agent template on the Tenzro Network marketplace. Returns total spawns, average rating, and total number of ratings.

Parameters (JSON Schema)
template_id (required): Agent template ID to get stats for

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description clearly states what the tool returns (three specific statistics), implying it is a read-only operation. No behavioral quirks or side effects are mentioned, but for a straightforward stats retrieval, this is sufficient and does not contradict any annotations (none provided).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the core action and outcome. Every word adds value; there is no redundancy or unnecessary detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one required parameter, no output schema, and no annotations, the description is complete. It specifies the return fields and the action, leaving no ambiguity about what the tool does.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for 'template_id'. The tool description adds no additional meaning beyond the schema, so it meets the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving statistics for an agent template, specifically naming the returned fields (total spawns, average rating, total number of ratings). This distinguishes it from related tools like 'list_agent_templates' or 'rate_agent_template'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when to use this tool versus alternatives (e.g., 'list_agent_templates'). The use case is intuitive for viewing stats, but no when-not-to-use or prerequisite information is given, which would be helpful given the large number of sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_balance (Grade B)

Get the TNZO token balance of an account on the Tenzro ledger

Parameters (JSON Schema)
address (required): Hex-encoded account address (with or without 0x prefix)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden. It states only that it gets a balance, with no disclosure of behavioral traits such as read-only nature, authentication needs, or side effects. The read-only aspect is implied but not explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no unnecessary words, directly states the tool's purpose. Perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists and the description omits return format (e.g., balance value, unit). While the tool is simple, an agent lacks full context for expected output, making it incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for 'address'. The tool description adds no extra semantic value beyond what the schema provides, so baseline of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the verb 'Get', the resource 'TNZO token balance', and the scope 'of an account on the Tenzro ledger'. It clearly distinguishes from sibling tools like 'get_token_balance' and 'token_balance' by specifying the token type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for querying TNZO balance but provides no explicit guidance on when to use this tool versus alternatives, nor any prerequisites or exclusions. The context is minimal.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
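
The get_balance schema accepts a hex address "with or without 0x prefix". A client-side normalization sketch, under the assumption that the server treats the two forms identically (helper name is ours):

```python
def normalize_address(address: str) -> str:
    """Normalize a hex account address to a canonical 0x-prefixed, lowercase form."""
    addr = address.lower()
    if addr.startswith("0x"):
        addr = addr[2:]
    int(addr, 16)  # raises ValueError if the string is not valid hex
    return "0x" + addr
```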

get_block (Grade A)

Get a block by height from the Tenzro ledger, including transactions and metadata

Parameters (JSON Schema)
height (required): Block height to retrieve

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It only indicates a read operation via the verb 'get' but does not mention authentication, rate limits, or potential side effects. The burden is unmet.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, 13 words, front-loading the core purpose. Every word earns its place, with no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple one-parameter tool and no output schema, the description adequately states what is included ('transactions and metadata') but could briefly mention return structure (e.g., block hash, timestamp) for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for the 'height' parameter. The tool description adds no additional meaning beyond what the schema already provides, so baseline is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get'), the resource ('a block by height'), and the source ('from the Tenzro ledger'), with specific content ('including transactions and metadata'). This distinguishes it from sibling tools like 'get_block_range' and 'get_transaction'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for retrieving a single block by height but does not explicitly provide when to use it vs alternatives like 'get_block_range' or 'get_transaction'. The context is clear but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_block_range (Grade A)

Fetch a contiguous range of blocks by height (default 64, max 256). Returns block summaries plus next_height and more_available so a lagging client can paginate forward until caught up.

Parameters (JSON Schema)
start_height (required): First block height to fetch (inclusive)
end_height (required): Last block height to fetch (inclusive)
max_results (optional): Maximum number of blocks to return (default 64, capped at 256)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses return format (block summaries, pagination fields) and range constraints (64 default, 256 max). No annotations provided, but description adequately covers the fetching behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no filler, front-loaded with action and parameter limits, then adds return value context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Explains the pagination mechanism and return values, satisfying the core use case. It lacks error-handling and prerequisite details, but is sufficient for a straightforward fetch tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters with descriptions; the tool description reiterates the default and max but adds minimal extra meaning beyond the schema details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it fetches a contiguous block range by height with default and max limits. It is implicitly distinguished from the sibling get_block (single block) but does not explicitly contrast the two.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Describes default/max block count and pagination behavior (next_height, more_available) but does not specify when to prefer this over alternatives like get_block or how to handle edge cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
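
The get_block_range pagination contract described above (next_height plus more_available, with max_results capped at 256) can be sketched as a catch-up loop. `call_tool` stands in for a real MCP client invocation; the response field names are taken from the tool description, the rest is our assumption:

```python
def fetch_all_blocks(call_tool, start_height: int, end_height: int, page_size: int = 64):
    """Drain get_block_range from start to end using next_height/more_available."""
    blocks = []
    cursor = start_height
    while True:
        page = call_tool("get_block_range", {
            "start_height": cursor,
            "end_height": end_height,
            "max_results": min(page_size, 256),  # server caps at 256
        })
        blocks.extend(page["blocks"])
        if not page["more_available"]:
            return blocks
        cursor = page["next_height"]             # resume where the server left off
```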

get_bridge_routes (Grade A)

Get available bridge routes between two chains, including estimated fees, time, and which adapter handles the route

Parameters (JSON Schema)
source_chain (required): Source chain (e.g. 'tenzro', 'ethereum', 'solana')
dest_chain (required): Destination chain

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It mentions that the tool returns estimated fees, time, and which adapter handles the route, but does not clarify if it is read-only, idempotent, or has side effects. Some behavioral context is provided but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence. It front-loads the main action and includes key details without unnecessary content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 simple parameters and no output schema, the description does a good job summarizing what is returned (routes with fees, time, adapter). However, it could be more explicit about the output structure, pagination, or error handling to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for both parameters (source_chain and dest_chain). The description adds an example for source_chain (e.g., 'tenzro'), which adds marginal value beyond the schema. Baseline score is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'available bridge routes', and specifies included information (estimated fees, time, adapter). It effectively distinguishes this from sibling tools like bridge_quote and bridge_tokens.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for fetching route options before bridging, but does not explicitly state when to use this tool versus alternatives (e.g., when to use bridge_quote). No exclusion criteria or when-not-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_download_progress (Grade A)

Check the download progress of a model. Returns bytes downloaded, total size, percentage complete, and estimated time remaining.

Parameters (JSON Schema)
model_id (required): Model ID to check download progress for

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries the burden. It discloses the return fields (bytes, total size, percentage, time remaining), but does not mention error scenarios, prerequisites (ongoing download), or any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, front-loaded with the action and followed by output details. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter tool with no output schema, the description adequately covers the purpose and return values. However, it lacks context about when calling this is appropriate (e.g., after starting a download) and error handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (the single parameter is described). The description does not add extra parameter information beyond the schema; it restates the parameter's purpose. Baseline score 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Check' and the resource 'download progress of a model', listing specific return fields. It distinguishes this tool from siblings like download_model and stop_model.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. It is implied that this should be called after initiating a download, but no exclusions or conditions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_events (Grade A)

Query historical events from the Tenzro ledger. Supports cursor-based pagination. Filter by block range, event type, and involved addresses. Returns events ordered by sequence number.

Parameters (JSON Schema)
from_block (optional): Minimum block height
to_block (optional): Maximum block height
event_types (optional): Event types to filter: 'transfer', 'mint', 'burn', 'stake', 'bridge', 'settlement', 'nft_transfer', 'compliance' (returns all if omitted)
addresses (optional): Filter by involved addresses (hex)
from_sequence (optional): Start from a specific event sequence number for cursor-based pagination
limit (optional): Maximum number of events to return (default 50, max 200)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral details. It mentions ordering by sequence number and cursor-based pagination, which adds value, but does not disclose whether the operation is read-only, any rate limits, or data freshness. The description partially compensates for missing annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, with four short sentences that each add distinct value. It front-loads the core purpose and efficiently covers pagination, filtering, and ordering without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complex context with many sibling tools and no output schema, the description adequately covers the tool's functionality: querying, filtering, paginating, and ordering events. It could elaborate on return format, but the schema already details parameters well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and each parameter has a description. The description adds context about cursor-based pagination, ordering, and the overall filtering capability, providing meaning beyond what the schema alone conveys.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it queries historical events with filtering and pagination. However, it does not distinguish itself from similar query tools like get_block_range or get_transaction, which also operate on ledger data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving and filtering events but does not explicitly state when to use this tool versus alternatives. No when-not-to-use guidance or alternative recommendations are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
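
The cursor-based pagination of get_events (from_sequence, results ordered by sequence number, limit capped at 200) can be sketched as below. `call_tool` stands in for a real MCP client call, and we assume each returned event carries a `sequence` field; only the parameter names come from the schema above:

```python
def fetch_events(call_tool, event_types=None, limit=50):
    """Page through get_events using from_sequence as a cursor."""
    events, cursor = [], None
    while True:
        args = {"limit": min(limit, 200)}        # server caps limit at 200
        if event_types:
            args["event_types"] = event_types
        if cursor is not None:
            args["from_sequence"] = cursor
        page = call_tool("get_events", args)
        if not page:                             # empty page: caught up
            return events
        events.extend(page)
        cursor = page[-1]["sequence"] + 1        # resume after the last seen event
```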

get_fee_market (Grade A)

Inspect the EIP-1559 fee market: current effective gas price (base fee + suggested tip), suggested priority tip, and recent base-fee history. Use this to size maxFeePerGas / maxPriorityFeePerGas on Type-2 transactions. Base fee adjusts ±12.5% per block based on parent gas usage vs. the 15M target.

Parameters (JSON Schema)
blocks (optional): Number of recent blocks to summarize in fee history (1..=1024, default 10)
reward_percentiles (optional): Tip percentiles to request, e.g. [25.0, 50.0, 75.0]; pass [] or omit to skip percentile sampling

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses the key behaviors: it returns the effective gas price, suggested tip, and base-fee history, and notes that the base fee adjusts ±12.5% per block. No further behavioral disclosure is needed for a read-only inspection tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each adding value: purpose, usage guidance, and behavioral detail. No wasted words, front-loaded with the core function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Although there is no output schema, the description adequately explains what is returned. It is complete for a simple inspection tool, though the format of the fee history could be elaborated.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for blocks and reward_percentiles. The description adds no further parameter-specific information, meeting the baseline but not exceeding it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Inspect the EIP-1559 fee market' and lists specific data returned (gas price, tip, history). It differentiates from sibling tools which are mostly unrelated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives explicit guidance: 'Use this to size maxFeePerGas / maxPriorityFeePerGas on Type-2 transactions.' It does not mention when not to use or alternatives, but the context is sufficiently clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_key_shares (A)

Get the key share metadata for an MPC wallet. Returns the threshold, total shares, and share indices without exposing secret key material.

Parameters (JSON Schema):
- wallet_id (required): Wallet ID (UUID) to query key shares for
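As a quick illustration of how the returned metadata is typically consumed (standard t-of-n threshold semantics; `can_sign` is a hypothetical helper, not part of the server):

```python
def can_sign(threshold: int, available_share_indices: list[int]) -> bool:
    # In a t-of-n MPC scheme, a signature can be produced once at least
    # `threshold` distinct key shares participate; duplicate indices
    # do not count toward the threshold.
    return len(set(available_share_indices)) >= threshold
```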
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

States that the tool does not expose secret key material, which is a useful behavioral note. However, it does not disclose side effects, idempotency, error handling, or permissions required. With no annotations, more behavioral detail would be beneficial.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the purpose, followed by return value summary. No unnecessary words, each sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Describes return values (threshold, total shares, share indices) without an output schema. Missing details on error conditions or pagination, but adequate for a simple getter tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for wallet_id. The description adds no additional semantics beyond the schema, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Get the key share metadata for an MPC wallet', specifying the action (get), resource (key share metadata), and context (MPC wallet). Differentiates from sibling tools like create_mpc_wallet or rotate_keys by targeting metadata retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives (e.g., other wallet queries). Does not mention prerequisites (e.g., wallet existence) or conditions under which it should be avoided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_nft_info (A)

Get information about an NFT collection or a specific token within a collection. If token_id is provided, returns token-level details (owner, URI). If omitted, returns collection-level info (name, symbol, total supply).

Parameters (JSON Schema):
- token_id (optional): Token ID within the collection; if omitted, returns collection-level info
- collection_id (required): Collection ID (UUID)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description discloses behavior via two return modes (owner/URI vs. name/symbol/supply). It implies a read operation but does not explicitly state that it has no side effects or list permission requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: first states purpose, second explains conditional behavior. No redundant or extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given two parameters and no output schema, the description covers both modes and example return fields. Missing explicit error conditions or permissions, but adequate for a simple info tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (both params described), so baseline is 3. The description adds value by detailing what each mode returns (owner, URI vs name, symbol, total supply) and clarifying the optional token_id behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves info about an NFT collection or specific token, distinguishing between two modes based on token_id. It is specific with verb ('Get') and resource ('NFT collection or token'), and contrasts with sibling tools like list_nft_collections and get_token_info.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains conditional usage: use token_id for token-level details, omit for collection-level. It implies when to use, but does not explicitly compare to alternatives like list_nft_collections or get_token_info.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_node_status (A)

Get the current status of the Tenzro node including health, block height, peer count, uptime, and role

Parameters (JSON Schema): none

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavior. It states the tool retrieves current status (a non-destructive read) and lists returned fields. However, it does not mention potential side effects, authorization needs, or response format; noting these would improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single short sentence, front-loaded with the action and resource. No extraneous information; every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lists the fields to expect but omits the output format (e.g., JSON object). For a simple status tool with no input and no output schema, it is mostly adequate but could specify the data type or structure for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, and the description implicitly confirms no input is needed. Since schema coverage is 100%, the description adds value by listing the data fields. No parameter details are necessary, though an explicit note that the tool takes no parameters would be slightly better.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool retrieves the current status of a Tenzro node, listing specific metrics like health, block height, peer count, uptime, and role. This distinguishes it from sibling tools that focus on other entities (e.g., get_swarm_status, get_provider_stats).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for checking node health but does not explicitly state when to use this tool versus alternatives like get_swarm_status. No guidance on conditions or prerequisites is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_provider_pricing (A)

Get the current pricing configuration for a provider node. Returns the price per 1k tokens, minimum charge, and last updated timestamp.

Parameters (JSON Schema):
- provider_address (required): Hex-encoded provider address
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden for behavioral disclosure. It only describes outputs, not potential errors (e.g., provider not found), authentication needs, or whether the operation is safe. This lack of detail limits transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that directly states the purpose and outputs. It is concise, front-loaded, and free of unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple getter with one parameter and no output schema, the description covers the basics. However, it omits context such as prerequisites (the provider must exist), error conditions, and its read-only nature, leaving some gaps for a fully informed agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes the parameter as 'Hex-encoded provider address' (100% coverage). The description adds no extra meaning about the parameter, aligning with the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves pricing configuration for a provider node, then lists three specific return fields. This distinguishes it from sibling tool 'set_provider_pricing' which is for updating pricing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool versus alternatives. It is implied from context and sibling names (set_provider_pricing) that this is for reading, but no direct guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_provider_schedule (A)

Get the availability schedule for a provider node. Returns the configured hours, days, and timezone when the provider accepts inference requests.

Parameters (JSON Schema):
- provider_address (required): Hex-encoded provider address
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description states it returns schedule data, but does not explicitly confirm read-only nature, authentication needs, or error handling. Adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, front-loaded with purpose. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately describes what the tool returns for a simple get operation with one parameter. Lacks mention of possible errors or valid input formats, but not critical for clarity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with parameter description. Tool description adds no extra meaning beyond what schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'Get' and resource 'availability schedule for provider node'. Mentions return values (hours, days, timezone). Distinguishes from sibling 'set_provider_schedule'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage as a simple getter, but no explicit when-to-use or when-not-to-use. No mention of alternatives like list_providers or get_provider_pricing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_provider_stats (A)

Get provider statistics including served models, inference count, staking info, and earnings. Omit address to get stats for the local node.

Parameters (JSON Schema):
- address (optional): Hex-encoded provider address (with or without 0x prefix). If omitted, returns stats for the local node
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. It reveals that the tool is read-only and returns a set of statistics, but does not mention potential side effects, authentication needs, error handling, or data freshness. The description is adequate but not rich in behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff. The first sentence defines purpose and scope; the second provides usage insight. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lists four categories of data returned, which gives an agent sufficient context to decide on invocation. Without an output schema, this partial enumeration is helpful. However, it could be more complete by hinting at the structure or additional fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the only parameter (address). The description repeats the schema's note about omitting for local node, adding no new information. As per guidelines, when schema coverage is high, the baseline is 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'get' and resource 'provider statistics', enumerating the exact data categories: served models, inference count, staking info, and earnings. It clearly distinguishes from sibling tools like get_provider_pricing and get_provider_schedule, and even provides nuance about local vs remote nodes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions when to omit the address (for local node stats), but does not provide explicit guidance on when to use this tool versus alternatives (e.g., get_provider_pricing or get_usage_stats). The usage context is implied but not fully specified.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_skill_usage (B)

Get usage statistics for a registered skill on the Tenzro Network. Returns total invocations and last used timestamp.

Parameters (JSON Schema):
- skill_id (required): Skill ID to get usage stats for (e.g. 'openclaw-tenzro', 'solana-defi')
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden of behavioral disclosure. While it states the output (invocations, timestamp), it does not disclose whether the operation is read-only, requires authentication, has rate limits, or any other side effects. The description is minimal and insufficient for safe usage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: one sentence with two clauses that front-load the primary action and immediately describe the return values. No redundant words. Every part earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description is minimally adequate. It states the purpose and the expected output shape (two scalar values). However, it lacks context on prerequisites (the skill must be registered) and response format details, which could lead to ambiguity. It is comparable to a simple 'get_balance' tool, which likewise needs little further explanation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides full coverage (100%) with a clear description and example for the single parameter 'skill_id'. The description adds context about the return values but does not further clarify parameter usage beyond what the schema provides. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action ('Get'), the resource ('usage statistics'), and the scope ('for a registered skill on the Tenzro Network'). It also specifies the return values ('total invocations and last used timestamp'), distinguishing it from sibling tools like 'get_tool_usage' which likely operates on tools rather than skills.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention any prerequisites (e.g., the skill must be registered). It only states what it does, leaving the agent to infer appropriateness without explicit context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_spending_limits (A)

Get the current spending limits for a wallet. Returns the daily limit, per-transaction limit, and current daily usage.

Parameters (JSON Schema):
- wallet_id (required): Wallet ID (UUID) to query spending limits for
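The three returned fields compose into a natural pre-flight check. A hypothetical sketch (field names are assumed from the description, not a documented response schema):

```python
def max_spendable(daily_limit: int, per_tx_limit: int, daily_usage: int) -> int:
    # A single transaction is bounded by the per-transaction limit and
    # by whatever remains of today's daily allowance.
    return max(0, min(per_tx_limit, daily_limit - daily_usage))
```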
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool performs a read operation and returns specific data, but does not mention potential errors, authorization requirements, or side effects. For a simple getter, this is fairly transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the purpose and then detailing the return values. There is no unnecessary information, and it is appropriately sized for a simple getter tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the return values (daily limit, per-transaction limit, current daily usage) despite no output schema. It adequately covers the behavior for a simple query, though it lacks details on data types or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers the single parameter 'wallet_id' with a clear description. The tool description does not add extra meaning beyond the schema. With 100% schema coverage, the baseline is 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'spending limits for a wallet', and specifies the returned fields. It distinguishes itself from the sibling tool 'set_spending_limits' which does the opposite operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing to check spending limits or current usage, but it does not explicitly state when to use this tool vs. alternatives like 'set_spending_limits' or prerequisites. It provides clear context but lacks explicit when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_svm_cross_vm_program_info (A)

Return the canonical Tenzro Cross-VM SVM-native program ID and 4 instruction discriminators (bridge_to_evm, bridge_from_evm, register_token_pointer, transfer_cross_vm). Use this to construct SVM Instructions targeting the Tenzro Cross-VM native program from any SVM client.

Parameters (JSON Schema): none
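The description implies the usual SVM pattern of dispatching on leading discriminator bytes. A hedged sketch; the `Instruction` type, the zeroed discriminator, and the program address below are placeholders, and the real values come from this tool's response:

```python
from dataclasses import dataclass, field

@dataclass
class Instruction:
    program_id: str                      # base58 address from the tool response
    accounts: list[str] = field(default_factory=list)
    data: bytes = b""

def build_instruction(program_id: str, discriminator: bytes,
                      args: bytes, accounts: list[str]) -> Instruction:
    # SVM-native programs route calls by the leading discriminator
    # bytes, so instruction data is discriminator || serialized args.
    return Instruction(program_id=program_id, accounts=accounts,
                       data=discriminator + args)

# Placeholder 8-byte discriminator; in practice use the bridge_to_evm
# value returned by get_svm_cross_vm_program_info.
BRIDGE_TO_EVM = bytes(8)
ix = build_instruction("CrossVmProgramPlaceholder", BRIDGE_TO_EVM, b"\x2a", [])
```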

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden, and it discloses the return values (program ID and discriminators). Being a read-only info tool, this is sufficient; additional behavioral details like side effects are not needed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no waste: first states the output, second gives usage guidance. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters, no output schema, and a simple informational purpose, the description fully covers what an agent needs to know to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters and 100% coverage, so baseline 4 applies. The description adds value by explaining the output, but no parameter details are necessary.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns the canonical Tenzro Cross-VM program ID and four named instruction discriminators, which is specific and distinguishes from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to use this tool to construct SVM instructions targeting the Tenzro Cross-VM native program, providing clear usage context, though it doesn't mention when not to use it or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_swarm_status (A)

Get the current status of a Tenzro agent swarm including lifecycle status, member count, and per-member agent statuses, roles, and results.

Parameters (JSON Schema):
- swarm_id (required): Swarm ID to query
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations, so description carries full burden. Indicates read-only behavior ('Get') but does not explicitly state no side effects or permission requirements. Acceptable for a straightforward status query.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, front-loaded sentence with zero fluff. Every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description lists expected return fields (lifecycle, member count, per-member statuses, roles, results), providing sufficient context for a simple read tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description implies the parameter (swarm_id) identifies the swarm to query. It adds no extra semantics beyond what the schema provides, meeting the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb 'get' and resource 'status of a Tenzro agent swarm', and lists included fields (lifecycle status, member count, per-member statuses, roles, results). Distinguishes from sibling tools like create_swarm and terminate_swarm.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit context for when to use (checking swarm status) but no explicit when-not-to or alternatives. For a simple query tool this is adequate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_task (grade A)

Get details about a specific task on the Tenzro Task Marketplace. Returns task description, status, requester, assigned agent, quotes, and completion data.

Parameters (JSON Schema)
task_id (required): Task ID to retrieve
Behavior: 3/5

No annotations are provided, so the description carries full burden. It states what the tool returns but does not explicitly mention that it is read-only or disclose side effects, permissions, or error conditions. For a simple query tool this is minimally adequate.

Conciseness: 5/5

The description is two sentences: the first states the purpose, the second lists returned fields. It is concise and front-loaded with no redundant information.

Completeness: 4/5

Given the simplicity of the tool (one parameter, predictable behavior), the description is fairly complete. It enumerates returned data types. However, it does not mention potential error scenarios or confirm that the task must exist, which would improve completeness.

Parameters: 3/5

The input schema has one parameter with a clear description. The description does not add extra meaning beyond what the schema provides, but schema coverage is 100%, so the baseline of 3 applies.

Purpose: 5/5

The description clearly states it retrieves details for a specific task on the Tenzro Task Marketplace, listing key return fields. It distinguishes itself from sibling tools like list_tasks (which returns a list) and assign_task (which modifies task state).

Usage Guidelines: 4/5

The description provides clear context for when to use this tool: when you need detailed information about one task. It does not explicitly mention exclusions or alternatives, but the context given is sufficient for an agent to distinguish it from other task-related tools.

get_tee_attestation (grade A)

Generate a TEE attestation report from the node's hardware enclave. The attestation proves the code is running in a genuine TEE. Optionally specify the TEE type or auto-detect.

Parameters (JSON Schema)
tee_type (optional): TEE type: 'tdx', 'sev-snp', 'nitro', 'nvidia-gpu', or 'auto' (detect best available)
Behavior: 2/5

No annotations are provided, so the description carries full burden. It fails to disclose error behavior (e.g., if a TEE is not available or the type is unsupported), permissions required, or side effects. The mention of 'genuine TEE' is positive but insufficient.

Conciseness: 5/5

Concise sentences that front-load the action ('Generate a TEE attestation report') and then provide optional details. No redundant or extraneous information.

Completeness: 4/5

Given the tool's simplicity (single optional parameter, no output schema), the description is adequate to understand its purpose. It could mention potential failure modes or the report format, but the current text is sufficient for basic use.

Parameters: 3/5

Schema coverage is 100% with one optional parameter 'tee_type' fully described. The description adds 'Optionally specify the TEE type or auto-detect', which is consistent with the schema but does not add new meaning beyond the schema description itself.

Purpose: 5/5

The description clearly states it generates a TEE attestation report from the hardware enclave and proves genuine TEE execution. This distinguishes it from sibling tools like 'detect_tee' and 'verify_tee_attestation'.

Usage Guidelines: 3/5

The description mentions optional TEE type specification or auto-detect, but does not provide explicit guidance on when to use this tool versus alternatives (e.g., verify_tee_attestation for verification, detect_tee for detection). Usage is implied but not clearly contrasted.

get_token_balance (grade B)

Get the TNZO balance for an address across all VMs. Shows native balance (18 decimals), EVM wTNZO balance (18 decimals), SVM wTNZO balance (9 decimals), and DAML holding amount.

Parameters (JSON Schema)
token (optional): Token symbol (default: 'TNZO')
address (required): Address to query (hex)
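The mixed decimal places are the main trap here: the same amount is represented at 18 decimals natively and on the EVM side, but at only 9 decimals on the SVM side. A sketch of the conversion from raw integer balances to whole TNZO units (the VM keys are illustrative, not the tool's actual response fields):

```python
from decimal import Decimal

# Per-VM decimal places, as stated in the tool description.
# The dictionary keys are illustrative names, not documented response fields.
DECIMALS = {"native": 18, "evm_wtnzo": 18, "svm_wtnzo": 9}

def to_display(raw: int, vm: str) -> Decimal:
    """Convert a raw integer balance into whole TNZO units for the given VM."""
    return Decimal(raw) / Decimal(10) ** DECIMALS[vm]

# The same 1.5 TNZO expressed in each VM's raw units:
print(to_display(1_500_000_000_000_000_000, "native"))  # 1.5
print(to_display(1_500_000_000, "svm_wtnzo"))           # 1.5
```

Comparing raw integers across VMs without this normalization would be off by a factor of 10^9.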
Behavior: 2/5

No annotations are provided, so the description carries full burden. It only describes outputs, not behavioral traits like read-only safety, authorization needs, or rate limits. The verb 'Get' implies read-only but is not explicit.

Conciseness: 4/5

The description efficiently conveys the tool's purpose and outputs in two short sentences, though it could be slightly more structured. No wasted words.

Completeness: 3/5

No output schema is provided, so the description should clarify the return format. It lists what is shown but not the structure (e.g., whether it's a single number or an object with keys). Context is sufficient for a simple query but lacks error details.

Parameters: 3/5

Schema coverage is 100% with descriptions for both parameters. The description adds no extra meaning beyond what the schema provides; it focuses on outputs rather than parameters.

Purpose: 5/5

The description clearly states the action (Get) and resource (TNZO balance across all VMs), and lists specific outputs, distinguishing it from siblings like 'get_balance' or 'token_balance'.

Usage Guidelines: 3/5

No explicit guidance on when to use this tool versus other balance-related tools. The context is implied by the multi-VM focus, but no exclusions or alternatives are mentioned.

get_token_info (grade B)

Get information about a token by symbol, token ID, or EVM address. Returns the full token definition including cross-VM addresses.

Parameters (JSON Schema)
symbol (optional): Token symbol (e.g. 'TNZO')
token_id (optional): Token ID (hex, 32 bytes)
evm_address (optional): EVM contract address (hex)
Behavior: 3/5

The description indicates it returns the full token definition including cross-VM addresses, implying a read operation. However, with no annotations, it lacks details on side effects, latency, or prerequisites. Adequate but not comprehensive.

Conciseness: 5/5

Two sentences, no fluff. Front-loaded with purpose and outcome. Highly concise.

Completeness: 3/5

No output schema, and the description only vaguely mentions return content. For a lookup tool with many siblings, more detail on return fields would improve completeness. Adequate for a simple lookup but not fully complete.

Parameters: 3/5

Schema coverage is 100% with clear descriptions for each parameter. The description adds no extra meaning beyond what the schema provides, so baseline 3 is appropriate.

Purpose: 4/5

The description clearly states the tool gets token information by symbol, token ID, or EVM address, and returns the full definition including cross-VM addresses. It distinguishes itself from list-like tools but does not explicitly differentiate from similar tools like introspect_token.

Usage Guidelines: 2/5

No guidance on when to use this tool versus siblings (e.g., list_tokens, introspect_token). The context does not explain conditions for choosing this tool over alternatives.

get_tool_usage (grade B)

Get usage statistics for a registered tool on the Tenzro Network. Returns total invocations and last used timestamp.

Parameters (JSON Schema)
tool_id (required): Tool ID to get usage stats for (e.g. 'tenzro-solana-mcp', 'tenzro-ethereum-mcp')
Behavior: 3/5

With no annotations, the description must disclose behavioral traits. It states what the tool returns but does not mention authentication requirements, whether it is read-only, or any side effects. It is minimally adequate.

Conciseness: 5/5

The description is two concise sentences with no superfluous information. It is front-loaded with the action and resource.

Completeness: 4/5

For a simple tool with one parameter and no output schema, the description covers the purpose and return format. It could mention that the tool must be registered on the network, but this is implied.

Parameters: 3/5

Schema description coverage is 100%, and the schema already provides a good example for the only parameter. The description adds value by explaining the tool's output but does not enhance parameter semantics beyond the schema.

Purpose: 4/5

The description clearly states the action ('Get usage statistics') and the resource ('for a registered tool'), and specifies the return values. However, it does not explicitly differentiate from sibling tools like 'get_usage_stats', though the per-tool focus is implicit.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives such as 'get_usage_stats' or 'get_skill_usage'. The description lacks context about prerequisites or usage boundaries.

get_transaction (grade A)

Look up a transaction by its hash on the Tenzro ledger, returning type, sender, recipient, amount, status, and block height.

Parameters (JSON Schema)
tx_hash (required): Hex-encoded transaction hash (with or without 0x prefix)
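Since the schema accepts the hash with or without a 0x prefix, a caller can normalize before invoking the tool. A sketch assuming 32-byte transaction hashes (the helper name is made up; the length assumption is not stated by the schema):

```python
def normalize_tx_hash(tx_hash: str) -> str:
    """Accept a hash with or without a 0x prefix, as the schema allows,
    and return the bare lowercase hex string after validating it."""
    h = tx_hash[2:] if tx_hash.lower().startswith("0x") else tx_hash
    raw = bytes.fromhex(h)  # raises ValueError on non-hex input
    if len(raw) != 32:
        raise ValueError(f"expected a 32-byte hash, got {len(raw)} bytes")
    return raw.hex()

print(normalize_tx_hash("0x" + "ab" * 32))  # 64 lowercase hex chars, prefix stripped
```

Normalizing up front turns malformed input into a local ValueError instead of a round trip to the ledger.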
Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It only states what the tool returns and does not mention whether it is read-only, any rate limits, idempotency, or error handling. This leaves gaps in understanding key behavioral traits.

Conciseness: 5/5

The description is a single, front-loaded sentence that efficiently conveys the tool's purpose and output. Every word adds value, and there is no redundancy.

Completeness: 4/5

The description lists all expected return fields (type, sender, recipient, amount, status, block height) even without an output schema. It covers the essential context for using the tool. Minor omission of error or null handling prevents a perfect score.

Parameters: 3/5

Schema coverage is 100% (one parameter described in schema). The description adds no additional meaning beyond the schema's 'Hex-encoded transaction hash (with or without 0x prefix)'. Baseline 3 is appropriate as the schema already documents the parameter thoroughly.

Purpose: 5/5

The description clearly states the tool's action ('Look up a transaction by its hash'), the specific ledger ('Tenzro ledger'), and the returned fields (type, sender, recipient, amount, status, block height). This level of detail distinguishes it from sibling tools like 'get_block' or 'get_balance'.

Usage Guidelines: 3/5

The description implies usage context (having a transaction hash) but provides no explicit guidance on when to use this tool versus alternatives, nor does it mention any prerequisites or exclusions. Without a search_transaction sibling, the implicit context is acceptable but not explicit.

get_usage_stats (grade C)

Get usage statistics for an application. Returns total transactions, gas spent, active users, and wallet count.

Parameters (JSON Schema)
app_id (required): Application ID (UUID) to get usage stats for
Behavior: 2/5

No annotations are provided, so the description must disclose behavioral traits. It does not mention that this is a read-only operation, potential errors (e.g., invalid app_id), rate limits, or authentication requirements. The return fields are listed, but deeper behavior is missing.

Conciseness: 4/5

The description is brief and efficiently conveys purpose and return data. It is concise and easy to read, though slightly more structure could improve scannability.

Completeness: 3/5

Given one parameter, no output schema, and no annotations, the description covers the basic purpose and return fields. However, it lacks error handling info, output format, and behavioral details, which could be important for an agent to use correctly.

Parameters: 3/5

Schema coverage is 100% and the parameter description in the schema already explains app_id. The description adds no extra meaning beyond the schema, meeting the baseline for high coverage.

Purpose: 4/5

The description clearly states the verb 'Get' and the resource 'usage statistics for an application', and lists specific return fields (transactions, gas, users, wallet count). This differentiates it from other get_* tools, though it could explicitly mention scoping to a specific app.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives like get_provider_stats or get_skill_usage. The description does not mention prerequisites, exclusions, or when not to use it.

get_voting_power (grade A)

Get the governance voting power of an address. Returns the staked TNZO balance used as voting weight. Delegated power is included.

Parameters (JSON Schema)
address (required): Hex-encoded address to query voting power for
Behavior: 3/5

No annotations exist, so the description carries the full behavioral burden. It discloses the nature of the output (staked TNZO balance including delegated power) but lacks details on side effects (none), authentication requirements, or rate limits. Adequate but not comprehensive.

Conciseness: 5/5

Short sentences, no superfluous words. Information is front-loaded and each sentence adds value: the first defines purpose, the rest detail output content.

Completeness: 4/5

For a simple, single-parameter query tool with no output schema, the description adequately explains what is returned. Missing details on potential errors or chain context, but overall sufficient given the tool's simplicity.

Parameters: 3/5

Schema coverage is 100% for the single parameter, with a clear description in the schema ('Hex-encoded address'). The description adds output context but does not enhance parameter understanding further. The baseline score is appropriate.

Purpose: 5/5

The description clearly states the tool retrieves governance voting power for an address, specifying verb ('get'), resource ('voting power'), and additional context (staked TNZO, delegated power). This effectively distinguishes it from siblings like delegate_voting_power or vote_on_proposal.

Usage Guidelines: 3/5

The description implies usage context (querying voting power) but does not explicitly state when to use this tool versus alternatives or provide any exclusions. Sibling tools exist for delegation and voting, but no comparative guidance is given.

hash_keccak256 (grade A)

Compute the Keccak-256 hash of hex-encoded data. Returns the 32-byte hash in hex. Used for Ethereum-compatible hashing.

Parameters (JSON Schema)
data_hex (required): Hex-encoded data to hash
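'Ethereum-compatible' is doing real work in that description: Ethereum's Keccak-256 predates FIPS 202 and uses different padding than NIST SHA-3, so Python's hashlib.sha3_256 will not reproduce this tool's output. Both empty-input digests below are well-known published constants:

```python
import hashlib

# What hashlib computes: SHA3-256 per FIPS 202.
nist_sha3_empty = hashlib.sha3_256(b"").hexdigest()

# What Ethereum (and, presumably, this tool) computes: original Keccak-256.
KECCAK256_EMPTY = "c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470"

print(nist_sha3_empty == KECCAK256_EMPTY)  # False: the two paddings differ
```

To verify this tool's output locally you would need a Keccak implementation (e.g. pycryptodome's Crypto.Hash.keccak), not hashlib.sha3_256.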
Behavior: 3/5

With no annotations, the description carries full burden. It states the output format (32-byte hex), but does not mention that invalid hex input will cause an error or that the operation is deterministic and side-effect free.

Conciseness: 5/5

Each sentence is purposeful: operation and input, output format, and use case. No unnecessary words.

Completeness: 5/5

The tool is simple (one parameter, no output schema). The description covers what the tool does, input/output formats, and a key use case. No significant gaps.

Parameters: 3/5

Schema coverage is 100%, so the baseline is 3. The description restates the input format ('hex-encoded') and adds output context, but adds no new semantic detail beyond the schema.

Purpose: 5/5

The description clearly states the tool computes a Keccak-256 hash from hex-encoded data, with a specific verb ('Compute') and resource. It distinguishes from sibling hash_sha256 by naming the algorithm and Ethereum compatibility.

Usage Guidelines: 4/5

The description gives context for when to use ('Ethereum-compatible hashing'), but does not explicitly exclude alternatives like hash_sha256. However, the sibling name implies the alternative.

hash_sha256 (grade A)

Compute the SHA-256 hash of hex-encoded data. Returns the 32-byte hash in hex.

Parameters (JSON Schema)
data_hex (required): Hex-encoded data to hash
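The contract is simple enough to mirror locally with the standard library, which is also a cheap way to sanity-check the tool's output (the function name is illustrative):

```python
import hashlib

def sha256_of_hex(data_hex: str) -> str:
    """Hex-encoded data in, 32-byte digest out as hex: the tool's stated contract."""
    return hashlib.sha256(bytes.fromhex(data_hex)).hexdigest()

# Empty input yields the well-known empty-string SHA-256 digest.
print(sha256_of_hex(""))  # e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```

bytes.fromhex raises ValueError on non-hex input, matching the tool's hex-only input requirement.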
Behavior: 4/5

With no annotations provided, the description fully discloses the tool's behavior: it computes a hash from hex data and returns a hex string. As a pure function, there are no side effects or restrictions to disclose.

Conciseness: 5/5

The description is short and clear, immediately conveying the tool's purpose and output. Every word is essential; no redundancy.

Completeness: 5/5

For a simple, stateless hashing tool with one parameter and no output schema, the description fully specifies the input requirement and output format, leaving no ambiguity.

Parameters: 4/5

The schema already describes the parameter (hex-encoded data to hash). The description adds value by specifying the return format (32-byte hash in hex), which is not in the schema.

Purpose: 5/5

The description clearly states the action (Compute), the resource (SHA-256 hash), and the input/output format (hex-encoded data, 32-byte hex hash). It distinguishes from sibling hash_keccak256 by specifying SHA-256.

Usage Guidelines: 3/5

The description implies usage for SHA-256 hashing and requires hex input, but does not explicitly mention when to use this tool over alternatives like hash_keccak256 or provide any exclusion criteria.

import_keystore (grade A)

Import a wallet from an encrypted keystore JSON. Decrypts with the provided password and adds the wallet to the local node.

Parameters (JSON Schema)
password (required): Password to decrypt the keystore file
keystore_json (required): JSON-encoded keystore data
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It states the tool decrypts the keystore and adds the wallet to the local node, indicating mutation. However, it lacks details on failure modes (e.g., incorrect password), idempotency, or side effects, limiting transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with purpose and immediate behavioral context. No extraneous words; each sentence earns its place, making it highly concise and structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with two parameters and no output schema, the description covers the core purpose and behavior. However, it lacks completeness regarding success/failure behavior, persistence, or handling of duplicate wallets, leaving some gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The tool description does not add additional semantic value beyond 'decrypts with provided password' and 'JSON-encoded keystore data', so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool imports a wallet from an encrypted keystore JSON, with specific verb 'import' and resource 'wallet from encrypted keystore JSON'. This distinguishes it from similar tools like 'export_keystore' (export) and 'create_wallet' (create new), making purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context: the tool is used when you have an encrypted keystore JSON and password. It implicitly differentiates from alternatives by mentioning 'encrypted keystore' and 'adds wallet to local node', but does not explicitly state when not to use it or list alternatives, missing the top score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

introspect_token (A)

RFC 7662 OAuth 2.0 Token Introspection. Ask the AS whether a token is currently active and, if so, return its full claim set (RAR authorization_details, AAP aap_* claims, cnf, controller_did, etc.). Per RFC 7662 §2.2 a failed validation returns {active: false} with no other fields — the AS does not leak why the token is inactive.

Parameters (JSON Schema)
- token (required): JWT to introspect. The AS returns `{active: true, ...claims}` on success or `{active: false}` per RFC 7662 §2.2 if the token is unknown, expired, or revoked.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses that failed validations return only `{active: false}` without leaking reasons, and successful ones return full claims. This is good behavioral transparency, though it could mention idempotency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the standard reference and purpose, and each sentence adds value. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter tool with no output schema, the description covers behavior on success and failure. Could further clarify token type (OAuth2 access token), but is largely complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds no additional parameter-level meaning beyond what the schema already provides. It repeats the token purpose but does not introduce new semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs RFC 7662 OAuth 2.0 Token Introspection, checking token activeness and returning claims. It lists specific claims like RAR authorization_details and aap_* claims, distinguishing it from other token-related tools like exchange_token.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for token introspection but does not explicitly state when to use it over alternatives. It lacks guidance on when not to use it or mention of siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

join_as_participant (A)

Join the Tenzro Network as a MicroNode — a zero-install full participant. Auto-provisions a TDIP decentralized identity (DID) and MPC wallet. Grants access to all 10 network capabilities: inference, payments, agent collaboration, MCP tools, task execution, chain queries, smart contracts, TEE services, cross-chain bridge, and governance. Works from any entry point: Claude, ClawBot, MCP, A2A, SDK, API, CLI, or app. No hardware, no binary installation required.

Parameters (JSON Schema)
- origin (optional): Entry point: 'mcp' | 'claude' | 'clawbot' | 'a2a' | 'sdk' | 'api' | 'cli' | 'app'
- display_name (optional): Display name for this participant (human-readable)
- participant_type (optional): Participant type: 'human' | 'agent' | 'bot'
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must carry the full burden. It mentions auto-provisioning and capabilities but does not disclose potential side effects, idempotency, or permission requirements. Lacks deep behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two succinct sentences: first defines the main action, second lists capabilities. No wasted words, front-loaded effectively.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 3 parameters and no output schema, the description explains the tool's purpose and capabilities but omits expected return values or confirmation messages. Adequate but not complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, baseline 3. The description adds no additional meaning beyond what the input schema provides for the three parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it joins the Tenzro Network as a MicroNode, auto-provisions DID and wallet, and grants access to all 10 network capabilities. It distinguishes the tool's core action effectively.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when to use this tool versus alternatives. It lists entry points but fails to explain when joining is appropriate or what prerequisites exist.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_agent_jwks (A)

List all Tenzro agent public signing keys as an RFC 7517 JWK Set. Mirrors GET /.well-known/jwks.json. External RFC 9421 verifiers (Visa TAP, Mastercard, Stripe MPP, AP2, x402) use this to resolve keyid parameters.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Only states it lists keys; no disclosure of side effects, authorization needs, rate limits, or response size. Minimal behavioral context beyond purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences: first states core function, second adds context. No wasted words, front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter, no-output-schema tool, description covers purpose, standard compliance, and audience. Lacks details on potential limits or errors but sufficient for a simple list operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters, so schema covers everything. Baseline score of 4 applies since description adds no parameter info but none is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool lists all agent public signing keys as a JWK Set, mirrors a standard endpoint, and specifies its use by external verifiers. This differentiates it from sibling like get_agent_jwk.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage by external verifiers but does not explicitly state when to use this tool vs alternatives like get_agent_jwk, nor does it provide when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_agent_templates (A)

List agent templates from the Tenzro Network agent marketplace. Browse available AI agent configurations that can be deployed. Filter by type, tag, creator, or price.

Parameters (JSON Schema)
- tag (optional): Filter by tag (must have this tag)
- limit (optional): Maximum number of results (default 20, max 100)
- offset (optional): Offset for pagination
- creator (optional): Filter by creator address
- free_only (optional): Only show free templates
- template_type (optional): Filter by template type
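The limit/offset pair implies standard paging; a client-side loop might look like this (the `call_tool` function and the `templates` response key are assumptions, not documented by the server):

```python
def fetch_all_templates(call_tool, page_size: int = 100):
    """Page through list_agent_templates using limit/offset
    (max limit 100 per the schema). `call_tool` is a hypothetical
    MCP client helper; the 'templates' key is assumed."""
    results, offset = [], 0
    while True:
        page = call_tool("list_agent_templates", limit=page_size, offset=offset)
        batch = page.get("templates", [])
        results.extend(batch)
        if len(batch) < page_size:  # short page signals the end
            return results
        offset += page_size
```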
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description covers core functionality but lacks behavioral details (e.g., pagination, rate limits, data freshness). Adequate but minimal beyond parameter schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, front-loaded with action. Efficient but could be slightly clearer on filtering.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a listing tool with 6 optional parameters and no output schema, description adequately covers purpose and filters. Lacks explanation of pagination or return format, but sufficient for a standard list operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, so baseline is 3. However, description mentions 'price' as a filter, which is not in the schema (only free_only). This misleads about available parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool lists agent templates from the marketplace, with specific verb 'List' and resource 'agent templates'. Differentiates from siblings like search_agent_templates and get_agent_template.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies use for browsing templates, but does not provide explicit guidance on when to use this tool versus alternatives like search_agent_templates.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_audio_catalog (A)

Browse the curated ONNX ASR catalog: Moonshine v2 (tiny/base, MIT, on-device), Distil-Whisper (small.en/medium.en/large-v3, MIT), Whisper Large-v3-turbo (MIT, flagship), Parakeet-TDT-0.6B-v3 (CC-BY-4.0, 25 European langs), Canary-1B-Flash (CC-BY-4.0, multilingual).

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; the description lists model examples but lacks any behavioral disclosure (e.g., read-only nature, return format, no side effects). Minimal transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence with front-loaded purpose, but a dense list of models could be slightly restructured for readability. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should clarify the return format (for example, a list of model metadata); it only shows examples. Adequate but not fully complete for a no-schema tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist (schema coverage 100%), so the baseline is 4. The description adds value by enumerating model specifics, enriching the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool browses a curated ONNX ASR catalog, listing specific models with licenses. It distinguishes from sibling list_audio_models by narrowing to ONNX ASR.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for ONNX ASR models and contrasts with the broader list_audio_models sibling, but offers no explicit when-not-to-use or alternative naming.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_audio_models (A)

List ASR (speech-to-text) models currently loaded on this node (Whisper, Distil-Whisper, Moonshine, Parakeet, Canary). Use list_audio_catalog to browse the curated catalog.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It specifies 'currently loaded on this node' indicating scope and no destructive effects. Does not detail rate limits or auth, but for a simple list tool this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no filler; front-loaded with purpose and examples, then sibling reference.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description enumerates model families and node scope, giving enough context for a simple list tool. No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters in schema, so baseline is 4. Description adds value by listing example model types (Whisper, etc.), providing context beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists ASR models currently loaded on the node, with a list of specific model families, and distinguishes from the sibling tool 'list_audio_catalog'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says to use list_audio_catalog for browsing the curated catalog, providing clear guidance on when to use this tool versus the alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_bridge_adapters (A)

List all registered bridge adapters (LayerZero, Chainlink CCIP, deBridge, Canton)

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description correctly conveys a non-destructive read operation. It adds value by specifying the scope ('registered adapters'), but omits details like authentication needs or return format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: one sentence that immediately communicates purpose and scope. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description is sufficient for a simple list tool. It could mention output format or pagination, but the current text is adequate for an agent to understand the tool's function.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters in the schema (100% coverage). The description adds no param info, but baseline is 4 for zero-param tools. It correctly implies no inputs needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action ('List all registered bridge adapters') and provides concrete examples (LayerZero, Chainlink CCIP, deBridge, Canton). It distinguishes from sibling tools as the only listing tool for adapters.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. While no other sibling tool lists adapters, the description does not explain context or prerequisites for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_canton_domains (A)

List Canton synchronizer domains configured on this node. Returns domain IDs, connection status, and participant info for enterprise DAML integration.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. While the verb 'list' implies read-only operation, it does not explicitly confirm non-destructive behavior, authorization needs, or other traits. The description adds the return fields but lacks explicit behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that is front-loaded with the verb 'List', directly stating purpose and return content. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema and no parameters. The description adequately explains the return fields (domain IDs, connection status, participant info) and the context (enterprise DAML integration), which is sufficient for a simple list operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, so schema description coverage is trivially 100%. Baseline for 0 params is 4. The description does not add information about parameters (none exist), but it does list return fields, which is valuable though not parameter-related.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool lists Canton synchronizer domains configured on this node, specifying the return fields (domain IDs, connection status, participant info) and context (enterprise DAML integration). This distinguishes it from other list_* sibling tools through the specific resource and scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., list_daml_contracts). The description does not mention prerequisites, exclusions, or provide any usage context beyond the basic purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_daml_contracts (A)

List active DAML contracts on a Canton domain. Filter by template ID. Returns contract IDs, parties, and payload data for enterprise workflow automation.

Parameters (JSON Schema)
- limit (optional): Maximum number of contracts to return (default 50)
- domain_id (required): Canton domain ID to query contracts from
- template_filter (optional): Optional DAML template filter (e.g. 'MyModule:MyTemplate')
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full responsibility for behavioral disclosure. It states that only 'active' contracts are listed and describes the return fields, but does not mention pagination behavior, default sorting, rate limits, or error handling. Adds some context but leaves gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences, front-loaded with the main action. Every word serves a purpose; no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 parameters and no output schema, the description adequately covers the purpose, filtering, and return content. It is complete for a list endpoint, though it could mention default limit behavior (already in schema) and that the result is a list.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description reinforces the template filter usage and adds value by describing the return fields (contract IDs, parties, payload data), which are not in the schema. This extra context raises the score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('list'), resource ('active DAML contracts'), scope ('Canton domain'), optional filtering ('Filter by template ID'), and return content ('contract IDs, parties, and payload data'). It effectively distinguishes from sibling tools like 'list_canton_domains' or 'list_tasks'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly mentions the use of a domain ID (required) and an optional template filter. It provides clear context for when to use this tool, though it does not explicitly state when not to use it or mention alternative tools for searching or querying contracts beyond listing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_detection_catalog (A)

Browse the curated ONNX detection catalog: RF-DETR (nano/small/medium/base/large/2xl) — first real-time detector >60 AP on COCO (ICLR 2026); D-FINE (n/s/m/l/x). All Apache-2.0.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the catalog is curated and lists model names and licenses, but does not disclose if the tool returns a list, requires authentication, or has any side effects. Minimal behavioral context beyond purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, well-structured sentence that immediately states the action and scope. No wasted words. Front-loaded with the key verb 'browse'.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description should explain what the tool returns (e.g., list of models with properties). It only describes the catalog content but not the return format. Incomplete for a list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters and schema coverage is 100%, so the empty schema already documents all inputs. The description adds no param info, but for 0 parameters the baseline is 4; no additional meaning beyond schema is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool browses a curated ONNX detection catalog, listing specific model families (RF-DETR, D-FINE) and license info. It is a specific verb+resource and distinguishes from sibling tools like list_detection_models.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for browsing the ONNX detection catalog but provides no explicit guidance on when to use this tool versus alternatives like list_models or list_detection_models. No when-not-to-use guidance or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_detection_models (A)

List object detection models currently loaded on this node (RF-DETR, D-FINE). Use list_detection_catalog to browse the curated catalog.

Parameters (JSON Schema)
No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It implies a read-only listing but does not explicitly state safety or side effects. The description is minimal but sufficient for a simple list operation; however, it lacks explicit disclosure of behavioral traits like permissions or guarantees.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, no wasted words. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no parameters, no output schema, no annotations), the description is largely complete. It states the tool's purpose and differentiates from a sibling. It could optionally describe the return format, but overall it is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has no parameters (100% coverage), so baseline is 3. The description adds value by mentioning specific model types (RF-DETR, D-FINE) that may be listed, providing context beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'List object detection models currently loaded on this node', specifying the verb (List), resource (object detection models), and scope (currently loaded). It also differentiates from sibling tool list_detection_catalog, which is for browsing the curated catalog.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says when to use this tool (to list currently loaded models) and when to use the alternative (list_detection_catalog for the curated catalog). This provides clear guidance and explicit alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_forecast_catalog (A)

Browse the curated ONNX forecast (timeseries) catalog. Returns models like Chronos-Bolt, Chronos-2, TimesFM 2.5, Granite-TTM-r2 — each with HF repo, context length, max horizon, and quantile support.

Parameters (JSON Schema)
No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description describes the return content (models and attributes) but provides no behavioral details (e.g., read-only nature, no parameters, no pagination or caching info). With no annotations, the description carries the full burden and is adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, front-loaded sentence that efficiently conveys purpose, examples, and returned attributes. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Simple tool with no parameters, no output schema, and no annotations. The description covers the catalog nature and return details adequately. Slight gap: could mention it's read-only, but implied.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist (empty input schema), so description has no need to explain parameters. Baseline for zero parameters is 4, and the description correctly implies no inputs are needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool browses a curated ONNX forecast catalog and lists specific model examples (Chronos-Bolt, etc.) with key attributes, distinguishing it from sibling list tools like list_forecast_models by specifying it is curated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs alternatives such as list_forecast_models. The description implies browsing the curated catalog but does not say when not to use it or provide comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_forecast_models (A)

List forecast (timeseries) models currently loaded on this node. Use list_forecast_catalog to browse available models from the curated catalog.

Parameters (JSON Schema)
No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries the full burden. It discloses that only models currently loaded on this node are listed, which is a key behavioral trait. However, it does not mention potential side effects, auth needs, or response format, which would be valuable for a listing tool with no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at two sentences with no wasted words. Front-loaded with the core purpose, then adds the alternative guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description covers scope and provides an alternative, it lacks details about return values. With no output schema, the agent is left guessing what information is included (e.g., model IDs, types). For a simple list tool, this is a notable gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Tool has zero parameters; schema coverage is 100% by default. Per scoring guidelines, 0 params yields a baseline of 4. No additional parameter info is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool lists forecast models currently loaded on the node, with a specific verb and resource scope. It also distinguishes itself from sibling tool list_forecast_catalog by clarifying that the sibling is for browsing the curated catalog.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool vs. the alternative: 'Use list_forecast_catalog to browse available models from the curated catalog.' This provides clear context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_model_endpoints (A)

List all model service endpoints with their API and MCP URLs, model details, and status

Parameters (JSON Schema)
No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description only states what is listed, not how it behaves (e.g., authentication, rate limits, ordering). It does not add behavioral context beyond the basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, 14 words, front-loads the action. Every word is necessary. No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema; description covers the output fields adequately. For a simple parameterless list tool, it is sufficient. Could mention typical list properties like ordering or pagination, but not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has no parameters (100% coverage). Description adds value by clarifying the output composition: API/MCP URLs, model details, and status. This compensates for the lack of parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description specifies verb 'List', resource 'model service endpoints', and details what is included (API and MCP URLs, model details, status). It clearly distinguishes from sibling tools like 'list_models' which likely list model metadata only.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as 'list_models' or other listing tools. The description does not mention any context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_models (B)

List AI models available on the Tenzro network. Shows all models with availability: 'local' (served on this node, free), 'network' (available from remote providers, costs TNZO), or 'downloadable' (in catalog but not yet available). Filter by category or search by name

Parameters (JSON Schema)
Name | Required | Description | Default
name | No | Filter by name substring (optional) | —
category | No | Filter by model category/modality: 'text', 'image', 'audio', 'video', 'text_image', 'text_audio', 'multimodal' (optional) | —
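The two optional filters above map directly onto the arguments of a generic MCP `tools/call` request. A minimal sketch of how a client would assemble such a call — the JSON-RPC envelope is the standard MCP shape, and the filter values here are hypothetical:

```python
import json

def build_tools_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Assemble a JSON-RPC 2.0 'tools/call' request as MCP clients send it."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Hypothetical query: image models whose name contains "flux".
request = build_tools_call("list_models", {"name": "flux", "category": "image"})
print(json.dumps(request, indent=2))
```

Because both parameters are optional, an empty `arguments` object is also valid and returns the unfiltered model list.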
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes that the tool lists models with availability types (local, network, downloadable) and filtering. No annotations were provided, so description carries the burden; it is reasonably transparent about scope but could be more specific about response format or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with front-loaded purpose. No wasted words; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately explains what the tool does and how to filter, but lacks details on the output structure (e.g., fields returned, pagination). Without an output schema, this is a notable omission.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters have descriptions in the schema (100% coverage). The description adds no new semantic information beyond the schema; it simply paraphrases the filters. Thus the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it lists AI models on the Tenzro network with availability statuses. It does not explicitly differentiate from sibling list tools like list_audio_models or list_vision_models, though the category filter implies it covers all modalities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides usage instructions (filtering) but no guidance on when to use this tool versus more specific list tools (e.g., list_audio_models). No when-not-to-use guidance or alternative context is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_nft_collections (B)

List all NFT collections registered on the Tenzro ledger. Optionally filter by creator address or NFT standard (erc721/erc1155).

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Maximum number of results (default 50, max 100) | —
creator | No | Filter by creator address (hex, optional) | —
standard | No | Filter by standard: 'erc721' or 'erc1155' (optional) | —
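The limit semantics stated in the schema (default 50, max 100) can be made explicit with a small client-side helper. This is a sketch of the documented bounds only, not the server's actual clamping behavior, which is undocumented:

```python
def effective_limit(limit=None, default=50, maximum=100):
    """Resolve the 'limit' argument per the documented default (50) and cap (100)."""
    if limit is None:
        return default
    return max(1, min(limit, maximum))

# Hypothetical arguments for a list_nft_collections call: ERC-721 only,
# with an over-large requested limit capped to the documented maximum.
args = {"standard": "erc721", "limit": effective_limit(250)}
print(args["limit"])  # 100
```

Normalizing the limit before the call avoids relying on unspecified server behavior for out-of-range values.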
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions optional filters but omits behavior like pagination, the default limit (50), the max limit (100), or authorization requirements. Slight inconsistency: it says 'list all' but the limit param restricts results.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single clear sentence, front-loaded with action and resource. Efficient, but it omits mention of the limit parameter, slightly reducing completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema or annotations. Description covers purpose and optional filters but lacks pagination, default behavior, ordering, and any usage context. Adequate but incomplete for a listing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description restates two filters (creator, standard) but adds no extra meaning beyond schema. Does not explain limit parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the action ('List') and resource ('NFT collections'), specifies optional filters by creator and standard, and differentiates from siblings like create_nft_collection and get_nft_info. No tautology.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance. While siblings imply context (e.g., get_nft_info for single collection), the description does not directly address alternatives or conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_payment_protocols (A)

List the payment protocols supported by this Tenzro node, including MPP (session-based streaming), x402 (stateless one-shot), and native TNZO transfers

Parameters (JSON Schema)
No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral traits. It implies a read operation but does not disclose authentication needs, potential errors, or output format. Minimal transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with action. No wasted words. Efficient structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and a simple purpose, the description is adequately complete. It could mention the output format, but that is not critical. Sufficient for a list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters with 100% coverage. Description adds no parameter info, but baseline for zero params is 4. No additional semantics needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool lists payment protocols supported by the node, naming specific protocols (MPP, x402, native TNZO). It is a specific verb+resource and distinguishes from other list_* siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like list_x402_schemes or other list tools. No context on when to avoid or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_proposals (A)

List governance proposals on the Tenzro Network. Filter by status (active/passed/rejected/pending). Returns proposal IDs, titles, vote tallies, and deadlines.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Maximum number of results (default 20) | —
offset | No | Pagination offset | —
status | No | Filter by status: 'active', 'passed', 'rejected', 'pending'. If omitted, returns all. | —
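The limit/offset pair implies classic offset pagination. A sketch of how a client might page through all active proposals — the `call_tool(name, arguments)` transport function is assumed, as is the result being a plain list, since the server documents no output schema:

```python
def fetch_all_proposals(call_tool, status="active", page_size=20):
    """Page through list_proposals via limit/offset until a short page arrives."""
    proposals, offset = [], 0
    while True:
        page = call_tool("list_proposals",
                         {"status": status, "limit": page_size, "offset": offset})
        proposals.extend(page)
        if len(page) < page_size:  # a short page marks the end of the list
            return proposals
        offset += page_size

# Fake transport serving 45 stub proposals, just to exercise the loop.
stub = [{"id": i} for i in range(45)]
def fake_call_tool(name, args):
    return stub[args["offset"]:args["offset"] + args["limit"]]

print(len(fetch_all_proposals(fake_call_tool)))  # 45
```

The stop condition (short page means last page) is itself an assumption; a real client should confirm it against the server's actual pagination behavior.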
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavior. It mentions return fields and filtering, but lacks details on pagination behavior, default sort order, authentication requirements, or rate limits. The behavioral disclosure is adequate but incomplete.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise—two sentences with no filler. It front-loads the core action and includes essential details. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool and the absence of an output schema, the description covers the key aspects: what is listed, optional filter, and return content. It is nearly complete, though adding a note about pagination behavior would strengthen it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters. The description adds limited value by repeating the status filter and mentioning output fields, but does not provide additional syntax or format details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List governance proposals') and the resource ('on the Tenzro Network'). It specifies the optional filter by status and lists the return fields. This distinguishes it from sibling tools like create_proposal or vote_on_proposal.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by describing what the tool does, but it does not explicitly state when to use this over alternatives or when not to use it. There is no mention of prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_providers (A)

List all providers discovered on the Tenzro Network. Providers broadcast announcements every 60s on the tenzro/providers gossipsub topic. Returns both the local node (if serving) and all remotely discovered providers. Optionally filter by provider_type: 'llm', 'tee', or 'general'.

Parameters (JSON Schema)
Name | Required | Description | Default
provider_type | No | Optional filter by provider type: 'llm', 'tee', or 'general'. If omitted, all providers are returned. | —
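Since provider_type accepts only three literal values, a client can validate before calling and omit the key entirely to get all providers. A sketch using the values from the schema above (the argument-building convention is hypothetical):

```python
VALID_PROVIDER_TYPES = {"llm", "tee", "general"}

def provider_filter(provider_type=None):
    """Build list_providers arguments; omit the key to request all providers."""
    if provider_type is None:
        return {}
    if provider_type not in VALID_PROVIDER_TYPES:
        raise ValueError(
            f"provider_type must be one of {sorted(VALID_PROVIDER_TYPES)}")
    return {"provider_type": provider_type}

print(provider_filter("tee"))  # {'provider_type': 'tee'}
print(provider_filter())       # {}
```

Validating locally keeps an agent from burning a round trip on a value the server would reject or silently ignore.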
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses the broadcast frequency (every 60s) and that it returns both local and remote providers. It does not describe side effects or rate limits, but for a list operation the provided behavioral context is reasonable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loaded with purpose, followed by technical detail and the filter option. Every sentence contributes information with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with one optional parameter and no output schema, the description covers the network mechanism, return content, and filter options. It omits precise output format, but the description suffices for understanding the tool's functionality.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already describes the parameter. The description adds the specific allowable values ('llm', 'tee', 'general') and confirms optionality, which matches the schema. This adds small value beyond the schema, earning a baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all providers discovered on the Tenzro Network, uses specific verbs ('List'), and includes details about the gossipsub topic and return values. It distinguishes itself from siblings like 'list_tee_providers' by covering all provider types and mentioning an optional filter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions an optional filter by provider_type, giving context for when to use it vs. alternatives. However, it does not explicitly compare to siblings like 'list_tee_providers' or list when the tool is not appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_segmentation_catalog (B)

Browse the curated ONNX segmentation catalog: SAM 3, SAM 2 (base/large), EdgeSAM, MobileSAM. Wave 1 ships scaffolding — sessions load via tenzro download/serve once the platform downloader is wired.

Parameters (JSON Schema)
No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full burden. It mentions 'Wave 1 ships scaffolding' hinting at incomplete functionality, but does not disclose behavioral traits like side effects, authentication needs, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: the first is concise and informative; the second ('Wave 1 ships scaffolding...') is jargon-heavy and may confuse the agent, reducing overall clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless list tool, the description gives a reasonable overview of purpose and contents. However, it lacks details on output format, current usability (given the scaffolding note), and does not address potential limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters and 100% schema coverage, the baseline is 3. The description adds value by explicitly listing the models in the catalog (SAM 3, etc.), providing semantic content beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as a catalog browser for ONNX segmentation models, listing specific models (SAM 3, SAM 2, EdgeSAM, MobileSAM). However, it does not explicitly differentiate from sibling tools like list_segmentation_models, which might have a different scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus other list_* catalog tools or list_segmentation_models. The note about Wave 1 scaffolding and downloader wiring is ambiguous and does not clarify appropriate usage contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_segmentation_models (Grade A)

List segmentation models currently loaded on this node (SAM 3, SAM 2, EdgeSAM, MobileSAM). Use list_segmentation_catalog to browse the curated catalog.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description must disclose behavior. It clarifies that the tool shows models 'currently loaded on this node,' implying a dynamic list based on loaded models. This is a useful behavioral trait beyond a simple list, though it could be improved by mentioning potential side effects or limitations.

Conciseness: 5/5

Two sentences, no wasted words. The main action is stated first, followed by a helpful direction to an alternative tool. Highly efficient.

Completeness: 3/5

No output schema is provided, and the description does not detail what the output contains (e.g., names, statuses, or metadata). While the tool is simple, the description could be more complete by briefly mentioning the output format or attributes.

Parameters: 5/5

The input schema has zero parameters (schema coverage 100%). The description adds value by specifying the scope (currently loaded models) and providing concrete examples, making the tool's action clear without needing parameter details.

Purpose: 5/5

The description clearly identifies the tool's purpose: listing segmentation models loaded on the current node. It lists specific example models (SAM 3, SAM 2, EdgeSAM, MobileSAM), distinguishing it from the sibling tool list_segmentation_catalog.

Usage Guidelines: 5/5

The description explicitly advises when to use which tool: 'Use list_segmentation_catalog to browse the curated catalog.' This provides clear usage guidance and differentiation.

list_tasks (Grade A)

List tasks from the Tenzro Network task marketplace. Filter by type, status, poster address, or maximum price. Defaults to showing open tasks. Use this to discover tasks agents can fulfill.

Parameters (JSON Schema)

Name           | Required | Description
limit          | No       | Maximum number of results (default 20, max 100)
offset         | No       | Offset for pagination
poster         | No       | Filter by poster address (optional)
status         | No       | Filter by status: open, assigned, in_progress, completed, cancelled, expired, disputed (optional, default: open)
task_type      | No       | Filter by task type (optional)
max_price_tnzo | No       | Maximum price filter in TNZO (only show tasks at or below this price)
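
The documented schema can be sketched as a small argument-builder. This is a hypothetical illustration, not code from the Tenzro server: the function name `build_list_tasks_args` is invented here, and only the JSON arguments payload is shown, assuming the defaults and limits stated in the parameter table.

```python
# Hypothetical sketch: assembling the JSON arguments for a list_tasks call
# from the documented parameter table. Not the server's actual client code.

ALLOWED_STATUSES = {
    "open", "assigned", "in_progress", "completed",
    "cancelled", "expired", "disputed",
}

def build_list_tasks_args(status="open", task_type=None, poster=None,
                          max_price_tnzo=None, limit=20, offset=0):
    """Assemble a list_tasks arguments dict, omitting unset optional filters."""
    if status not in ALLOWED_STATUSES:
        raise ValueError(f"unknown status: {status!r}")
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    args = {"status": status, "limit": limit, "offset": offset}
    # Optional filters are included only when the caller sets them.
    if task_type is not None:
        args["task_type"] = task_type
    if poster is not None:
        args["poster"] = poster
    if max_price_tnzo is not None:
        args["max_price_tnzo"] = max_price_tnzo
    return args
```

For example, `build_list_tasks_args(max_price_tnzo=5.0)` yields the schema defaults (status "open", limit 20, offset 0) plus the price cap, mirroring the tool's stated default of showing open tasks.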
Behavior: 3/5

No annotations provided; the description implies a read-only operation but does not explicitly state non-destructive behavior or other traits such as rate limits. Adequate but could be improved.

Conciseness: 4/5

Three sentences, front-loaded with purpose and filters, no wasted words. Efficient but could be slightly more compact.

Completeness: 4/5

Covers purpose, default behavior, and use case. Lacks details on pagination or output format, but the schema fills the gaps. Good for a list tool.

Parameters: 3/5

Schema coverage is 100% with each parameter described; the description adds minimal value beyond the schema (e.g., the default status). Meets the baseline but does not compensate further.

Purpose: 5/5

The description clearly states 'List tasks from the Tenzro Network task marketplace' with a specific verb and resource, and differentiates from other list tools by specifying the domain and filters.

Usage Guidelines: 4/5

Explicitly states 'Use this to discover tasks agents can fulfill', providing clear usage context, but does not mention alternative tools or when not to use this one.

list_tee_providers (Grade A)

List all registered TEE providers on the network, including their TEE type, attestation status, and supported capabilities.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Without annotations, the description carries the burden. It discloses the return contents but lacks details on ordering, pagination, or limits. The description does not contradict any annotations, since none exist.

Conciseness: 5/5

The description is a single concise sentence that conveys essential information without unnecessary words. It is front-loaded and efficient.

Completeness: 4/5

For a simple listing tool with no parameters and no output schema, the description covers the key return elements. However, it does not specify whether results are paginated or whether there are any limitations.

Parameters: 4/5

The tool has zero parameters, and the input schema is empty with 100% coverage. The description adds no parameter information, which is acceptable given that no parameters exist. The baseline is 4 for no parameters.

Purpose: 5/5

The description clearly states that it lists all registered TEE providers, specifying the included information (TEE type, attestation status, supported capabilities). It distinguishes itself from the sibling 'list_providers' by focusing specifically on TEE providers.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives like 'list_providers' or other listing tools. The description does not mention prerequisites or context for use.

list_text_embedding_catalog (Grade A)

Browse the curated ONNX text-embedding catalog: Qwen3-Embedding (0.6B/4B/8B), EmbeddingGemma-300M, BGE-M3, Snowflake Arctic-Embed. Each entry carries embedding_dim, supported Matryoshka truncations, and license tier.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

With no annotations, the description carries the full burden. 'Browse' indicates a read-only operation. The description omits side effects, auth, and rate limits, but for a simple listing tool the behavioral context is clear.

Conciseness: 5/5

The description is extremely concise, listing the key models and fields in a single front-loaded sentence with no unnecessary words.

Completeness: 4/5

Given no output schema, the description sufficiently explains what the tool returns (a list of models with attributes). It is complete for a simple catalog listing tool, though it could explicitly mention that it returns a list.

Parameters: 4/5

There are zero parameters, and the schema coverage is 100%. The description adds no parameter info, but the baseline for zero parameters is 4 per the guidelines.

Purpose: 5/5

The description clearly states the tool lists a curated ONNX text-embedding catalog, specifying exact models and attributes (embedding_dim, truncations, license). This distinguishes it from sibling tools like list_text_embedding_models, which may be broader.

Usage Guidelines: 3/5

The description implies it is for browsing but does not explicitly say when to use this tool over alternatives like list_text_embedding_models. No when-not-to-use guidance or conditionals are provided.

list_text_embedding_models (Grade A)

List text-embedding models currently loaded on this node (Qwen3-Embedding, EmbeddingGemma, BGE-M3, etc.). Use list_text_embedding_catalog to browse the curated catalog.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

With no annotations, the description carries the full burden. It adds context by specifying 'currently loaded on this node' and gives model examples. This provides behavioral insight beyond a generic 'list models' description, though it could further clarify dynamic behavior or refresh semantics.

Conciseness: 5/5

The description consists of two sentences, both essential: one defines the tool, the other provides an alternative. No wasted words, front-loaded with the key action. Exemplary conciseness.

Completeness: 5/5

Given the tool has no parameters and no output schema, the description fully covers what an agent needs: what it does, what it lists, and how it relates to a sibling. Examples and the alternative tool reference make it self-contained.

Parameters: 4/5

There are zero parameters, so the baseline is 4. The description adds no parameter information because none is needed. It appropriately summarizes the tool's action without needing to describe parameters.

Purpose: 5/5

The description clearly states it lists text-embedding models currently loaded on the node, with specific examples (Qwen3-Embedding, EmbeddingGemma, BGE-M3). It explicitly distinguishes itself from the sibling `list_text_embedding_catalog` by mentioning the alternative for browsing the curated catalog.

Usage Guidelines: 5/5

The description not only tells what the tool does but also advises when to use an alternative tool: 'Use list_text_embedding_catalog to browse the curated catalog.' This provides clear guidance on when not to use this tool.

list_tokens (Grade C)

List all registered tokens in the unified token registry. Optionally filter by VM type or creator address.

Parameters (JSON Schema)

Name    | Required | Description
limit   | No       | Maximum number of tokens to return (default: 50, max: 100)
vm_type | No       | Filter by VM type: 'evm', 'svm', 'daml', 'native'
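
The evaluation notes that the tool's description advertises a creator-address filter the schema does not define. A strict client-side check makes that mismatch concrete. This is a hypothetical sketch: `validate_list_tokens_args` is invented here, and it assumes the schema accepts only the two documented parameters.

```python
# Hypothetical sketch: client-side validation against the documented
# list_tokens schema (limit, vm_type only). A 'creator' argument, which the
# tool description implies is supported, would be rejected.

ALLOWED_PARAMS = {"limit", "vm_type"}
ALLOWED_VM_TYPES = {"evm", "svm", "daml", "native"}

def validate_list_tokens_args(args):
    """Return a list of problems with a proposed list_tokens arguments dict."""
    problems = [f"unknown parameter: {k}" for k in args if k not in ALLOWED_PARAMS]
    if "limit" in args and not 1 <= args["limit"] <= 100:
        problems.append("limit must be between 1 and 100 (default 50)")
    if "vm_type" in args and args["vm_type"] not in ALLOWED_VM_TYPES:
        problems.append(f"vm_type must be one of {sorted(ALLOWED_VM_TYPES)}")
    return problems
```

Under these assumptions, `{"limit": 50, "vm_type": "evm"}` validates cleanly, while `{"creator": "0xabc"}` is flagged as an unknown parameter, which is exactly the description/schema gap the Parameters score penalizes.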
Behavior: 2/5

No annotations are provided, so the description must disclose behavioral traits. It does not mention that the operation is read-only, whether authentication is required, or how pagination works (though the 'limit' parameter exists in the schema). The description is too brief to inform an agent about side effects or constraints.

Conciseness: 3/5

The description is a single sentence, concise and front-loaded with the core action. However, the inaccuracy about the 'creator address' filter makes it less reliable. It could be improved by removing the erroneous filter mention.

Completeness: 2/5

Given the tool's simplicity (two parameters, no output schema, no annotations), the description is incomplete. It does not explain the default limit (50) or the maximum (100) mentioned in the schema, nor does it hint at the response structure. An agent would lack critical context for proper invocation.

Parameters: 1/5

The description mentions a 'creator address' filter that does not exist in the input schema, causing confusion. The schema covers 100% of the actual parameters (limit, vm_type) with descriptions, so the description adds no meaningful semantic value and instead misleads. The baseline for high coverage is 3, but the inaccuracy drops it to 1.

Purpose: 4/5

The description clearly states the action ('List') and the resource ('all registered tokens in the unified token registry'), which is specific and distinguishes it from sibling tools that list other entities. However, it incorrectly mentions a 'creator address' filter that is not present in the input schema, reducing precision.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives (e.g., list_nft_collections, list_models). It only mentions optional filters, which is minimal context. No when-not-to-use or exclusion criteria are given.

list_user_wallets (Grade A)

List all user wallets belonging to an application. Returns wallet addresses, labels, and current balances.

Parameters (JSON Schema)

Name   | Required | Description
app_id | Yes      | Application ID (UUID) to list wallets for
Behavior: 2/5

No annotations are provided, so the description carries the full burden. It indicates a read-like operation ('list') but does not disclose any behavioral traits such as rate limits, authentication requirements, or side effects. The returned fields are mentioned, but not the data format or pagination, which are important for a list operation.

Conciseness: 5/5

The description is brief, efficiently conveying the tool's purpose and output. No redundant information, making it concise and front-loaded.

Completeness: 4/5

Given the tool's simplicity (one parameter, no output schema), the description is sufficiently complete. It explains the action, scope, and return values. While it could mention pagination or ordering, it meets the needs of a basic list operation.

Parameters: 3/5

Schema coverage is 100% with one required parameter (app_id). The description adds context by stating 'belonging to an application', which aligns with the parameter's purpose. However, it does not add meaning beyond what the schema already provides, meeting the baseline for high coverage.

Purpose: 5/5

The description clearly states the tool lists user wallets for an application and specifies the returned fields (addresses, labels, balances). It is distinguished from sibling list tools like list_tokens or list_nft_collections by its focus on user wallets specifically.

Usage Guidelines: 3/5

The description implies a usage context ('belonging to an application') but does not provide explicit when-to-use or when-not-to-use guidance. No alternatives or exclusions are mentioned, leaving the agent to infer from sibling tool names.

list_video_catalog (Grade A)

Browse the curated ONNX video encoder catalog. Wave 1 returns an empty list — no permissive, ONNX-shippable, encoder-only video model exists yet in the 2026 OSS landscape. Re-evaluated quarterly. The runtime scaffolding is in place so adding entries later is mechanical.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

With no annotations, the description fully carries the burden of behavioral disclosure. It honestly states that the tool currently returns an empty list, explains why, and notes that it will be re-evaluated quarterly. This is transparent about its current state and future plans.

Conciseness: 5/5

The description is concise: the opening sentence states the purpose, the middle sentences provide important behavioral context, and the last discusses future maintenance. No fluff or redundant information.

Completeness: 4/5

Given zero parameters and no output schema, the description is largely complete: it explains the tool's current behavior and future updates. It could mention the return format (e.g., 'returns a list of entries'), but the empty-list context makes that acceptable.

Parameters: 4/5

The input schema has zero parameters, so schema description coverage is 100%. The baseline score for no parameters is 4, and the description does not add any parameter information, which is appropriate.

Purpose: 5/5

The description clearly states the tool's purpose: 'Browse the curated ONNX video encoder catalog.' It specifies the verb 'browse' and the resource 'ONNX video encoder catalog,' distinguishing it from sibling tools like list_video_models or list_vision_catalog.

Usage Guidelines: 3/5

The description notes that Wave 1 returns an empty list and is re-evaluated quarterly, implying the tool is used to check for available ONNX video encoders. However, it does not explicitly say when to use this tool versus other similar catalog tools, nor does it state when not to use it.

list_video_models (Grade A)

List video encoder models currently loaded on this node. Wave 1 catalog ships empty pending license clearance + ONNX export (V-JEPA 2.1, VideoMAE).

Parameters (JSON Schema)

No parameters

Behavior: 4/5

With no annotations provided, the description bears the full burden of behavioral transparency. It discloses that the tool returns currently loaded models and, importantly, notes that the catalog may be empty pending license clearance and ONNX export. This is honest about expected behavior and potential limitations.

Conciseness: 5/5

Two sentences with no wasted words. The first sentence delivers the core purpose, and the second adds crucial context about expected emptiness. Every word earns its place.

Completeness: 4/5

Given zero annotations and no output schema, the description is reasonably complete. It explains the resource, scope, and current state. It does not describe the structure of returned items, but for a list tool the agent can infer typical model metadata fields from context.

Parameters: 4/5

The input schema has zero parameters and schema coverage is 100%, so the description adds no parameter information. The baseline is 4 for a no-parameter tool, and the description appropriately focuses on purpose rather than empty parameters.

Purpose: 5/5

The description clearly states the verb 'list' and the specific resource 'video encoder models currently loaded on this node'. It is distinguished from sibling list tools by specifying 'video encoder' rather than generic 'video models' or other modalities. The additional context about the Wave 1 catalog being empty adds clarity about the tool's current state.

Usage Guidelines: 4/5

The description implies when to use this tool (to check loaded video encoder models) and provides context about likely empty results due to licensing and export issues. While it does not explicitly mention alternatives, the sibling names make it clear that other list tools exist for different model types.

list_vision_catalog (Grade A)

Browse the curated ONNX vision encoder catalog: DINOv3 (small/base/large), SigLIP2 (base/large/so400m), CLIP ViT-B/32 + L/14, DINOv2. Each entry carries input_size, embedding_dim, normalization preset, and license tier.

Parameters (JSON Schema)

No parameters

Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It does not mention read-only nature, authentication requirements, rate limits, or whether the list is static or dynamic. The description only states the content, not operational characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single efficient sentence that front-loads the purpose and lists key models and fields. Every word is informative; no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description explains the content of each catalog entry, it does not specify the overall return format (e.g., array, object) or any metadata like pagination. Since there is no output schema, the agent must infer structure from the description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, so schema coverage is 100%. The description adds value by explaining what information each entry in the catalog contains (input_size, embedding_dim, etc.), which is beyond the empty schema. Baseline for 0 parameters is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is for browsing a curated ONNX vision encoder catalog, listing specific models (DINOv3, SigLIP2, CLIP, DINOv2) and the fields each entry carries. This distinguishes it from sibling list_* tools like list_audio_catalog or list_detection_catalog, which deal with different domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when an agent needs vision encoder models from the curated catalog, but it does not explicitly differentiate from the sibling tool list_vision_models. No exclusions or alternative tool recommendations are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_vision_models (Grade A)

List vision encoder models currently loaded on this node (DINOv3, SigLIP2, CLIP, etc.). Use list_vision_catalog to browse available models.

Parameters (JSON Schema): none

Behavior: 2/5

No annotations are provided, so the description must fully disclose behavior. It states models are 'currently loaded on this node', implying a read operation, but does not mention side effects, authorization needs, rate limits, or performance considerations.

Conciseness: 5/5

The description is only two sentences, front-loading the purpose and providing an alternative in the second sentence. Every word earns its place.

Completeness: 4/5

Given the simple nature (no parameters, no output schema), the description is fairly complete. It states what is listed and how it differs from the catalog. It omits details like return format or ordering, but these are minor for a list tool.

Parameters: 4/5

The input schema is empty (no parameters), so the baseline is 4. The description adds value by specifying the resource type and scope beyond the schema's empty definition.

Purpose: 5/5

The description clearly states the action ('List'), the resource ('vision encoder models'), and the scope ('currently loaded on this node'). It provides examples (DINOv3, SigLIP2, CLIP) and distinguishes from the sibling tool list_vision_catalog.

Usage Guidelines: 5/5

The description explicitly tells the agent when to use this tool vs. an alternative: 'Use list_vision_catalog to browse available models.' This provides clear guidance on tool selection.

list_x402_schemes (Grade A)

List the x402 scheme backends registered on this node. Each scheme corresponds to a different verification path under the x402 protocol: 'tenzro-hybrid' (Ed25519 hybrid sig over canonical preimage), 'exact-eip3009' (USDC EIP-3009 meta-tx via CDP facilitator), 'permit2' (Uniswap Permit2 via CDP facilitator), 'erc7710' (delegation redemption). Use the returned ids in the 'extra.scheme' field of an x402 PaymentRequirement.

Parameters (JSON Schema): none
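The workflow named in the description (take a returned scheme id and place it in the 'extra.scheme' field of a PaymentRequirement) can be sketched as follows. The scheme ids and the 'extra.scheme' field come from the tool description; the surrounding PaymentRequirement fields are illustrative assumptions, not the protocol's actual shape.

```python
# Hypothetical sketch: wire a scheme id returned by list_x402_schemes
# into the 'extra.scheme' field of an x402 PaymentRequirement.
# Only the scheme ids and 'extra.scheme' are from the description.

KNOWN_SCHEMES = {"tenzro-hybrid", "exact-eip3009", "permit2", "erc7710"}

def build_payment_requirement(scheme_id: str, amount: str) -> dict:
    """Embed a registered scheme id in a PaymentRequirement's extra.scheme."""
    if scheme_id not in KNOWN_SCHEMES:
        raise ValueError(f"unknown x402 scheme: {scheme_id}")
    return {
        "amount": amount,                 # assumed field
        "extra": {"scheme": scheme_id},   # field named in the description
    }

req = build_payment_requirement("tenzro-hybrid", "1000")
```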

Behavior: 3/5

No annotations are provided, so the description carries the full burden. It describes the operation as listing, implying read-only, but does not explicitly confirm safety, side effects, or permissions. Basic transparency is present.

Conciseness: 5/5

The description is three sentences, each serving a distinct purpose: stating the action, listing examples, and providing usage guidance. No redundant content, well-structured.

Completeness: 4/5

Given the simplicity of the tool (no parameters, no output schema), the description is mostly complete. It explains what is listed and how to use the results, though the exact structure of the returned items is not detailed.

Parameters: 4/5

The tool has zero parameters, so baseline score is 4. The description adds value by explaining the meaning of the schemes and how to use the result, which is more than just restating the schema.

Purpose: 5/5

The description clearly states the tool lists x402 scheme backends and provides examples of schemes. It specifies the verb 'list' and the resource 'x402 scheme backends', distinguishing it from other list_* siblings by being specific to the x402 protocol.

Usage Guidelines: 4/5

The description gives clear context on how to use the output (use returned ids in PaymentRequirement), but does not explicitly state when to use this tool versus alternatives or when not to use it.

list_zk_circuits (Grade A)

List all available ZK circuits on the node — name, AIR type, field, and hash function.

Parameters (JSON Schema): none

Behavior: 2/5

No annotations are provided, so the description must fully communicate behavioral traits. It does not mention whether the list is paginated, if there are any side effects, or any other behavioral details beyond the bare listing.

Conciseness: 5/5

The description is a single focused sentence with no extraneous words. The core purpose is front-loaded and immediately understandable.

Completeness: 4/5

For a simple list tool with no parameters and no output schema, the description captures the essential return fields. While it could mention potential limitations like pagination, it is reasonably complete for the tool's simplicity.

Parameters: 4/5

The input schema has zero parameters, and the description adds value by specifying what data the tool returns (name, AIR type, field, hash function), which is above the baseline for a parameterless tool.

Purpose: 5/5

The description clearly states the specific action (list) and resource (ZK circuits) and enumerates the fields returned (name, AIR type, field, hash function). It distinguishes itself well from sibling tools like create_zk_proof or verify_zk_proof.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives, nor does it mention any prerequisites or exclusions. It simply states what it does.

mint_nft (Grade A)

Mint a new NFT in an existing collection. For ERC-721, each token_id is unique. For ERC-1155, you can mint multiple copies of the same token_id. Returns the minted token details.

Parameters (JSON Schema):
- to (required): Hex-encoded recipient address
- uri (required): Token metadata URI (e.g. 'https://metadata.tenzro.com/collection/1.json')
- amount (optional, default 1): Amount to mint (only for ERC-1155)
- token_id (required): Token ID within the collection (numeric string, e.g. '1')
- collection_id (required): Collection ID (UUID from create_nft_collection)
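The ERC-721 vs. ERC-1155 distinction in the description can be made concrete with a small argument-building sketch. This is our illustration, not the server's implementation; detecting the standard via a 'standard' argument is an assumption.

```python
# Illustrative sketch (not the server's validator): build mint_nft
# arguments per the description's ERC-721 vs ERC-1155 rules.
# The 'standard' parameter is an assumption for illustration.

def build_mint_args(collection_id, to, token_id, uri, standard, amount=1):
    if standard == "ERC-721" and amount != 1:
        # ERC-721 token_ids are unique; only one copy can exist
        raise ValueError("ERC-721 mints exactly one copy per token_id")
    args = {
        "collection_id": collection_id,  # UUID from create_nft_collection
        "to": to,
        "token_id": token_id,            # numeric string, e.g. '1'
        "uri": uri,
    }
    if standard == "ERC-1155":
        args["amount"] = amount          # multiple copies of same token_id
    return args

args = build_mint_args(
    "uuid-123", "0xabc", "1",
    "https://metadata.tenzro.com/collection/1.json",
    "ERC-1155", amount=5,
)
```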
Behavior: 2/5

No annotations are provided, so the description carries the full burden. It mentions mutability (minting) and token standard behaviors but does not disclose side effects like on-chain state changes, gas costs, permission requirements, or error scenarios. Critical gaps for a mutation tool.

Conciseness: 4/5

Three sentences with front-loaded primary action. No redundancy, but slightly verbose in explaining standards. Efficient overall.

Completeness: 3/5

No output schema, so description must explain returns: 'Returns the minted token details' is minimal. Missing details on error handling, preconditions (collection must exist, token_id uniqueness), and execution expectations. Could be more complete for a complex minting tool.

Parameters: 3/5

Input schema covers all 5 parameters with clear descriptions (100% coverage). The description adds no new semantics beyond the schema. Baseline 3 is appropriate as the schema does the heavy lifting.

Purpose: 5/5

The description clearly states the action (mint a new NFT) and the resource (existing collection). It distinguishes between ERC-721 and ERC-1155, adding nuance. No direct sibling does minting, so differentiation from siblings is implicit but sufficient.

Usage Guidelines: 4/5

The description implies usage requires an existing collection and explains token standard differences. However, it does not explicitly state prerequisites or when to use alternatives (e.g., transfer_nft vs mint). This is adequate given the unique tool purpose.

oauth_discovery (Grade A)

RFC 8414 / RFC 9728 OAuth Authorization Server / Protected Resource Metadata. Returns the same metadata document the AS publishes at GET /.well-known/openid-configuration, augmented with AAP-specific extensions (authorization_details_types_supported, aap_claims_supported, dpop_signing_alg_values_supported).

Parameters (JSON Schema): none
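A caller can sanity-check that the returned metadata document carries the AAP-specific extensions the description names. The three key names come from the description; the sample document values are placeholder assumptions.

```python
# Sketch: verify a discovery document carries the AAP-specific
# extensions named in the description. Key names are from the
# description; sample values are placeholder assumptions.

AAP_EXTENSION_KEYS = (
    "authorization_details_types_supported",
    "aap_claims_supported",
    "dpop_signing_alg_values_supported",
)

sample_metadata = {  # stand-in for GET /.well-known/openid-configuration
    "issuer": "https://as.example",
    "authorization_details_types_supported": ["aap:payment"],
    "aap_claims_supported": ["agent_did"],
    "dpop_signing_alg_values_supported": ["ES256"],
}

def missing_aap_extensions(metadata: dict) -> list:
    """Return the AAP extension keys absent from a metadata document."""
    return [k for k in AAP_EXTENSION_KEYS if k not in metadata]

assert missing_aap_extensions(sample_metadata) == []
```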

Behavior: 4/5

No annotations provided, but the description fully discloses that the tool returns metadata and lists the extensions. It implies a read-only operation with no side effects, which is adequate for this simple tool. However, it could explicitly state its read-only nature.

Conciseness: 5/5

Two sentences precisely define the tool's purpose and content. No extraneous words. Front-loaded with standard references, then specifics. Excellent conciseness.

Completeness: 5/5

For a zero-parameter tool with no output schema, the description covers what the tool returns (metadata document with extensions). The complexity is low, and the description is complete for an agent to understand its function.

Parameters: 4/5

No parameters in input schema (0 params), so baseline 4. Description does not need to add parameter information; it's sufficient.

Purpose: 5/5

Description clearly states it returns OAuth Authorization Server metadata per RFC 8414/9728 with specific AAP extensions. The verb 'returns' and resource 'metadata document' specify exactly what the tool does, and the mention of AAP-specific extensions distinguishes it from standard OAuth discovery tools.

Usage Guidelines: 4/5

Description implies usage for getting OAuth metadata, but does not explicitly state when to use or alternatives among sibling tools like introspect_token. The context is clear enough for a simple metadata fetch tool, but lacks explicit guidance on alternatives.

open_payment_channel (Grade A)

Open a micropayment channel for off-chain per-token billing. Sender deposits TNZO into the channel for streaming payments to the recipient. Returns the channel ID.

Parameters (JSON Schema):
- sender (required): Hex-encoded sender address (opens and funds the channel)
- recipient (required): Hex-encoded recipient address
- deposit_tnzo (required): Initial deposit amount in TNZO
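The off-chain per-token billing the description mentions amounts to decrementing the deposited balance as tokens stream. A minimal bookkeeping sketch, under stated assumptions: the Channel class, per-token rate, and exhaustion rule are ours, not the server's API.

```python
# Illustrative off-chain bookkeeping for a micropayment channel,
# assuming per-token billing against the initial TNZO deposit.
# The Channel class and rate handling are assumptions.

class Channel:
    def __init__(self, deposit_tnzo: float):
        self.remaining = deposit_tnzo

    def bill_tokens(self, tokens: int, price_per_token_tnzo: float) -> float:
        """Deduct the cost of streamed tokens; return the remaining deposit."""
        cost = tokens * price_per_token_tnzo
        if cost > self.remaining:
            raise ValueError("deposit exhausted; top up or close the channel")
        self.remaining -= cost
        return self.remaining

ch = Channel(deposit_tnzo=1.0)
left = ch.bill_tokens(1000, 0.0001)  # 1000 tokens billed off-chain
```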
Behavior: 2/5

With no annotations, the description must fully disclose behavior. It reveals the core action (deposit and stream) but omits details like the sender's balance impact, potential fees, or whether a duplicate channel can be created for the same pair.

Conciseness: 5/5

Two precise sentences with no wasted words. Essential information is front-loaded (action, purpose, return value).

Completeness: 3/5

Given the complexity of off-chain billing and the absence of an output schema, the description could explain the lifecycle (e.g., how streaming works) or link to related tools like 'settle_payment' for completeness.

Parameters: 4/5

Schema descriptions cover all three parameters with clear explanations. The description adds value by clarifying that the sender 'opens and funds' the channel, and notes the deposit is in TNZO.

Purpose: 5/5

The description clearly states the tool opens a micropayment channel for off-chain per-token billing, specifies the action (deposit and stream), and mentions the return value (channel ID). It distinguishes from siblings like 'close_payment_channel' and 'settle_payment'.

Usage Guidelines: 2/5

No explicit guidance on when to use this tool versus alternatives (e.g., when a channel already exists). No prerequisites or context for optimal use are provided.

pause_agent (Grade A)

Pause an agent (reversible). Halts all outbound A2A messaging and inference dispatch but preserves stake. Requires controller_did to match the agent's registered controller. NOTE: this MCP tool describes the operation only — the transaction must be signed and submitted via tenzro_signAndSendTransaction with type=PauseAgent.

Parameters (JSON Schema):
- reason (required): Free-text reason recorded on the kill-switch receipt
- agent_did (required): DID of the agent to pause
- controller_did (required): DID of the controller authorizing the pause (must match controller_did on the agent identity)
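The describe-then-sign pattern in the description can be sketched as a two-step flow: this tool only describes the pause, and the caller then submits a type=PauseAgent transaction via tenzro_signAndSendTransaction. The envelope shape below is an assumption; only the transaction type and field names come from the source.

```python
# Sketch of the describe-then-sign pattern: this MCP tool describes
# the operation; the caller submits a PauseAgent transaction via
# tenzro_signAndSendTransaction. The envelope shape is an assumption.

def build_pause_tx(agent_did: str, controller_did: str, reason: str) -> dict:
    return {
        "type": "PauseAgent",              # transaction type from the description
        "agent_did": agent_did,
        "controller_did": controller_did,  # must match registered controller
        "reason": reason,                  # recorded on the kill-switch receipt
    }

tx = build_pause_tx(
    "did:tenzro:machine:abc", "did:tenzro:human:xyz", "anomalous behavior"
)
# Next step (hypothetical call): tenzro_signAndSendTransaction(tx)
```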
Behavior: 4/5

With no annotations, the description carries the full burden. It discloses that the operation is reversible, what it halts, and that stake is preserved. Also clarifies that this MCP tool only describes the operation and requires separate signing, adding transparency.

Conciseness: 5/5

Two sentences plus a note, extremely concise and front-loaded with the core action. Every sentence adds value without redundancy.

Completeness: 5/5

Given no annotations, no output schema, and 3 parameters, the description covers purpose, behavioral effects, prerequisite, and execution method. Complete for a simple reversible pause operation.

Parameters: 3/5

Schema coverage is 100%, so baseline is 3. The description does not add extra meaning beyond what the schema already provides for the three parameters (agent_did, controller_did, reason). Adequate but no enhancement.

Purpose: 5/5

The description clearly states it pauses an agent, is reversible, and specifies what it halts (outbound A2A messaging and inference dispatch) while preserving stake. It distinguishes from siblings like terminate_agent via reversibility hint.

Usage Guidelines: 4/5

Describes prerequisite (controller_did must match) and notes that the transaction must be signed via tenzro_signAndSendTransaction. Does not explicitly mention when not to use or compare with alternatives, but provides sufficient context for correct usage.

post_agent_bond (Grade A)

Post an AgentBond surety against an agent DID (Agent-Swarm Spec 9). An Active bond ≥ bond_min_for_promotion promotes a Machine identity into the Delegated admission lane even when its controller is unknown / sub-Enhanced KYC. NOTE: this MCP tool describes the operation only — sign and submit a PostAgentBond transaction via tenzro_signAndSendTransaction.

Parameters (JSON Schema):
- from (required): Controller wallet address (hex). Must match the signing key.
- amount (required): Bond amount in wei as a decimal string (1 TNZO = 10^18 wei)
- agent_did (required): DID of the agent the bond is posted against (e.g. did:tenzro:machine:...)
- controller_did (required): Controller DID (e.g. did:tenzro:human:...) authorizing the bond
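Since 'amount' is wei as a decimal string (1 TNZO = 10^18 wei, per the schema), a conversion helper avoids float rounding. The helper itself is ours, not part of the server's API.

```python
# Convert a TNZO amount to the wei decimal string the 'amount'
# parameter expects (1 TNZO = 10^18 wei, per the schema).
# The helper is illustrative, not the server's API.
from decimal import Decimal

WEI_PER_TNZO = 10**18

def tnzo_to_wei_str(tnzo: str) -> str:
    wei = Decimal(tnzo) * WEI_PER_TNZO
    if wei != wei.to_integral_value():
        raise ValueError("sub-wei precision not representable")
    return str(int(wei))

amount = tnzo_to_wei_str("1.5")  # -> '1500000000000000000'
```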
Behavior: 4/5

Since no annotations are provided, the description carries full burden. It includes an important note that the tool only describes the operation and that the actual transaction must be signed and submitted via tenzro_signAndSendTransaction. This is good behavioral disclosure, though it could mention more about side effects or prerequisites.

Conciseness: 5/5

The description is concise: two sentences and a note. Every sentence adds value, no wasted words. The note about the sign-and-submit pattern is well-placed.

Completeness: 2/5

The tool lacks an output schema, and the description does not explain what the tool returns. It states the tool 'describes the operation only', but it is unclear what the agent receives (e.g., transaction data to sign) and how to proceed. This is a significant gap for an actionable tool.

Parameters: 3/5

The input schema has 100% coverage with descriptions for all parameters. The tool's description does not add significant additional meaning beyond what is already in the schema. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the tool's purpose: posting an AgentBond surety against an agent DID. It is specific about the resource (AgentBond) and action (post), and references the Agent-Swarm Spec 9, which distinguishes it from other tools on the server.

Usage Guidelines: 4/5

The description explains when to use the tool: when a bond is needed to promote a Machine identity into the Delegated admission lane, especially when the controller is unknown or sub-Enhanced KYC. However, it does not explicitly state when not to use it or mention alternative tools like get_agent_bond, which is a sibling.

post_task (Grade B)

Post a new task to the Tenzro Network task marketplace. Returns the created task with its UUID. Tasks can request AI inference, code review, data analysis, content generation, agent execution, translation, or research.

Parameters (JSON Schema):
- input (required): Input data or prompt for the task
- title (required): Short title for the task (e.g. 'Translate README to Spanish')
- deadline (optional): Unix timestamp deadline for task completion
- priority (optional, default normal): Task priority: low, normal, high, urgent
- task_type (required): Task type: inference, code_review, data_analysis, content_generation, agent_execution, translation, research, or custom:<value>
- description (required): Detailed description of the task and expected output
- max_price_tnzo (required): Maximum price willing to pay in TNZO (e.g. 1.5 for 1.5 TNZO)
- poster_address (required): Hex-encoded address of the task poster
- required_model (optional): Minimum model size required (e.g. '7b', '70b')
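The task_type grammar above, including the 'custom:<value>' form, can be checked with a small validator. This is our sketch of the rule as the parameter description states it, not the marketplace's actual validation logic.

```python
# Illustrative validation of post_task's task_type values, including
# the 'custom:<value>' form from the parameter description.
# This is a sketch, not the marketplace's actual validator.

BUILTIN_TYPES = {
    "inference", "code_review", "data_analysis", "content_generation",
    "agent_execution", "translation", "research",
}

def valid_task_type(task_type: str) -> bool:
    if task_type in BUILTIN_TYPES:
        return True
    # 'custom:' must carry a non-empty value after the prefix
    return task_type.startswith("custom:") and len(task_type) > len("custom:")

assert valid_task_type("translation")
assert valid_task_type("custom:benchmarking")
assert not valid_task_type("custom:")
```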
Behavior: 3/5

With no annotations, the description must disclose all behavioral traits. It states the tool creates a task and returns a UUID, which implies mutation. However, it omits details like required balances, confirmation steps, or whether posting is immediately visible. It adds some value but falls short given the lack of annotations.

Conciseness: 5/5

Two sentences, no redundancy. Every word serves a purpose, and the description is front-loaded with the core action and outcome.

Completeness: 3/5

For a tool with 9 parameters and no output schema, the description covers the essential return value but lacks detail on the task lifecycle, pricing mechanics, or success/failure conditions. It is minimally viable but not comprehensive.

Parameters: 3/5

Schema coverage is 100%, so all parameters have descriptions in the schema. The tool's description adds only a list of task types (already in schema), providing marginal additional meaning. Baseline 3 is appropriate.

Purpose: 4/5

The description clearly states the action ('post') and the resource ('new task to the Tenzro Network task marketplace'), and mentions the return value (UUID). It implicitly distinguishes from sibling tools like 'complete_task' or 'cancel_task' by focusing on creation, but lacks explicit differentiation.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives (e.g., for modifying a task or viewing existing ones). The description only explains what it does, not the context or prerequisites.

quarantine_agent (Grade A)

Quarantine an agent (reversible). Halts messaging AND freezes all stake (blocks unstake/withdraw, allows slash). Requires controller_did to match the agent's registered controller. Optionally accepts a 32-byte evidence hash for off-chain audit linkage. NOTE: this MCP tool describes the operation only — the transaction must be signed and submitted via tenzro_signAndSendTransaction with type=QuarantineAgent.

Parameters (JSON Schema):
- reason (required): Free-text reason recorded on the kill-switch receipt
- agent_did (required): DID of the agent to quarantine (freezes stake, blocks all messaging)
- evidence_hash (optional): 32-byte evidence hash (hex-encoded, with or without 0x prefix)
- controller_did (required): DID of the controller authorizing the quarantine
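The evidence_hash constraint (32 bytes, hex-encoded, with or without a 0x prefix) can be normalized client-side before submission. The helper below is illustrative, not the node's implementation.

```python
# Sketch: normalize the optional evidence_hash per the schema
# (32 bytes, hex-encoded, with or without a 0x prefix).
# The helper is illustrative, not the node's implementation.

def normalize_evidence_hash(h: str) -> str:
    raw = h[2:] if h.lower().startswith("0x") else h
    if len(raw) != 64:  # 32 bytes = 64 hex characters
        raise ValueError("evidence_hash must be 32 bytes (64 hex chars)")
    bytes.fromhex(raw)  # raises ValueError on non-hex input
    return raw.lower()

digest = normalize_evidence_hash("0x" + "ab" * 32)
```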
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Fully discloses behavioral traits: halts messaging, freezes stake, blocks unstake/withdraw but allows slash, and notes reversibility. Also clarifies that it only describes the operation and transaction must be signed separately, which is critical for agent understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences: first defines the operation, second details effects and prerequisite, third clarifies transaction signing. Efficient and front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers purpose, effects, prerequisites, optional parameter details, and transaction workflow. Lacks information on return values or errors, but overall adequate for the tool's complexity (4 params, no output schema).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds minor value by specifying that evidence_hash is hex-encoded with or without 0x prefix, but mostly restates schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Quarantine an agent (reversible)' and details the specific effects (halts messaging, freezes stake). Distinguishes from sibling tools like 'pause_agent' and 'terminate_agent' through the reversible nature and freeze action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit prerequisite: 'Requires controller_did to match the agent's registered controller.' Also mentions optional evidence hash for audit. However, it does not explicitly state when to use quarantine vs pause or terminate, though the effects imply distinctions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quote_task (A)

Submit a price quote for a task in the Tenzro Network task marketplace. Providers call this to bid on open tasks. The quote includes price, model to use, and estimated completion time.

Parameters
- notes (optional): Optional notes from the provider
- task_id (required): UUID of the task to quote
- model_id (required): Model ID the provider will use to complete the task
- confidence (optional): Provider confidence score 0-100
- price_tnzo (required): Quoted price in TNZO (e.g. 0.5)
- estimated_secs (required): Estimated time to complete the task in seconds
- provider_address (required): Hex-encoded address of the provider submitting the quote
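The confidence range (0-100) and the positive-price expectation are easy to enforce client-side before the call. A sketch with a hypothetical helper (names and checks are illustrative, not part of the server API):

```python
def build_quote_args(task_id, model_id, price_tnzo, estimated_secs,
                     provider_address, confidence=None, notes=None):
    # Basic sanity checks implied by the quote_task schema.
    if price_tnzo <= 0:
        raise ValueError("price_tnzo must be a positive TNZO amount")
    if confidence is not None and not 0 <= confidence <= 100:
        raise ValueError("confidence must be in the range 0-100")
    args = {"task_id": task_id, "model_id": model_id, "price_tnzo": price_tnzo,
            "estimated_secs": estimated_secs, "provider_address": provider_address}
    if confidence is not None:
        args["confidence"] = confidence
    if notes:
        args["notes"] = notes
    return args
```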
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description must disclose behavioral traits. It only says 'Submit a price quote' but fails to mention side effects (e.g., creates a pending quote), permissions, or lifecycle dependencies (e.g., need for assign_task after acceptance). Limited transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no redundancy. First sentence states action and resource; second adds user role and components. Efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters, no output schema, and no annotations, the description covers purpose and ingredients but omits behavioral effects (e.g., immediate quote creation) and workflow context (e.g., next steps after quoting). Adequate but not fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description mentions 'price, model to use, and estimated completion time,' which adds a summary but does not provide additional meaning beyond the existing schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb 'Submit a price quote for a task' and the resource ('Tenzro Network task marketplace'). It distinguishes from siblings by specifying it's for providers bidding on open tasks, contrasting with assign_task, complete_task, etc.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description explicitly states 'Providers call this to bid on open tasks,' indicating who and when to use it. No explicit when-not-to-use, but the context of sibling tools like assign_task implies alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rate_agent_template (A)

Rate an agent template on the Tenzro Network marketplace. Ratings are 1-5 and help others discover quality templates. Optionally include a text review.

Parameters
- rating (required): Rating from 1 (worst) to 5 (best)
- review (optional): Optional review comment
- template_id (required): Agent template ID to rate
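The 1-5 rating bound is the only constraint the schema states, and it can be checked before the call. A sketch with a hypothetical helper (not part of the server API):

```python
def build_rating_args(template_id, rating, review=None):
    # The schema bounds rating to 1 (worst) through 5 (best).
    if rating not in (1, 2, 3, 4, 5):
        raise ValueError("rating must be an integer from 1 to 5")
    args = {"template_id": template_id, "rating": rating}
    if review:
        args["review"] = review
    return args
```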
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosing behavioral traits. It states that ratings are 1-5 and optionally include a review, but it does not reveal whether ratings can be updated, if authentication is required, or if the action is reversible. These are significant gaps for a write operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, using two short sentences that front-load the action and resource. Every word adds value, with no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple rating tool with 3 parameters and no output schema, the description covers the basics but misses important context like return values, update behavior, or authentication requirements. It is adequate but not complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description reiterates the rating scale and optional review, adding minor context but not introducing new information beyond what's already in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Rate'), the resource ('agent template'), and the context ('Tenzro Network marketplace'). It also specifies the rating scale (1-5) and mentions optional reviews. This distinctively separates it from sibling tools like register, update, or list agent templates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for rating templates to help others discover quality templates, but it does not explicitly state when to use this tool versus alternatives (e.g., for creating or updating templates). No when-not or preconditions are provided, leaving the agent to infer.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

refund_escrow (A)

Refund escrowed funds back to the payer via a signed RefundEscrow transaction. Only the original payer can refund, AND the escrow must be expired (or use Timeout/Custom release conditions).

Parameters
- payer (required): Hex-encoded payer address (must match the bearer's wallet)
- escrow_id (required): 32-byte escrow ID (hex, with or without 0x prefix)
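The escrow_id format ("hex, with or without 0x prefix") can be normalized before the RefundEscrow transaction is assembled. A sketch with a hypothetical helper (not part of the server API):

```python
def build_refund_args(payer, escrow_id):
    # escrow_id is a 32-byte hex ID, accepted with or without a 0x prefix.
    eid = escrow_id.lower().removeprefix("0x")
    if len(eid) != 64 or any(c not in "0123456789abcdef" for c in eid):
        raise ValueError("escrow_id must be a 32-byte hex string")
    return {"payer": payer, "escrow_id": eid}
```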
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: it performs a refund (likely modifying state), requires the caller to be the original payer, and the escrow to be expired or under specified conditions. It does not explicitly state potential errors or side effects, but the information is sufficient for an agent to understand the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences: the first states purpose and mechanism, the second adds critical usage conditions. No redundant information, and the key points are front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with two well-described parameters and no output schema, the description covers purpose, mechanism, and conditions comprehensively. An agent can determine when and how to use it correctly without ambiguity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning the input schema already describes both parameters (payer and escrow_id) with meaning. The tool description adds no additional information about the parameters beyond what is in the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool refunds escrowed funds to the payer via a signed transaction. It uses specific verb ('Refund') and resource ('escrowed funds'), and distinguishes from sibling tools like create_escrow and release_escrow by specifying the refund action and conditions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the conditions for use: 'Only the original payer can refund' and 'the escrow must be expired (or use Timeout/Custom release conditions).' This tells the agent when to call and when not to, though it does not name alternative tools for other scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_agent (A)

Register a new AI agent identity on the Tenzro Network. Creates a TDIP DID with an auto-provisioned MPC wallet. Returns the agent DID and wallet address.

Parameters
- name (required): Human-readable agent name
- endpoint (optional): Optional endpoint URL for remote agent invocation
- agent_type (required): Agent type: 'autonomous', 'assistant', 'validator', 'oracle'
- capabilities (required): List of capability names this agent declares (e.g. ['inference', 'settlement'])
- controller_did (required): DID of the controller (human or machine) that owns this agent
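The agent_type enum and the non-empty capabilities list can be validated before calling. A sketch with a hypothetical helper (names and checks are illustrative, not part of the server API):

```python
VALID_AGENT_TYPES = {"autonomous", "assistant", "validator", "oracle"}

def build_register_agent_args(name, agent_type, capabilities, controller_did, endpoint=None):
    # agent_type is restricted to the four values named in the schema.
    if agent_type not in VALID_AGENT_TYPES:
        raise ValueError(f"agent_type must be one of {sorted(VALID_AGENT_TYPES)}")
    if not capabilities:
        raise ValueError("capabilities must list at least one capability name")
    args = {"name": name, "agent_type": agent_type,
            "capabilities": list(capabilities), "controller_did": controller_did}
    if endpoint:
        args["endpoint"] = endpoint
    return args
```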
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the creation of a TDIP DID and auto-provisioned MPC wallet, but omits details like required permissions, costs, or side effects beyond the return values.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no extraneous information. The first sentence states the purpose, and the second describes the outputs. Perfectly concise and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a creation tool with no output schema and no annotations, the description should specify the return format (e.g., structure of DID and wallet address) and any prerequisites (e.g., need for a valid controller DID). Currently, it leaves gaps for an AI agent relying solely on the description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the schema already provides detailed descriptions for each parameter. The tool description does not add any additional meaning or context beyond what the schema offers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the action 'Register a new AI agent identity', specifies the network 'Tenzro Network', and lists outputs 'agent DID and wallet address'. This distinguishes it from sibling tools like register_identity or register_agent_template.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for registering AI agents but does not explicitly state when to use this tool over alternatives, nor does it provide any exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_agent_template (B)

Publish an agent template to the Tenzro Network agent marketplace. Templates define reusable AI agent configurations with system prompts, capabilities, and pricing. Others can discover and deploy your template.

Parameters
- name (required): Name of the agent template
- tags (optional): Comma-separated tags for discoverability (e.g. 'coding,rust,debugging')
- pricing (optional): Pricing model: free, per_execution:<price>, per_token:<price>, subscription:<monthly>, revenue_share:<bps>
- version (optional): Version string in semver format (default: 1.0.0)
- docs_url (optional): URL to documentation or repository
- creator_did (optional): Optional creator DID binding (e.g. 'did:tenzro:human:uuid' or 'did:tenzro:machine:uuid'). If provided, the template is publicly attributed to this identity.
- description (required): Detailed description of what the agent does
- system_prompt (required): System prompt / agent instructions
- template_type (required): Template type: autonomous, tool_agent, orchestrator, specialist, multi_modal, or custom:<value>
- creator_wallet (optional): Optional hex-encoded payout wallet address for creator revenue. REQUIRED when pricing is not 'free' — paid templates without a payout wallet will be rejected.
- creator_address (required): Hex-encoded address of the template creator
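The schema's one cross-parameter rule (creator_wallet is required whenever pricing is not 'free') is exactly the kind of interaction worth enforcing client-side. A sketch with a hypothetical helper (not part of the server API):

```python
def build_template_args(name, description, system_prompt, template_type,
                        creator_address, pricing="free", creator_wallet=None, **optional):
    # Per the schema note: paid templates without a payout wallet are rejected.
    if pricing != "free" and not creator_wallet:
        raise ValueError("creator_wallet is required when pricing is not 'free'")
    args = {"name": name, "description": description, "system_prompt": system_prompt,
            "template_type": template_type, "creator_address": creator_address,
            "pricing": pricing}
    if creator_wallet:
        args["creator_wallet"] = creator_wallet
    # Pass through optional fields (tags, version, docs_url, creator_did) if set.
    args.update({k: v for k, v in optional.items() if v is not None})
    return args
```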
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description bears full responsibility for behavioral disclosure. It confirms creation but omits side effects (e.g., whether duplicates are allowed, if updates require replacement, or the need for authorization). The mutation impact is vague.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three concise sentences that front-load the core purpose. It avoids redundancy but could be slightly more structured (e.g., separating output from usage).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 11 parameters and no output schema, the description is adequate but incomplete. It explains the marketplace context and end-goal (discovery and deployment) but does not describe the return value (e.g., template ID) or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds a high-level overview ('Templates define reusable AI agent configurations...') but does not provide additional semantic context that the schema lacks.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action (publish/register) and the resource (agent template) in the context of the Tenzro Network marketplace. It distinguishes from siblings like get_agent_template, update_agent_template, and list_agent_templates by focusing on creation and publication.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states the purpose (publish to marketplace) but gives no explicit guidance on when to use versus alternatives (e.g., update_agent_template for editing, run_agent_template for execution). Prerequisites like ownership or wallet setup are not mentioned. This leaves room for ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_app (A)

Register a new application on the Tenzro Network. The master wallet address will sponsor gas for user operations. Returns the app ID and API key.

Parameters
- name (required): Application name
- master_wallet_address (required): Hex-encoded master wallet address that funds user operations
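Since the master wallet sponsors gas for user operations, validating the hex address before the call is cheap insurance. A sketch with a hypothetical helper (the 0x normalization is an assumption, not documented server behavior):

```python
def build_register_app_args(name, master_wallet_address):
    # Validate the address as hex; accept it with or without a 0x prefix.
    addr = master_wallet_address.lower().removeprefix("0x")
    if not addr or any(c not in "0123456789abcdef" for c in addr):
        raise ValueError("master_wallet_address must be hex-encoded")
    return {"name": name, "master_wallet_address": "0x" + addr}
```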
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses a key behavior (master wallet sponsors gas) but omits potential side effects, costs, permissions, or rate limits, making it adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose and return values, no redundancy or fluff. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description covers returns and a behavioral aspect. However, it lacks details on transaction latency, name uniqueness, or approval processes, which would be expected for a registry tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with descriptions for both parameters. The description adds no new details beyond reinforcing the schema, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (register), resource (application), network (Tenzro), and return values (app ID and API key), effectively distinguishing it from sibling registration tools like register_agent or register_provider.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance is provided. The mention of gas sponsorship implies context but does not differentiate from similar tools; usage is inferred but not explicitly guided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_compliance (A)

Register ERC-3643 compliance rules for a token. Defines KYC requirements, holder limits, country restrictions, and balance caps. All transfers of this token will be checked against these rules.

Parameters
- token_id (required): Token ID (hex, 32 bytes) or symbol
- max_holders (optional): Maximum number of token holders (0 for unlimited)
- require_kyc (required): Require KYC verification for all holders
- min_kyc_tier (required): Minimum KYC tier: 0 (unverified), 1 (basic), 2 (enhanced), 3 (full)
- allowed_countries (optional): Allowed country codes (ISO 3166-1 alpha-2, e.g. ['US', 'GB', 'DE']). Empty array means all allowed.
- blocked_countries (optional): Blocked country codes
- max_balance_per_holder (optional): Maximum amount a single address can hold (base units, 0 for unlimited)
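The KYC tier range (0-3) and the ISO 3166-1 alpha-2 country-code format can both be checked before submission. A sketch with a hypothetical helper (not part of the server API):

```python
def build_compliance_args(token_id, require_kyc, min_kyc_tier,
                          allowed_countries=(), blocked_countries=(),
                          max_holders=0, max_balance_per_holder=0):
    # KYC tiers run from 0 (unverified) to 3 (full).
    if min_kyc_tier not in (0, 1, 2, 3):
        raise ValueError("min_kyc_tier must be 0, 1, 2, or 3")
    # Country codes are ISO 3166-1 alpha-2: two uppercase letters.
    for code in list(allowed_countries) + list(blocked_countries):
        if len(code) != 2 or not code.isalpha() or not code.isupper():
            raise ValueError(f"invalid country code: {code!r}")
    return {"token_id": token_id, "require_kyc": require_kyc,
            "min_kyc_tier": min_kyc_tier,
            "allowed_countries": list(allowed_countries),
            "blocked_countries": list(blocked_countries),
            "max_holders": max_holders,
            "max_balance_per_holder": max_balance_per_holder}
```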
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries the full burden. It discloses that the tool affects transfer behavior (all transfers checked), but lacks details on idempotency, permissions, or overwrite behavior. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the action and standard, followed by a clear list of rule types and consequence. No redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the comprehensive schema (all params described) and no output schema, the description covers the tool's purpose and effect. It lacks success response details and prerequisites, but overall sufficient for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by categorizing parameters (KYC, holder limits, etc.) and referencing ERC-3643, which provides context beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it registers ERC-3643 compliance rules for a token, specifying the resource (token) and action (register). It distinguishes from sibling 'check_compliance' by focusing on registration and the effect on transfers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this tool is for setting compliance rules and that transfers will be checked against them, providing clear context. However, it does not explicitly mention when not to use it or name alternatives beyond the implied check.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_identity (A)

Register a human or machine identity on the Tenzro Decentralized Identity Protocol (TDIP). Human identities are self-sovereign; machine identities require a controller DID and receive a delegation scope.

Parameters
- display_name (required): Display name for the identity
- identity_type (required): Identity type: 'human' or 'machine'
- controller_did (optional): Controller DID (required for machine identities, e.g. did:tenzro:human:uuid)
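The conditional requirement the description calls out (controller_did is mandatory only for machine identities) is straightforward to enforce before the call. A sketch with a hypothetical helper (not part of the server API):

```python
def build_identity_args(display_name, identity_type, controller_did=None):
    # Machine identities require a controller DID; human identities are self-sovereign.
    if identity_type not in ("human", "machine"):
        raise ValueError("identity_type must be 'human' or 'machine'")
    if identity_type == "machine" and not controller_did:
        raise ValueError("controller_did is required for machine identities")
    args = {"display_name": display_name, "identity_type": identity_type}
    if controller_did:
        args["controller_did"] = controller_did
    return args
```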
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions that machine identities receive a delegation scope and require a controller DID, but does not disclose side effects (e.g., on-chain creation, costs, reversibility) or error conditions. The description is somewhat transparent but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the main action. Every part is informative and non-redundant. The structure is efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 3 parameters and no output schema or annotations. The description covers the key human/machine distinction but omits expected output (e.g., a DID), error conditions, and any preconditions or limitations. It is adequate but not fully complete given the complexity of identity registration.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents all parameters. The description adds value by explaining that 'controller_did' is required for machine identities and that machine identities receive a delegation scope, which goes beyond the schema's minimal descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'register' and the resource 'identity' on TDIP, and distinguishes between human and machine identities with specific traits. It differentiates from sibling tools like 'resolve_did' or 'register_agent' by providing protocol-specific context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the difference between human and machine identities, noting that machine identities need a controller DID and get a delegation scope. However, it does not explicitly say when not to use this tool or mention alternative registration methods.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_nft_pointer (B)

Register a cross-VM pointer for an NFT collection. Maps the collection to a contract address on another VM (EVM, SVM, or DAML) enabling cross-VM NFT discoverability.

Parameters
- vm (required): Target VM: 'evm', 'svm', or 'daml'
- address (required): Contract address on the target VM (hex-encoded)
- collection_id (required): Collection ID (UUID)
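The vm enum is the only constrained field, so a pre-call check is short. A sketch with a hypothetical helper (not part of the server API):

```python
def build_pointer_args(collection_id, vm, address):
    # Target VM is limited to the three values named in the schema.
    if vm not in ("evm", "svm", "daml"):
        raise ValueError("vm must be 'evm', 'svm', or 'daml'")
    return {"collection_id": collection_id, "vm": vm, "address": address}
```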
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It only states the intent ('registers', 'maps') but does not reveal side effects, idempotency, permission requirements, error handling, or what happens if the collection is already mapped. A write operation lacking these details is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences that efficiently convey the core purpose and mechanism. It front-loads the action and uses no filler words, making it easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description covers the basic purpose and parameters, it omits information about return value, error conditions, and behavioral guarantees. Given no output schema and no annotations, the description should provide a bit more context on what the agent can expect after execution.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all three parameters, so the schema already documents their meaning. The description adds no further semantics or constraints beyond the schema, meeting the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the action 'Register', the resource 'cross-VM pointer for an NFT collection', and the outcome 'maps the collection to a contract address on another VM enabling cross-VM discoverability'. It clearly distinguishes this tool from other NFT-related siblings like create_nft_collection or transfer_nft by focusing on cross-VM mapping.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites or exclusions. For example, it doesn't indicate that the NFT collection must already exist or that you might use get_nft_info afterward to verify.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_provider (B)

Register as a service provider on the Tenzro Network. Providers earn TNZO by serving AI models (inference), providing TEE enclaves (security), validating blocks, or providing storage.

Parameters (JSON Schema)

- name (required): Provider display name
- stake (optional): Initial stake amount in TNZO (e.g. 1000.0)
- provider_type (required): Provider type: 'validator', 'model_provider', 'tee_provider', or 'storage_provider'
- max_concurrent (optional): Maximum concurrent requests to handle (default 10)
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full burden. It mentions 'Register' and earning TNZO, but does not disclose side effects, required permissions, or confirmation steps. The description is minimal for a creation action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences, concise and to the point. It could be slightly more structured, but it efficiently communicates the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 4 parameters, no annotations, and no output schema, the description is too short. It lacks details on return value, confirmation, or what happens after registration, leaving the agent underinformed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds no extra meaning beyond the schema; all parameters are already explained in the input schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool registers a service provider on the Tenzro Network and lists the types of providers (inference, TEE, validator, storage). It distinguishes itself from sibling tools like list_providers or set_provider_pricing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not specify when to use this tool versus alternatives, such as registering an agent or app. It lacks prerequisites or conditions for use, providing no guidance on decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_webhook (B)

Register a webhook URL for event notifications. The Tenzro node will POST JSON event payloads to the registered URL when matching events occur. Each POST includes an HMAC-SHA256 signature in the X-Tenzro-Signature header for verification.

Parameters (JSON Schema)

- url (required): HTTPS URL to receive webhook POST notifications
- secret (required): Shared secret for HMAC-SHA256 webhook signature verification
- addresses (optional): Filter by involved addresses (hex, optional)
- event_types (optional): Event types to subscribe to: 'transfer', 'mint', 'burn', 'stake', 'bridge', 'settlement', 'nft_transfer', 'compliance'
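The HMAC scheme the description specifies can be checked on the receiving side with nothing beyond the standard library. A minimal sketch, assuming the X-Tenzro-Signature header carries the hex-encoded HMAC-SHA256 digest of the raw request body (the header's exact encoding is an assumption; the description does not state it):

```python
import hashlib
import hmac

def verify_tenzro_signature(raw_body: bytes, header_value: str, secret: str) -> bool:
    """Recompute HMAC-SHA256 over the raw POST body and compare it
    against the X-Tenzro-Signature header value in constant time."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header_value)

# Illustrative payload and secret (not real Tenzro values)
body = b'{"event_type": "transfer", "amount": "10.0"}'
secret = "webhook-shared-secret"
good_sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
```

Rejecting any POST whose header fails this check is what makes the shared secret parameter meaningful.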
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that the tool registers a webhook and specifies that the node will POST JSON payloads with an HMAC-SHA256 signature header, which is good. However, it omits details like idempotency, rate limits, error handling, or what happens if the URL becomes unreachable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, consisting of three sentences that directly convey the essential information without any fluff. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the registration process and the behavior of the webhook (POST with signature). However, it does not mention the return value or confirmation, which is typical for registration tools. Given the complexity (4 params, no output schema), a bit more detail on the response would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides 100% coverage with descriptions for all four parameters. The description adds no new parameter information beyond the schema, but it does mention the 'X-Tenzro-Signature' header, which is a minor addition. Baseline 3 applies as schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool registers a webhook URL for event notifications, specifying the action (register), resource (webhook URL), and purpose (receive event notifications). However, it does not explicitly differentiate from the sibling tool 'subscribe_events', which may serve a similar purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, limitations, or when not to use it. The description is purely operational without context for decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

release_escrow (B)

Release escrowed funds to the payee via a signed ReleaseEscrow transaction. Only the original payer can release.

Parameters (JSON Schema)

- payer (required): Hex-encoded payer address (must match the bearer's wallet — only the payer can release)
- escrow_id (required): 32-byte escrow ID (hex, with or without 0x prefix)
- proof_data_hex (optional): Optional hex-encoded proof bytes
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It only mentions the release action and payer restriction. No disclosure of side effects (e.g., escrow closure), error conditions, or what happens after a successful release.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: first states the action, second adds the critical constraint. No unnecessary words or repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lacks information about return values or output (no output schema), error cases, or prerequisites beyond the payer constraint. For a transaction-based tool, this is incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the schema descriptions for parameters are already detailed (e.g., payer address restriction). The main description adds minimal extra meaning beyond restating the payer constraint.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'release escrowed funds' and specifies the resource and scope. It distinguishes from siblings like 'create_escrow' and 'refund_escrow' by noting that only the original payer can release.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes a key constraint ('Only the original payer can release') but does not explicitly guide when to use this tool versus alternatives (e.g., refund_escrow). No mention of prerequisites or when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

request_faucet (A)

Request testnet TNZO tokens from the faucet (100 TNZO per request, 24-hour cooldown per address)

Parameters (JSON Schema)

- address (required): Hex-encoded recipient address (with or without 0x prefix)
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description bears full burden. Discloses amount and cooldown but omits error handling (e.g., invalid address), result format, and any rate limits beyond cooldown. Partially transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence front-loads purpose and key constraints. No extraneous words. Perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Provides essential info for a simple faucet tool (amount, cooldown) but lacks details on return value (e.g., transaction hash) and error states. Since no output schema exists, description should cover these minimally.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Single parameter 'address' has full schema coverage (100%) with clear description. Description adds no further meaning; baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'Request' and the resource 'testnet TNZO tokens from the faucet', with specific constraints (100 TNZO per request, 24-hour cooldown). Distinguishes from all sibling tools, as no other faucet-related tool exists.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context: when testnet tokens are needed. Does not explicitly state when not to use or list alternatives, but the context is clear given no competing tools. Lacks prerequisites like network requirement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_did (A)

Resolve a DID to its identity information, including display name, type, status, and delegation scope for machine identities

Parameters (JSON Schema)

- did (required): DID to resolve (e.g. did:tenzro:human:uuid or did:tenzro:machine:controller:uuid)
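The example value in the schema shows two DID shapes, one for human identities and one for machine identities with an embedded controller segment. A rough parser under that reading (the function and field names are illustrative, not part of the protocol):

```python
def parse_tenzro_did(did: str) -> dict:
    """Split a Tenzro DID into components, following the two shapes in the
    schema example: did:tenzro:human:<uuid> and
    did:tenzro:machine:<controller>:<uuid>."""
    parts = did.split(":")
    if parts[:2] != ["did", "tenzro"] or len(parts) < 4:
        raise ValueError(f"not a Tenzro DID: {did!r}")
    if parts[2] == "human" and len(parts) == 4:
        return {"type": "human", "uuid": parts[3]}
    if parts[2] == "machine" and len(parts) == 5:
        return {"type": "machine", "controller": parts[3], "uuid": parts[4]}
    raise ValueError(f"unrecognized DID shape: {did!r}")
```

An agent holding only a username would go through resolve_username first to obtain a DID in one of these shapes.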
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description does not disclose whether the operation is read-only, requires authentication, has rate limits, or any side effects. The behavioral burden falls entirely on description, which is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, perfectly concise and front-loaded with action and resource, no extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers basic purpose and output fields, but given no output schema or annotations, more detail on resolution behavior, error cases, or supported DID formats would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and already fully describes the 'did' parameter. The description adds no extra meaning beyond what the schema provides, so baseline of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'Resolve', resource 'DID', and lists specific output fields (display name, type, status, delegation scope), distinguishing it from sibling tools like 'resolve_username'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives (e.g., 'resolve_username' for usernames). The description implies use when you have a DID, but lacks when-not-to-use guidance and broader context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_username (B)

Resolve a username to its DID on the Tenzro Network. Returns the DID associated with the given username.

Parameters (JSON Schema)

- username (required): Username to resolve to its DID
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so description must carry behavioral info. It only states the operation (resolve) and return (DID), but omits side effects, permissions, or error conditions. The read-only nature is implied but not explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. The most critical information is front-loaded. Extremely concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter lookup returning a DID, the description is reasonably complete. However, it lacks any mention of error conditions (e.g., username not found) and does not distinguish from the closely related sibling 'resolve_did'. Adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%. The parameter description in the schema ('Username to resolve to its DID') essentially restates the tool description, adding no extra meaning. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool resolves a username to a DID, using a specific verb and resource. While sibling 'resolve_did' exists, the description implies the opposite direction. The tool is well-differentiated in purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like 'resolve_did'. No prerequisites or context cues provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

revoke_session (A)

Revoke an active wallet session immediately. The session ID becomes invalid and any pending operations under this session are cancelled.

Parameters (JSON Schema)

- session_id (required): Session ID (UUID) to revoke
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses that the session ID becomes invalid and pending operations are cancelled, which are key behavioral traits. However, it omits details about permissions, irreversibility, or impact on other sessions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the action and followed by effects. No redundant words; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 required parameter, no output schema), the description covers the purpose and outcome completely. No critical information is missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%—the schema already describes 'session_id' as 'Session ID (UUID) to revoke.' The description adds no new constraints or examples beyond reaffirming the parameter's role.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Revoke an active wallet session immediately,' specifying the verb 'revoke' and resource 'wallet session.' This distinguishes it from sibling tools like 'authorize_session' and other session-related operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies use for invalidating a session, it does not explicitly provide when-to-use or when-not-to-use guidance, nor does it mention alternatives like 'close_payment_channel' or 'cancel_task.' Usage is implied but not fully explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rotate_keys (A)

Rotate the key shares of an MPC wallet. Generates new shares while keeping the same public key and address. Old shares become invalid.

Parameters (JSON Schema)

- wallet_id (required): Wallet ID (UUID) to rotate keys for
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses that old shares become invalid, which is an important side effect. However, with no annotations, the description could detail authorization needs, reversibility, or other behavioral traits beyond what its three brief sentences cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences front-loading the action and key outcome (invalid old shares). No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter tool with no output schema, the description covers the main action and a notable behavioral trait. Could be slightly more complete by mentioning any required wallet state or permissions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with wallet_id described. The description adds no additional parameter semantics beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it rotates key shares of an MPC wallet, generating new shares while keeping the same public key and address. This distinguishes it from similar wallet operations like create_mpc_wallet or derive_key.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage for rotating keys but provides no explicit guidance on when to use this tool versus others, nor does it mention prerequisites or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

run_agent_task (A)

Run an agentic task loop for an agent. The agent calls an LLM with built-in tools (spawn_agent, delegate_task, collect_results, complete) and executes them iteratively until done or the maximum step limit is reached.

Parameters (JSON Schema)

- task (required): Task description for the agentic execution loop
- agent_id (required): Agent ID that will execute the task
- inference_url (optional): Optional inference endpoint URL (default: localhost)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses key behaviors: the agent calls an LLM with built-in tools and executes iteratively until done or a maximum step limit. This provides clear insight into the loop's operation, though it omits details on failure modes, default step limits, or authorization requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact with two sentences: the first concisely states purpose, and the second explains the iterative process. Every sentence adds meaningful information without redundancy, and the essential details are front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description captures the core behavior (iterative LLM loop with tools), it lacks details about return values, failure handling, or default step limits. Given the absence of an output schema, the description should hint at what the agent returns. This gap prevents full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all three parameters (task, agent_id, inference_url). The description adds no additional semantic value beyond reiterating the task's role in the loop. Thus, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it runs an agentic task loop for an agent, specifying the verb 'run' and resource 'agent'. It details that the agent calls an LLM with built-in tools (spawn_agent, delegate_task, collect_results, complete) and executes iteratively, which distinguishes it from sibling tools like assign_task or post_task that likely lack this autonomous loop.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for autonomous agentic tasks but does not provide explicit guidance on when to use this tool versus alternatives like assign_task or run_agent_template. No 'when not to use' or named alternatives are given, leaving the agent to infer context from the tool name and sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

run_agent_template (A)

Run (invoke) a spawned agent template end-to-end. For paid templates, the payer_wallet is charged: the network commission (5%) goes to the treasury and the remainder is paid to the template's creator_wallet. tokens_estimate is used for per-token pricing. Set dry_run=true to simulate without charging fees or dispatching real transactions. Successful non-dry-run invocations are metered (invocation_count + total_revenue) and persisted.

Parameters (JSON Schema)

- dry_run (optional): If true, simulate execution without dispatching real transactions or charging fees (default false)
- agent_id (required): UUID of the spawned agent to run (must have been created via spawn_agent_template first)
- payer_wallet (optional): Hex-encoded payer wallet address. REQUIRED for paid templates — will be charged the network commission (to treasury) and creator payout.
- max_iterations (optional): Maximum iterations through the template's execution steps (default 1)
- tokens_estimate (optional): Estimated token usage for per-token pricing. Ignored for free/per_execution/subscription pricing. Default 0.
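The fee split the description specifies (a 5% network commission to the treasury, the remainder to the template's creator_wallet) is simple arithmetic. A sketch of that split; the function name and the idea that per-token pricing derives the total from a rate times tokens_estimate are illustrative assumptions, not the server's actual API:

```python
def split_template_fee(total_tnzo: float, commission_rate: float = 0.05) -> tuple[float, float]:
    """Split a paid-template charge: the commission goes to the treasury,
    the remainder is paid out to the template's creator_wallet."""
    treasury = total_tnzo * commission_rate
    creator = total_tnzo - treasury
    return treasury, creator

# Under per-token pricing, the total would be derived from tokens_estimate,
# e.g. total_tnzo = price_per_token * tokens_estimate (assumed, not documented).
treasury_cut, creator_payout = split_template_fee(100.0)  # 5.0 to treasury, 95.0 to creator
```

With dry_run=true, the description says none of this is actually charged; the split above would only be reported, not settled.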
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description effectively discloses key behaviors: fee splitting for paid templates, per-token pricing, dry-run simulation, and metering side effects. This is thorough for a transaction tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded with the main action. Each sentence adds value, covering purpose, financial details, dry_run, and metering without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema and annotations, the description covers essential aspects: invocation purpose, charging logic, dry_run, metering, and parameter roles. It doesn't address return values or errors, but remains adequate for a moderately complex tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds meaning by explaining how payer_wallet is charged, tokens_estimate used for pricing, and dry_run purpose, enhancing understanding beyond the schema's field descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Run (invoke) a spawned agent template end-to-end', using a specific verb and resource, and subtly distinguishes from related tools like spawn_agent_template.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use (for spawned agent templates) and important usage nuances (paid templates, dry_run), but doesn't explicitly list when not to use or alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

seal_data (A)

Seal (encrypt) data within the TEE enclave using hardware-derived keys. The sealed data can only be unsealed on the same TEE platform with the same key_id.

Parameters (JSON Schema)
- key_id (required): Key ID for sealing (used for key derivation)
- data_hex (required): Hex-encoded data to seal (encrypt) within the TEE enclave
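Since data_hex must be hex-encoded, preparing a seal_data call from raw bytes amounts to one encoding step. The helper below only assembles the argument payload; it performs no real TEE sealing, and the key_id value is a made-up example.

```python
def build_seal_request(key_id: str, raw: bytes) -> dict:
    """Assemble seal_data arguments: raw bytes become the hex string data_hex."""
    return {"key_id": key_id, "data_hex": raw.hex()}

req = build_seal_request("payments-key-1", b"top secret")
# data_hex round-trips losslessly: bytes.fromhex(req["data_hex"]) == b"top secret"
print(req)
```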
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries full burden. It discloses the binding to platform and key_id, but does not mention permissions, side effects, reversibility, or performance implications. Some transparency but gaps remain.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no filler. The verb and resource are front-loaded. Every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema and no description of output format. Constraints on input length, error conditions, or prerequisite TEE attestation are missing. For a cryptographic tool, more detail is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions. The tool description adds only general TEE context (hardware-derived keys), not specific parameter details, so it contributes little beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('seal', encrypt) and the specific context (TEE enclave, hardware-derived keys), and distinguishes from unseal_data and likely encrypt_data. It is specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this tool is for TEE-specific sealing, but does not explicitly tell when to use it over similar tools like encrypt_data or other encryption methods. No alternatives or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_agent_templates (A)

Search agent templates on the Tenzro Network marketplace by query. Matches against template name, description, and tags using case-insensitive substring search.

Parameters (JSON Schema)
- query (required): Search query to match against template name, description, and tags
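The documented matching behavior, case-insensitive substring search over name, description, and tags, can be modeled in a few lines. The template records below are invented for illustration; only the matching rule comes from the description.

```python
def matches(template: dict, query: str) -> bool:
    """Case-insensitive substring match over name, description, and tags."""
    q = query.lower()
    fields = [template["name"], template["description"], *template.get("tags", [])]
    return any(q in field.lower() for field in fields)

templates = [
    {"name": "Price Oracle", "description": "Fetches token prices", "tags": ["defi"]},
    {"name": "Mail Bot", "description": "Sends email digests", "tags": ["email"]},
]
# An upper-case query still hits, since both sides are lower-cased.
print([t["name"] for t in templates if matches(t, "PRICE")])
```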
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full burden. It reveals the matching method (case-insensitive substring search) but omits details like pagination, result limits, or ordering. Some behavioral transparency is present, but more would be helpful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, compact and front-loaded with key information. Every sentence adds value: the first defines the action and resource, the second details matching behavior. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple search tool with one parameter and no output schema, the description covers the essential behavior. It could mention the return format (e.g., list of template objects or IDs) to improve completeness, but it is still adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter fully described. The description adds no extra meaning beyond the schema's parameter description. Baseline 3 applies as the schema already provides sufficient semantic information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (search) and resource (agent templates), specifies matching fields (name, description, tags), and case-insensitive substring search. This distinguishes it from sibling tools like list_agent_templates which likely returns all templates without filtering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for searching by query but does not explicitly compare to sibling tools like list_agent_templates or get_agent_template. No guidance on when not to use or alternatives is provided, leaving the agent to infer from context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

segment (A)

Run prompt-driven image segmentation. prompts is an array of {type:'point', x, y, label} (label=1 foreground / 0 background) or {type:'box', x0, y0, x1, y1} objects. Returns one mask per prompt with confidence scores.

Parameters (JSON Schema)
- prompts (required): Prompts: array of `{type:'point', x, y, label}` or `{type:'box', x0, y0, x1, y1}` objects
- model_id (required): Registered segmenter id (e.g. 'sam3', 'sam2-base', 'edgesam', 'mobilesam')
- image_base64 (required): Base64-encoded image bytes
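The two prompt shapes the description specifies can be built with small constructors. Only the documented fields are encoded here (label=1 foreground, label=0 background); the coordinate values are arbitrary examples.

```python
def point(x: float, y: float, foreground: bool = True) -> dict:
    """Point prompt: label=1 marks foreground, label=0 marks background."""
    return {"type": "point", "x": x, "y": y, "label": 1 if foreground else 0}

def box(x0: float, y0: float, x1: float, y1: float) -> dict:
    """Box prompt covering the region from (x0, y0) to (x1, y1)."""
    return {"type": "box", "x0": x0, "y0": y0, "x1": x1, "y1": y1}

# One mask comes back per prompt, so results pair with prompts by index.
prompts = [point(120, 80), point(10, 10, foreground=False), box(50, 40, 200, 160)]
```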
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the transparency burden. It discloses input format and return type but lacks details on side effects, authentication, rate limits, or limitations (e.g., max image size). It is adequate but not exhaustive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short sentences, front-loaded with purpose, no fluff. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the lack of an output schema, the description covers return values, and it explains the input format and purpose. Details on error cases and image format restrictions are missing, but for a straightforward tool it is largely complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, providing a baseline of 3. The description adds value by explaining prompt object structure and label meaning, and explicitly states the return format (masks with confidence scores), which is absent in the schema due to no output schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Run prompt-driven image segmentation,' specifies the prompt format, and indicates return type (mask per prompt with confidence scores). This distinguishes it from sibling tools like list_segmentation_models.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when to use this tool vs alternatives, nor does it mention when not to use it. Usage is implied for segmentation tasks, but no exclusions or alternative references are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

send_agent_message (A)

Send a message from one agent to another via the A2A protocol. Supports task, response, status, and error message types. Returns message delivery confirmation.

Parameters (JSON Schema)
- to_did (required): DID of the recipient agent
- payload (required): JSON payload of the message
- from_did (required): DID of the sender agent
- message_type (required): Message type: 'task', 'response', 'status', 'error'
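The four documented message types form a closed enum, so an argument builder can validate them before the call goes out. The DID strings and payload fields below are placeholders; only the parameter names and the type enum come from the description.

```python
ALLOWED_TYPES = {"task", "response", "status", "error"}

def build_a2a_message(from_did: str, to_did: str,
                      message_type: str, payload: dict) -> dict:
    """Assemble send_agent_message arguments, rejecting types outside the enum."""
    if message_type not in ALLOWED_TYPES:
        raise ValueError(f"unsupported message_type: {message_type!r}")
    return {"from_did": from_did, "to_did": to_did,
            "message_type": message_type, "payload": payload}

msg = build_a2a_message("did:example:alice", "did:example:bob",
                        "task", {"action": "summarize", "doc_id": "123"})
```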
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions returning a delivery confirmation but does not describe the confirmation format or behavioral details like synchrony, persistence, or error handling. Only the message types are listed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no redundant words. The first states the action and protocol, the second lists message types, the third the return value. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 4 required parameters and no output schema, the description covers the basic purpose and types but lacks details on payload structure, prerequisites, and error handling. Adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all parameters. The description adds no new meaning beyond the schema; it merely reiterates the message type enum. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool sends a message from one agent to another via the A2A protocol, and lists the supported message types. This distinguishes it from sibling tools like discover_agents or spawn_agent, which focus on agent discovery and creation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing to send agent-to-agent messages but does not explicitly state when to use it or when to consider alternatives. No exclusions or prerequisites (e.g., DIDs) are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

send_transaction (A)

Send a TNZO transfer transaction on the Tenzro ledger. Two supported paths: (a) ambient OAuth/DPoP — omit signature/public_key/timestamp; the server will look up the wallet bound to the bearer DID and sign; (b) pre-signed — supply signature + public_key + timestamp matching the signed Transaction::hash(). The legacy private_key inline-signing path has been removed.

Parameters (JSON Schema)
- to (required): Hex-encoded recipient address
- from (required): Hex-encoded sender address
- nonce (optional): Transaction nonce (default 0)
- amount (required): Amount in TNZO base units
- chain_id (optional): Chain ID (default 1337)
- gas_limit (optional): Gas limit (default 21000)
- gas_price (optional): Gas price in wei (default 1 Gwei)
- signature (optional): Either (a) omit all of signature/public_key/timestamp and rely on ambient OAuth/DPoP auth — the server will look up the wallet bound to the bearer DID and sign on its behalf — or (b) supply all three for a pre-signed transaction.
- timestamp (optional): Transaction timestamp in ms since Unix epoch — MUST match the timestamp used when computing the signed hash (required if pre-signing)
- public_key (optional): Hex-encoded 32-byte Ed25519 public key (required if pre-signing)
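The all-or-nothing coupling of signature, public_key, and timestamp can be made concrete by showing which fields each argument set carries. The addresses and signature bytes below are placeholders; nothing here signs anything or touches the network, it is only a sketch of the two documented paths.

```python
from typing import Optional

def tx_args(to: str, sender: str, amount: int, *,
            signature: Optional[str] = None,
            public_key: Optional[str] = None,
            timestamp: Optional[int] = None) -> dict:
    """Build send_transaction arguments. The three pre-signing fields must be
    supplied together (pre-signed path) or all omitted (ambient OAuth/DPoP
    path, where the server signs for the wallet bound to the bearer DID)."""
    presign = (signature, public_key, timestamp)
    supplied = [v for v in presign if v is not None]
    if supplied and len(supplied) != 3:
        raise ValueError("signature, public_key and timestamp go together")
    args = {"to": to, "from": sender, "amount": amount}
    if supplied:
        args.update(signature=signature, public_key=public_key, timestamp=timestamp)
    return args

ambient = tx_args("0xbeef", "0xcafe", 1_000)  # server-side signing
presigned = tx_args("0xbeef", "0xcafe", 1_000, signature="ab" * 32,
                    public_key="cd" * 32, timestamp=1_700_000_000_000)
```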
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral traits. It reveals that for the OAuth path, the server looks up the wallet bound to the bearer DID and signs on behalf, which is a key behavioral detail. However, it omits information about side effects (e.g., whether the transaction is submitted immediately), potential errors, idempotency, or success/failure responses. Some behavior is implied but not fully disclosed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, consisting of three sentences that immediately state the tool's purpose and then detail the two supported paths. It is front-loaded with the action and resource, and every sentence provides essential guidance without repetition or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 10 parameters (all documented in schema), no output schema, and the complexity of two authentication paths, the description covers the key behavioral choice but lacks details on return values (e.g., transaction hash, status), error conditions, or network requirements. It is adequate for an agent to understand the main operation but insufficient for full autonomous invocation without additional knowledge.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, so each parameter has a description. The description adds value by grouping 'signature', 'public_key', and 'timestamp' as a set and explaining the two authentication modes together. This contextual interplay is not apparent from individual schema descriptions alone. It also clarifies the 'nonce' and 'gas' defaults implicitly via the schema, but the description reinforces the auth flow. The legacy private_key removal is also noted, which is helpful for agents familiar with older versions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool sends a TNZO transfer transaction on the Tenzro ledger, specifying verb ('send') and resource ('transaction'). It also outlines two distinct authentication paths, which adds specificity. However, it does not explicitly differentiate this tool from sibling tools like 'sponsor_transaction' or 'cross_vm_transfer', which could be alternatives in some contexts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides two usage paths: ambient OAuth/DPoP (omit signature/public_key/timestamp) and pre-signed (supply all three). It also notes that the legacy private_key path has been removed. This gives clear context on when to use each mode. However, it does not advise when to use this tool over other transaction siblings, nor does it list any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

serve_model_mcp (A)

Start serving a downloaded model for inference. The model must be downloaded first. Returns the serving endpoint URL and configuration.

Parameters (JSON Schema)
- model_id (required): Model ID to start serving (must be downloaded first)
- max_concurrent (optional): Optional maximum number of concurrent inference requests
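The download-first prerequisite implies a simple lifecycle guard. The sketch below is purely illustrative: the in-memory set stands in for the node's download registry, and the endpoint URL shape is made up, since the tool only promises "the serving endpoint URL and configuration" without specifying a format.

```python
downloaded = {"llama-3-8b"}  # stand-in for the node's local download registry

def serve_model(model_id: str, max_concurrent: int = 4) -> dict:
    """Refuse to serve a model that has not been downloaded yet."""
    if model_id not in downloaded:
        raise RuntimeError(f"{model_id} must be downloaded before serving")
    return {"model_id": model_id,
            "endpoint": f"/models/{model_id}/infer",  # hypothetical URL shape
            "max_concurrent": max_concurrent}

cfg = serve_model("llama-3-8b", max_concurrent=8)
```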
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full responsibility for behavioral disclosure. It mentions the return value but fails to describe side effects (e.g., starting a long-running service), state changes, error conditions, or concurrency implications. The description is too minimal for a tool that potentially starts a service.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three short sentences covering purpose, prerequisite, and return value. It is concise, front-loaded, and contains no unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool that starts a service, the description lacks important context: it does not specify whether serving is asynchronous or blocking, what happens if the model is already serving, or details about the returned configuration. Given no output schema, more detail on return values is needed. Siblings like 'stop_model' imply a lifecycle, but this description does not explain the transition.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both parameters have descriptions in the schema). The tool description does not add extra meaning beyond the schema; it only restates that it returns endpoint URL and configuration, which does not enhance parameter understanding. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Start serving a downloaded model for inference'), the prerequisite ('model must be downloaded first'), and the output ('Returns the serving endpoint URL and configuration'). It distinguishes itself from sibling tools like 'download_model' and 'stop_model' by focusing on the serving step.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states the prerequisite (model must be downloaded first), which is a key usage guideline. However, it does not mention when not to use the tool or provide alternatives, but the sibling tools do not offer a direct alternative for serving a downloaded model, so the context is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_delegation_scope (B)

Set the delegation scope for a machine identity, defining spending limits, allowed operations, payment protocols, and chains the agent may use

Parameters (JSON Schema)
- machine_did (required): Machine DID to set delegation scope for
- allowed_chains (optional): Allowed chains (e.g. ['tenzro', 'base', 'ethereum'])
- max_daily_spend (optional): Maximum daily spend across all transactions
- allowed_operations (optional): Allowed operations (e.g. ['InferenceRequest', 'Transfer', 'Stake'])
- max_transaction_value (optional): Maximum transaction value (in smallest unit, e.g. 10000000 = $10 USDC)
- allowed_payment_protocols (optional): Allowed payment protocols (e.g. ['mpp', 'x402', 'native'])
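A delegation scope like the one these parameters describe would be enforced roughly as follows. The limit values, DID, and operation names are examples taken from or modeled on the parameter hints, not network defaults, and the check itself is a sketch of the enforcement logic rather than the ledger's actual code.

```python
scope = {
    "machine_did": "did:example:agent-7",
    "max_transaction_value": 10_000_000,  # e.g. $10 in USDC smallest units
    "max_daily_spend": 50_000_000,
    "allowed_operations": ["InferenceRequest", "Transfer"],
    "allowed_chains": ["tenzro", "base"],
    "allowed_payment_protocols": ["mpp", "x402"],
}

def within_scope(op: str, value: int, chain: str, spent_today: int) -> bool:
    """Check one proposed transaction against the delegation scope."""
    return (op in scope["allowed_operations"]
            and chain in scope["allowed_chains"]
            and value <= scope["max_transaction_value"]
            and spent_today + value <= scope["max_daily_spend"])

print(within_scope("Transfer", 5_000_000, "base", spent_today=0))  # delegated
print(within_scope("Stake", 5_000_000, "base", spent_today=0))     # op not delegated
```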
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description should disclose behavioral traits. It does not mention whether existing settings are overwritten, whether authorization is required, or the effect on existing operations. Vague on side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with action, no fluff. Efficiently communicates purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description should cover the return value or confirmation, but it does not. It also lacks prerequisites and post-conditions for a mutation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, baseline 3. Description adds marginal value by listing parameter categories, but does not explain format or nuances beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Set' and specific resource 'delegation scope for a machine identity', listing what it defines (spending limits, operations, etc.), distinguishing it from siblings like 'set_spending_limits'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when/why to use this tool vs alternatives. Does not differentiate from similar tools like 'set_spending_limits' or mention when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_provider_pricing (B)

Set the pricing configuration for a provider node. Define TNZO price per 1k tokens and minimum charge. Returns the updated pricing config.

Parameters (JSON Schema)
- min_charge_tnzo (optional): Optional minimum charge per request in TNZO
- provider_address (required): Hex-encoded provider address
- price_per_1k_tokens (required): Price per 1000 inference tokens in TNZO (e.g. 0.001)
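Per-token pricing with a minimum charge, as these parameters describe, amounts to a pro-rata charge floored at the minimum. This is a sketch of the pricing model implied by the parameter hints, not the node's actual billing code.

```python
def request_cost(tokens: int, price_per_1k_tokens: float,
                 min_charge_tnzo: float = 0.0) -> float:
    """Pro-rata charge per 1000 tokens, never below the minimum charge."""
    return max(tokens / 1000 * price_per_1k_tokens, min_charge_tnzo)

print(request_cost(2500, 0.001))                        # pro-rata charge
print(request_cost(100, 0.001, min_charge_tnzo=0.001))  # minimum charge applies
```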
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Does not disclose potential side effects, authorization requirements, or error conditions. Only mentions return value, lacking important behavioral context for a write operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short sentences, front-loaded with the action. No fluff or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Mentions the return value but lacks prerequisites (e.g., ownership of the provider), failure modes, and the relationship to sibling tools. Without an output schema, the description should set clearer expectations about the response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and each parameter already has a description. The description simply paraphrases the schema ('Define TNZO price per 1k tokens and minimum charge') without adding new meaning, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'Set', resource 'pricing configuration for a provider node', and actions 'Define TNZO price per 1k tokens and minimum charge'. Distinguishes from sibling 'get_provider_pricing' by being the write counterpart.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like 'get_provider_pricing' or 'set_provider_schedule'. Does not state prerequisites or scenarios where this tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_provider_schedule (A)

Set the availability schedule for a provider node. Define which hours and days the node will accept new inference requests. Returns the updated schedule.

Parameters (JSON Schema)
- schedule (required): Availability schedule as JSON (e.g. {"timezone": "UTC", "hours": "0-23", "days": "mon-sun"})
- provider_address (required): Hex-encoded provider address
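The example schedule JSON uses range strings ("hours": "0-23", "days": "mon-sun"); a small parser shows how such a schedule might be interpreted on the node. The "start-end" range format is inferred from the example alone, so treat this as an assumption rather than documented behavior.

```python
def hour_in_range(hour: int, hours: str) -> bool:
    """Check an hour (0-23) against a 'start-end' range string like '9-17'."""
    start, end = (int(part) for part in hours.split("-"))
    return start <= hour <= end

schedule = {"timezone": "UTC", "hours": "9-17", "days": "mon-fri"}
print(hour_in_range(12, schedule["hours"]))  # within the serving window
print(hour_in_range(22, schedule["hours"]))  # outside it
```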
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description indicates mutation ('Set') and returns updated schedule, but lacks details on authorization requirements, rate limits, side effects (e.g., overwriting existing schedule), or error conditions. With no annotations, the burden is on the description, which provides only moderate transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short sentences with no unnecessary words, front-loaded with the key action and result. Efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description usefully notes that the updated schedule is returned. However, it lacks prerequisites (e.g., the caller must be the provider owner) and does not explain validation of the schedule format beyond the example. Mostly complete for a simple setter tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. Description adds context referencing 'hours and days' for schedule and 'provider node' for address, but does not add significant new meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it sets availability schedule for a provider node, specifies it defines hours/days for inference requests, and indicates it returns the updated schedule. Distinguishes from siblings like 'set_provider_pricing' and 'get_provider_schedule'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance, nor mention of prerequisites like ownership or permissions. Usage is implied but not explicitly contrasted with alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_spending_limits (grade B)

Set spending limits for a wallet. Defines the maximum daily spend and per-transaction limit in TNZO. Transactions exceeding these limits will be rejected.

Parameters (JSON Schema)
- wallet_id (required): Wallet ID (UUID)
- daily_limit (required): Maximum daily spend in TNZO (e.g. 1000.0)
- per_tx_limit (required): Maximum per-transaction amount in TNZO (e.g. 100.0)
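
The rejection rule the description states ("Transactions exceeding these limits will be rejected") can be sketched as a simple check; the function and its bookkeeping are illustrative assumptions, not the server's implementation.

```python
# Illustrative reading of the documented limit semantics: reject when a
# transaction exceeds per_tx_limit or would push the day's running total
# past daily_limit. Not the server's actual implementation.
def would_reject(amount_tnzo, spent_today, daily_limit=1000.0, per_tx_limit=100.0):
    if amount_tnzo > per_tx_limit:
        return True
    return spent_today + amount_tnzo > daily_limit

over_per_tx = would_reject(150.0, 0.0)   # exceeds the per-transaction limit
over_daily = would_reject(50.0, 980.0)   # would exceed the daily limit
within = would_reject(50.0, 100.0)       # within both limits
```
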
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose all behavioral traits. It only mentions that exceeding limits will be rejected, but lacks info on overwriting behavior, authorization needs, side effects, or return value (e.g., confirmation, error handling).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no extra words. Front-loaded with the purpose, then details. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, yet description omits return value (e.g., success indicator, new limits). Also lacks error conditions (e.g., invalid wallet ID, limits out of range). For a simple mutation tool, more completeness is expected.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with clear descriptions for all three parameters. The description adds minimal value beyond summarizing 'daily spend' and 'per-transaction limit', which is already implied by the param names and schema descriptions. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Set' and the resource 'spending limits for a wallet', specifying the two types of limits (daily and per-transaction). It distinguishes from sibling 'get_spending_limits' by explicitly being the setter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide when to use this tool vs alternatives (e.g., modify vs. set for first time) or any prerequisites (e.g., wallet must exist, authorization required). No guidance on when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

settle_payment (grade A)

Execute an immediate settlement between two addresses on the Tenzro Network. Transfers TNZO from payer to payee for a specific service type. Returns the settlement receipt.

Parameters (JSON Schema)
- payee (required): Hex-encoded payee address
- payer (required): Hex-encoded payer address
- amount_tnzo (required): Amount in TNZO (e.g. 1.5)
- reference_id (optional): Optional reference ID for this settlement
- service_type (required): Service type: 'inference', 'tee', 'storage', 'general'
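
A hedged sketch of a client-side guard that mirrors the schema's service_type enum and the optional reference_id; the helper is hypothetical and does not reflect the server's own validation.

```python
VALID_SERVICE_TYPES = {"inference", "tee", "storage", "general"}  # from the schema

# Hypothetical client-side guard mirroring the schema's enum and the
# optional reference_id; not the server's validation logic.
def build_settlement_args(payer, payee, amount_tnzo, service_type, reference_id=None):
    if service_type not in VALID_SERVICE_TYPES:
        raise ValueError(f"unknown service_type: {service_type}")
    args = {"payer": payer, "payee": payee,
            "amount_tnzo": amount_tnzo, "service_type": service_type}
    if reference_id is not None:
        args["reference_id"] = reference_id
    return args

args = build_settlement_args("0xaaa", "0xbbb", 1.5, "inference")
```
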
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description must carry the burden. It clearly indicates a mutation (immediate settlement) and a return (receipt), but does not disclose potential side effects, idempotency, required permissions, error conditions, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with the primary verb, no redundant or filler content. Each sentence adds distinct value: purpose, mechanism, output.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of annotations and output schema, the description covers the core behavior and result. However, it lacks details on response structure or error handling, which would improve completeness for an agent to fully understand the tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents all parameters. The description adds context like 'immediate settlement' and 'service type', but does not provide additional meaning beyond what the schema already offers for each parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the exact action: execute an immediate settlement, transfer TNZO from payer to payee for a service type, and return a receipt. This clearly distinguishes it from sibling payment tools like 'close_payment_channel' or 'verify_payment'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this is for direct, one-time settlements, but does not explicitly state when to use it versus alternatives like opening a payment channel or creating a payment challenge. No when-not-to-use guidance or alternatives are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_username (grade B)

Set a globally unique username for a DID on the Tenzro Network. Usernames provide human-readable aliases for DIDs (e.g. 'alice' instead of 'did:tenzro:human:uuid').

Parameters (JSON Schema)
- did (required): DID to associate the username with (e.g. did:tenzro:human:uuid)
- username (required): Globally unique username to register (alphanumeric, underscores, hyphens)
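
The schema's charset note ("alphanumeric, underscores, hyphens") can be expressed as a regex; this pattern is one illustrative reading of that constraint, not the server's actual validation rule.

```python
import re

# One illustrative reading of the documented username constraint:
# alphanumeric characters plus underscores and hyphens only.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_-]+$")

def is_valid_username(username):
    return bool(USERNAME_RE.match(username))

ok = is_valid_username("alice")          # plain alphanumeric
also_ok = is_valid_username("agent_42")  # underscore allowed
bad = is_valid_username("bad name!")     # space and punctuation rejected
```
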
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits such as whether the operation is idempotent, what happens on duplicate username, or permission requirements. The action is implied but not detailed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the action, and no unnecessary fluff. Every word serves a purpose, making it efficient for an AI agent to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple set operation, the description covers core purpose but lacks details about return values, error handling (e.g., username taken), or any side effects. The absence of an output schema increases the need for such context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes both parameters with 100% coverage. The description adds context by explaining the purpose of usernames but does not introduce new constraints or clarifications beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Set' and the resource 'username for a DID', and explains the benefit of human-readable aliases with an example. It inherently distinguishes from the sibling 'resolve_username' by focusing on creation rather than lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs. alternatives like 'resolve_username' or potential update scenarios. The description implies a use case but lacks context on prerequisites or exclusivity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sign_message (grade A)

Sign a message with a private key. Returns the hex-encoded signature. Supports Ed25519 and Secp256k1 key types.

Parameters (JSON Schema)
- key_type (optional): Key type: 'ed25519' (default) or 'secp256k1'
- message_hex (required): Hex-encoded message bytes to sign
- private_key (required): Hex-encoded private key (with or without 0x prefix)
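
A hedged sketch of client-side preprocessing for the documented inputs: the message is passed as hex-encoded bytes and the private key may carry an optional 0x prefix. The helper names are hypothetical.

```python
# Illustrative preprocessing only; the actual signing happens server-side.
def normalize_key(private_key_hex):
    # The schema accepts keys with or without a 0x prefix.
    return private_key_hex[2:] if private_key_hex.startswith("0x") else private_key_hex

def encode_message(text):
    return text.encode("utf-8").hex()  # becomes the message_hex argument

args = {
    "message_hex": encode_message("hello"),
    "private_key": normalize_key("0xdeadbeef"),
    "key_type": "ed25519",  # default per the schema; 'secp256k1' also supported
}
```
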
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions the return format and key type support but omits details like error conditions, key length requirements, or security implications of handling private keys.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences, no redundancy, front-loaded with essential info. Every sentence is necessary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fairly complete for a cryptographic signing tool: describes input, output, and key types. No output schema exists, so the description compensates by stating the return format. Lacks minor context, such as the default behavior of the optional key_type parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, so baseline is 3. The description adds 'Returns the hex-encoded signature' but does not provide further parameter context beyond what the schema already offers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the action (signing a message with a private key), specifies the output format (hex-encoded signature), and lists supported key types, distinguishing it from siblings like verify_signature.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs alternatives (e.g., derive_key, verify_signature). However, the purpose is straightforward and implied usage is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

spawn_agent (grade A)

Spawn a child agent under a parent agent on the Tenzro Network. The child inherits the parent's controller DID and can be delegated tasks. Maximum 50 children per parent agent.

Parameters (JSON Schema)
- name (required): Name for the new child agent
- parent_id (required): Parent agent ID (UUID)
- capabilities (optional): Agent capability strings (e.g. nlp, vision, code)
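
The documented 50-children-per-parent cap can be sketched as a toy check; the registry dict and helper are hypothetical, only the limit itself comes from the description.

```python
# Toy model of the documented cap: a parent may have at most 50 children.
MAX_CHILDREN = 50

def can_spawn(children_by_parent, parent_id):
    # Hypothetical helper; the server's actual enforcement is unspecified.
    return len(children_by_parent.get(parent_id, [])) < MAX_CHILDREN

registry = {"parent-1": [f"child-{i}" for i in range(50)]}
at_cap = can_spawn(registry, "parent-1")  # cap already reached
fresh = can_spawn(registry, "parent-2")   # no children yet
```
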
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It reveals the inheritance behavior and the max limit, but omits details like what happens on exceeding the limit, whether spawning is synchronous, or how to reference the child agent later. The description moderately covers behavioral traits but has gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences that front-load the main action and efficiently convey key info (inheritance, delegation, limit). Every sentence adds value with no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool lacks an output schema, yet the description does not specify what is returned (e.g., the child agent ID or confirmation). It also does not cover error conditions, prerequisites, or authorization requirements. Given moderate complexity (3 params), this omission makes it incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage (all 3 parameters described). The description adds no new parameter details beyond the schema; it only mentions a global constraint. With high schema coverage, baseline is 3, and the description does not elevate it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('spawn') and the resource ('child agent under a parent agent'), and distinguishes from siblings like 'spawn_agent_from_template' by specifying inheritance of controller DID and a delegation capability. The maximum children constraint differentiates it from other agent creation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a usage context ('inherit... controller DID and can be delegated tasks') and a constraint ('Maximum 50 children per parent agent'), but does not explicitly state when to use this tool over alternatives (e.g., 'spawn_agent_from_template' or 'create_swarm') or when not to use it. No exclusions or contrast with siblings are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

spawn_agent_from_template (grade A)

Spawn an agent from a marketplace template on the Tenzro Network. Creates a new agent instance with its own identity, wallet, and capabilities based on the template definition.

Parameters (JSON Schema)
- name (required): Display name for the spawned agent
- template_id (required): Agent template ID to spawn from
- parent_machine_did (optional): Optional parent machine DID. When set, the spawned agent's delegation scope is the strict intersection of the parent's scope and the template's spec — the child can never be broader than its parent on any axis.
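
The parent_machine_did note says the child's delegation scope is the strict intersection of the parent's scope and the template's spec. Modeling scopes as per-axis permission sets is an assumption made purely for illustration.

```python
# Hypothetical scope model: each axis maps to a set of permissions, and the
# child receives the per-axis intersection, so it can never exceed the parent.
def child_scope(parent_scope, template_spec):
    shared_axes = parent_scope.keys() & template_spec.keys()
    return {axis: parent_scope[axis] & template_spec[axis] for axis in shared_axes}

parent = {"tools": {"read", "write"}, "budget": {"low", "high"}}
template = {"tools": {"write", "exec"}, "budget": {"low"}}
scope = child_scope(parent, template)  # never broader than the parent on any axis
```
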
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses creation of identity, wallet, and capabilities. Without annotations, more behavioral detail (e.g., permissions, costs, reversibility) would be beneficial. The parent_machine_did parameter description adds some transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff. Could be slightly more structured but generally efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, so description should clarify return value (e.g., agent ID). Also missing prerequisites like wallet existence. Adequate but not complete for a creation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description adds no extra meaning beyond schema for the three parameters; schema already describes them adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it spawns an agent from a marketplace template, creating a new instance with identity, wallet, and capabilities. Distinguishes from similar tools like spawn_agent (no template) and template management tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage when a template ID is available, but lacks explicit guidance on when to choose this over alternatives like spawn_agent. No when-not-to-use guidance or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stake_tokens (grade A)

Stake TNZO tokens to participate as a validator, model provider, TEE provider, or storage provider. Staked tokens earn rewards and increase network weight.

Parameters (JSON Schema)
- amount (required): Amount to stake in TNZO (e.g. 1000.0 for 1000 TNZO)
- provider_type (required): Provider type to stake for: 'validator', 'model_provider', 'tee_provider', or 'storage_provider'
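
A hedged client-side guard mirroring the schema's provider_type enum and the expectation of a positive stake amount; the server may validate differently.

```python
PROVIDER_TYPES = ("validator", "model_provider", "tee_provider", "storage_provider")

# Hypothetical pre-flight check; not the server's validation logic.
def build_stake_args(amount, provider_type):
    if provider_type not in PROVIDER_TYPES:
        raise ValueError(f"unknown provider_type: {provider_type}")
    if amount <= 0:
        raise ValueError("amount must be positive")
    return {"amount": amount, "provider_type": provider_type}

args = build_stake_args(1000.0, "validator")
```
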
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description does not disclose behavioral traits beyond the action and benefits. Missing information on permissions, reversibility, lock-up periods, or risks. For a financial transaction tool, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two succinct sentences, directly stating purpose and benefit. No unnecessary words. Front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 2 parameters, no output schema, and no annotations, the description covers the basic action and benefits but lacks details on prerequisites, post-staking behavior, and potential risks. Adequate but not complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters adequately. The description adds no extra meaning beyond what the schema provides. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (stake), the resource (TNZO tokens), and the purpose (participate as a validator, model provider, TEE provider, or storage provider). It distinguishes from sibling tools like 'unstake_tokens'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage (to become a provider and earn rewards) but does not explicitly state when to use this tool vs alternatives or when not to use it. Lacks exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stop_model (grade A)

Stop serving a model on this node. The model remains downloaded but no longer accepts inference requests. Returns the stop confirmation.

Parameters (JSON Schema)
- model_id (required): Model ID to stop serving
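
The documented behavior ("remains downloaded but no longer accepts inference requests") can be sketched as a toy state machine; the model registry and confirmation shape are assumptions for illustration.

```python
# Toy state machine: stopping keeps the model downloaded while flipping its
# serving flag. Entirely illustrative of the documented semantics.
models = {"example-model": {"downloaded": True, "serving": True}}

def stop_model(model_id):
    models[model_id]["serving"] = False             # no longer accepts inference requests
    return {"model_id": model_id, "stopped": True}  # the "stop confirmation"

receipt = stop_model("example-model")
```
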
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It explains the key behavior (stops accepting inference requests, returns confirmation) but omits details like handling of ongoing requests, authorization needs, or potential side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core action, and contains no extraneous information. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single parameter, no output schema), the description covers the essential aspects: what it does, what happens to the model, and return value. Minor omissions (error handling, prerequisites) are acceptable for this scale.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the parameter description in the schema already explains 'Model ID to stop serving'. The tool description adds no new semantic information beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('stop serving') on the resource ('model on this node') and distinguishes it from similar operations like deletion (model remains downloaded) and serving start (sibling 'serve_model_mcp').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for temporary deactivation (model remains downloaded), but it does not explicitly state when to use it vs. alternatives like 'delete_model_mcp' or 'unload_*' tools, nor does it mention prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

submit_daml_command (grade A)

Submit a DAML command to a Canton domain. Supports create, exercise, and create_and_exercise commands. Returns the transaction ID and updated contract state.

Parameters (JSON Schema)
- party (required): DAML party identifier (submitting party)
- choice (optional): Choice name for exercise commands
- arguments (required): JSON-encoded arguments for the command
- domain_id (required): Canton domain ID to submit the command to
- contract_id (optional): Contract ID for exercise commands
- template_id (required): DAML template identifier (e.g. 'MyModule:MyTemplate')
- command_type (required): DAML command type: 'create', 'exercise', 'create_and_exercise'
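
The schema marks choice and contract_id optional overall but describes them as "for exercise commands". This sketch encodes that reading as an assumption about how command_type changes the effectively required parameters; the guard is hypothetical.

```python
# Assumed cross-parameter rule: exercise needs choice and contract_id, and
# create_and_exercise needs choice. Inferred from the parameter descriptions,
# not confirmed by the server.
def check_daml_args(command_type, choice=None, contract_id=None):
    if command_type not in ("create", "exercise", "create_and_exercise"):
        return "unknown command_type"
    if command_type == "exercise" and (choice is None or contract_id is None):
        return "exercise requires choice and contract_id"
    if command_type == "create_and_exercise" and choice is None:
        return "create_and_exercise requires choice"
    return "ok"

create_ok = check_daml_args("create")
missing = check_daml_args("exercise", choice="Transfer")  # contract_id not given
```
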
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool returns transaction ID and updated contract state, but does not mention destructive behavior, authentication needs, or idempotency. The disclosure is incomplete for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the verb and resource, and contains no extraneous information. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters and no output schema, the description covers basic purpose and return, but lacks details on prerequisites, error conditions, or how command types affect parameter requirements. It is adequate but not complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents all parameters. The description adds context about supported command types and return values, but does not elaborate on parameter usage or dependencies. The added value is moderate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Submit' and the resource 'DAML command to a Canton domain', specifies supported command types, and mentions the return value. This distinguishes it from sibling tools like 'list_daml_contracts' which are read-only.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for creating/exercising DAML commands but does not explicitly state when to use this tool versus alternatives, nor does it provide exclusions or prerequisites. The guidance is present but implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

subscribe_events (grade A)

Register an event filter for real-time streaming via WebSocket or gRPC. Returns a subscription ID and connection URLs for receiving matching events. Events matching the filter will be pushed to connected clients.

Parameters (JSON Schema)
- addresses (optional): Filter by involved addresses (hex)
- event_types (optional): Event types to subscribe to: 'transfer', 'mint', 'burn', 'stake', 'bridge', 'settlement', 'nft_transfer', 'compliance'
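
A sketch of how a registered filter might match pushed events: an event matches when its type is in event_types (if given) and it involves one of the addresses (if given). The matching rule is an assumption inferred from the parameter descriptions.

```python
# Hypothetical matching rule; the server's actual filter semantics may differ.
def matches(event, event_types=None, addresses=None):
    if event_types and event["type"] not in event_types:
        return False
    if addresses and not set(event["addresses"]) & set(addresses):
        return False
    return True  # omitted filters match everything

event = {"type": "transfer", "addresses": ["0xaaa", "0xbbb"]}
by_type = matches(event, event_types=["transfer", "mint"])
by_addr = matches(event, addresses=["0xccc"])
```
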
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description partially discloses behavior: it registers a filter and returns URLs for streaming. However, it lacks details on idempotency, persistence, or any side effects. The description adds context beyond the schema but does not fully compensate for the missing annotations in terms of disclosing potential pitfalls or constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with three sentences, each serving a clear purpose: stating the action, describing the return, and summarizing the behavior. It is front-loaded and contains no redundant or extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 optional params, no output schema), the description covers the core purpose and return values. It omits details about subscribing without filters, connection lifecycle, or unsubscribe mechanism, but these are not critical for basic usage. The description is mostly complete for its scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the parameter meanings are already fully documented. The description does not add new semantic information beyond what is in the schema. It mentions 'event filter' generally but does not elaborate on how addresses and event_types interact or default behaviors.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('register an event filter') and the resource ('real-time streaming via WebSocket or gRPC'). It specifies the return (subscription ID and connection URLs) and distinguishes from sibling tools like 'get_events' which likely retrieve historical events. The verb+resource combination is specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for real-time events but does not explicitly state when to use this tool versus alternatives like 'get_events' or other list tools. No when-not-to-use guidance or alternative comparisons are provided, leaving the agent to infer context from the tool name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

terminate_agent (A)

Terminate an agent (TERMINAL — irreversible). Halts messaging, optionally slashes stake (slash_bps 0-10000), and optionally cascades to all descendant spawned agents. Requires controller_did to match. NOTE: this MCP tool describes the operation only — the transaction must be signed and submitted via tenzro_signAndSendTransaction with type=TerminateAgent.

Parameters (JSON Schema)
Name | Required | Description
reason | Yes | Free-text reason recorded on the kill-switch receipt
cascade | No | If true, recursively terminate all descendant agents in the spawn tree
agent_did | Yes | DID of the agent to terminate (terminal state, optionally cascades to spawned children)
slash_bps | No | Slash basis points 0-10000 (10000 = 100%). Default 0.
evidence_hash | No | Optional 32-byte evidence hash (hex-encoded, with or without 0x prefix)
controller_did | Yes | DID of the controller authorizing the termination
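The slash_bps parameter rewards a worked example. Below is a minimal sketch of an argument set assembled from the schema above; the DIDs and the reason string are made-up placeholders, and the basis-point conversion is standard arithmetic (1 bps = 0.01%).

```python
# Hypothetical terminate_agent arguments; DIDs are placeholders.
# slash_bps is in basis points: 10000 bps = 100% of stake.
slash_bps = 2500
args = {
    "agent_did": "did:tenzro:agent:example-1",
    "controller_did": "did:tenzro:controller:example-1",
    "reason": "policy violation detected by monitor",
    "cascade": True,            # also terminate descendant spawned agents
    "slash_bps": slash_bps,
}
slash_fraction = slash_bps / 10_000   # 2500 bps -> 25% of stake slashed
print(slash_fraction)
```

Per the tool description, this payload only describes the operation; the transaction itself would still be signed and submitted via tenzro_signAndSendTransaction with type=TerminateAgent.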
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the terminal and irreversible nature, optional slashing (with basis points range), optional cascading, and the need for proper authorization. It does not describe return values but is otherwise transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences plus a note, efficient and front-loaded with critical information. No extraneous content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of termination with optional slashing and cascading, the description covers essential behavioral points (irreversibility, messaging halt, authorization, note on transaction flow). Lacks details on data/bond handling but is sufficient for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with all parameters described. The description reiterates the slash_bps range and cascade option, adding minimal value beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool terminates an agent irreversibly, halts messaging, and optionally slashes stake or cascades to descendants. This distinguishes it from siblings like pause_agent or quarantine_agent by emphasizing the terminal nature.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes the authorization requirement (controller_did must match) and notes that the tool only describes the operation, requiring signing via tenzro_signAndSendTransaction. While it doesn't explicitly contrast with pause or quarantine, the irreversibility warning provides strong usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

terminate_swarm (A)

Terminate a Tenzro agent swarm and all its member agents. Attempts graceful shutdown of each member. Returns confirmation with swarm ID and terminated status.

Parameters (JSON Schema)
Name | Required | Description
swarm_id | Yes | Swarm ID to terminate
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries full burden. It mentions 'attempts graceful shutdown' but omits details on failure behavior, side effects (e.g., data loss), permissions required, or whether the operation is synchronous or irreversible.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no filler. Immediately states purpose and key details. Efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-param tool with no output schema, the description covers purpose, scope, and return value. However, it lacks behavioral details (e.g., error handling, partial failures) needed for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single param 'swarm_id', which the schema describes as 'Swarm ID to terminate'. The tool description adds no context beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'terminate' and the resource 'Tenzro agent swarm and all its member agents', distinguishing it from sibling tools like create_swarm or get_swarm_status. It specifies scope (all members) and return value.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use vs. alternatives (e.g., pausing, deactivating). It doesn't mention prerequisites like swarm existence or state requirements. Usage context is implied but not explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

text_embed (A)

Embed a batch of strings with a registered text encoder. Returns one row per input. Optional requested_dim enables Matryoshka truncation (e.g. 128/256/512 from a native 768/1024-dim model).

Parameters (JSON Schema)
Name | Required | Description
inputs | Yes | Strings to embed. Non-empty.
model_id | Yes | Registered text-embedding model id (e.g. 'qwen3-embedding-0.6b', 'embeddinggemma-300m', 'bge-m3')
normalize | No | L2-normalize each row before returning (default false)
requested_dim | No | Matryoshka truncation target dimension (e.g. 128/256/512). If omitted, returns the model-native dim.
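To illustrate what requested_dim and normalize imply, here is a toy sketch assuming Matryoshka truncation keeps the first requested_dim components and the row is then re-normalized; the vector values are made up, and the server's exact behavior may differ.

```python
import math

# Toy Matryoshka-style truncation: keep the leading components of a
# pretend native embedding, then L2-normalize the truncated row.
native = [3.0, 4.0, 0.0, 0.5, 1.0, 2.0]    # pretend native dim = 6
requested_dim = 2

truncated = native[:requested_dim]          # keep leading components
norm = math.sqrt(sum(x * x for x in truncated))
normalized = [x / norm for x in truncated]  # unit-length row
print(normalized)
```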
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description covers the embedding operation and optional Matryoshka truncation but does not disclose permission requirements, safety profile, or side effects. Adequate but not detailed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two clear, front-loaded sentences with no redundancy or unnecessary words. Efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema; description minimally addresses return format ('one row per input'). Explains requested_dim well. Adequate for a simple 4-parameter tool, though could mention normalization behavior or default.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers all 4 parameters (100% coverage). Description adds meaning by explaining output shape and Matryoshka truncation context for requested_dim, but adds limited value beyond schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specifically states the tool embeds a batch of strings using a registered text encoder and returns one row per input. Distinguishes from sibling embedding tools like vision_embed and video_embed.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage for text embedding tasks but does not explicitly contrast with sibling tools like list_text_embedding_models or provide when-to-use/when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_balance (A)

Get the TNZO token balance for an address. Returns the balance in both atto-TNZO (raw) and TNZO (human-readable decimal).

Parameters (JSON Schema)
Name | Required | Description
address | Yes | Hex-encoded address to query TNZO balance for
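The two return units relate by the SI atto prefix (10^-18), so converting the raw value is a one-liner, assuming the ledger uses exactly 18 decimals; the balance value below is made up.

```python
from decimal import Decimal

# Convert a raw atto-TNZO balance (integer) to human-readable TNZO.
# "atto" is the SI prefix for 10**-18, hence the 18-decimal divisor.
atto_balance = 1_500_000_000_000_000_000   # raw integer from the tool
tnzo = Decimal(atto_balance) / Decimal(10) ** 18
print(tnzo)
```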
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. The description discloses the return formats (atto and decimal) but does not mention side effects, authentication, or error cases. Adequate for a simple read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff, directly conveys the purpose and return formats. Highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains the return format (atto and decimal) which is helpful. It does not mention potential errors or prerequisites, but is complete for a simple balance query.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers the parameter fully (100% coverage) with a clear description. The tool description does not add any additional semantic value beyond the schema, so baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it gets the TNZO token balance for an address, with specific verb 'Get' and resource. It distinguishes from siblings like 'get_balance' and 'get_token_balance' by specifying the exact token (TNZO) and return formats.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use guidance versus alternatives. The name implies it is for TNZO balances, but there is no guidance on when to prefer this tool over related ones like 'get_balance' or 'get_token_balance'. A minimally viable score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

total_supply (A)

Get the total TNZO token supply. Returns total supply in both atto-TNZO and TNZO. Useful for inflation monitoring and economic analysis.

Parameters (JSON Schema)

No parameters

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of disclosure. While it implies a read-only query, it does not explicitly state whether the tool is non-destructive, whether authentication is required, or whether rate limits apply. The description lacks behavioral context beyond the obvious.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two sentences that front-load the action and describe the return value and use case. No superfluous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters and no output schema, the description adequately covers what the tool returns (total supply in two units) and its relevance. It is sufficient for a simple query tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so the baseline is 4. The description correctly does not add parameter information, as none is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves the total TNZO token supply and specifies the return format in atto-TNZO and TNZO. However, it does not explicitly differentiate from sibling tools like token_balance or get_token_info, which might also return token-related data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description suggests use cases ('inflation monitoring and economic analysis') but provides no explicit when-to-use or when-not-to-use guidance. No alternatives or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

transcribe (B)

Transcribe an audio clip (WAV/MP3/FLAC, base64-encoded) with a registered ASR model. Optional language hint, per-segment timestamps, and decoding temperature.

Parameters (JSON Schema)
Name | Required | Description
language | No | Optional ISO language hint (e.g. 'en', 'es', 'fr'). Many models auto-detect.
model_id | Yes | Registered ASR model id (e.g. 'whisper-large-v3-turbo', 'distil-whisper-small.en', 'moonshine-base-v2', 'parakeet-tdt-0.6b-v3', 'canary-1b-flash')
timestamps | No | Emit per-segment / per-word timestamps in the result (default false)
temperature | No | Decoding temperature (default 0.0)
audio_base64 | Yes | Base64-encoded audio bytes (WAV/MP3/FLAC)
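Preparing the audio_base64 argument is the only non-obvious step. A minimal sketch, with stand-in bytes in place of a real audio file; the model id is one of the examples the schema itself lists.

```python
import base64

# Build the transcribe arguments; the bytes are a placeholder, not
# valid audio, so only the encoding round-trip is demonstrated here.
wav_bytes = b"RIFF....WAVEfmt "            # stand-in for real WAV bytes
args = {
    "audio_base64": base64.b64encode(wav_bytes).decode("ascii"),
    "model_id": "whisper-large-v3-turbo",
    "language": "en",       # optional ISO hint; many models auto-detect
    "timestamps": True,     # request per-segment timestamps
}
print(args["audio_base64"])
```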
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose all behavioral traits. It lists support for audio formats and optional features but omits critical details like output format, handling of long audio, error behavior, or performance characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the core purpose and efficiently mentions optional features. No unnecessary words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 5 parameters and no output schema, the description still does not mention the return format (e.g., transcribed text, segments). It also lacks details on sync/async behavior, file size limits, and model selection guidance, leaving an agent under-informed for a first invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description merely restates the optional parameters and adds no semantic meaning beyond the schema's own parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the tool's action ('transcribe'), the input type (audio clip with specific formats and encoding), and the required model. It is distinct from all sibling tools, as none other perform audio transcription.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates when to use (for transcribing audio) but does not provide guidance on when not to use, nor does it mention alternative tools or preconditions. The optional parameters are noted without context on when they should be applied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

transfer_nft (B)

Transfer an NFT between addresses within a collection. Verifies the sender owns the token before transferring. Returns the updated ownership details.

Parameters (JSON Schema)
Name | Required | Description
to | Yes | Hex-encoded recipient address
from | Yes | Hex-encoded sender address
amount | No | Amount to transfer (only for ERC-1155, default 1)
token_id | Yes | Token ID to transfer (numeric string)
collection_id | Yes | Collection ID (UUID)
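A sketch of a complete argument set, using placeholder addresses and a placeholder collection UUID. Since `from` is a reserved word in several languages, the arguments are best built as a plain string-keyed mapping.

```python
# Hypothetical transfer_nft arguments per the schema above. Note that
# token_id is a numeric *string*, and amount only matters for ERC-1155.
sender = "0x" + "aa" * 20       # placeholder 20-byte hex address
recipient = "0x" + "bb" * 20
args = {
    "collection_id": "123e4567-e89b-12d3-a456-426614174000",
    "token_id": "42",
    "from": sender,
    "to": recipient,
    "amount": 1,                # ERC-1155 only; defaults to 1 anyway
}
print(sorted(args))
```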
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses ownership verification and return of updated details, but omits error handling (e.g., if sender does not own token), side effects (gas fees), or idempotency information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no unnecessary words, directly conveying the action, verification, and return value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description covers the core function but leaves gaps: no mention of the optional amount parameter, no clarification of error conditions, and vague 'updated ownership details'. Adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds minimal value beyond the schema; it mentions 'collection' and 'addresses' but does not explain the optional 'amount' parameter for ERC-1155.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Transfer an NFT') and the resource ('between addresses within a collection'), distinguishing it from related tools like mint_nft and get_nft_info.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description mentions ownership verification but does not contrast with other transfer operations or specify prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unload_audio_model (B)

Unload a registered ASR model, freeing its ORT session.

Parameters (JSON Schema)
Name | Required | Description
model_id | Yes | Registered model id (the same id used at load time)
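The load/unload pairing the schema hints at ("the same id used at load time") can be sketched with an in-memory registry. The registry and the error behavior here are illustrative, not the server's actual implementation.

```python
# In-memory sketch of a model registry: load stores a session object
# under its id, unload pops it; unloading an unknown id raises.
registry: dict[str, object] = {}

def load_model(model_id: str) -> None:
    registry[model_id] = object()          # stand-in for an ORT session

def unload_model(model_id: str) -> None:
    if registry.pop(model_id, None) is None:
        raise KeyError(f"model not loaded: {model_id}")

load_model("whisper-large-v3-turbo")
unload_model("whisper-large-v3-turbo")     # frees the session
print(len(registry))
```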
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It mentions freeing an ORT session, implying memory management, but fails to specify prerequisites (e.g., model must be loaded), success/failure indications, or irreversibility.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, well-structured sentence that conveys the core action and effect with no unnecessary words. It is appropriately sized for the tool's simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description covers the basic action. However, it omits expected return behavior (e.g., success/failure) and does not compensate for missing annotations or output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear parameter description. The tool description adds no extra meaning beyond the schema, meeting the baseline expectation but not exceeding it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Unload a registered ASR model') and the effect ('freeing its ORT session'), distinguishing it from sibling unload tools by specifying the model type (ASR) and technical detail (ORT session).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., other unload_model tools). The description only states what it does, leaving the agent to infer context from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unload_detection_model (B)

Unload a registered detector, freeing its ORT session.

Parameters (JSON Schema)
Name | Required | Description
model_id | Yes | Registered model id (the same id used at load time)
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It mentions 'freeing its ORT session' but does not explain error states, idempotency, resource ownership, or side effects. Critical safety information is missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single 8-word sentence with no filler. Every word is informative, and the structure is optimal for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 param, no output schema), the description covers the basic action but lacks context around prerequisites and error handling. For a mutation tool, this is adequate but not complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the schema description already explains the model_id parameter. The tool description adds no additional context or meaning beyond what the schema provides, so baseline score is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('unload') and the resource ('a registered detector'), along with the effect ('freeing its ORT session'). This effectively distinguishes it from siblings like list_detection_models or other unload_* tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, such as prerequisites (e.g., model must be loaded) or conditions to avoid (e.g., calling while in use). There is no mention of related tools like 'load' or 'detect'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unload_forecast_model (B)

Unload a registered forecast model from this node, freeing its ORT session.

Parameters (JSON Schema)
Name | Required | Description
model_id | Yes | Registered model id (the same id used at load time)
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It mentions freeing ORT session but omits side effects (model no longer available for forecasts), error conditions, or required permissions. Minimal disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, directly states the action and effect, no redundancy or filler. Front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple one-parameter tool, but lacks context on usage prerequisites, behavioral consequences (e.g., need to reload before forecasting again), and error handling. Meets minimum viability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for model_id. The tool description does not add further meaning beyond what the schema already provides, meeting baseline but not exceeding it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Unload a registered forecast model from this node, freeing its ORT session' uses a specific verb (unload), resource (registered forecast model), and effect, distinguishing it from sibling unload tools for other model types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool, prerequisites (model must be loaded), or alternatives. The description is purely declarative without contextual usage cues.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unload_segmentation_model (grade A)

Unload a registered segmenter, freeing its ORT session.

Parameters (JSON Schema)

- model_id (required): Registered model id (the same id used at load time)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that it frees an ORT session, a behavioral detail. However, it does not mention idempotency, side effects, or safety considerations. No annotations are present, so the description carries the full burden.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence of 9 words with no wasted text. It efficiently conveys the core action and effect.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter, the description covers the purpose and effect adequately. It does not mention the return value or confirm success, but given the simplicity, this is a minor gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides a thorough description of the model_id parameter (100% coverage). The tool description adds no additional meaning beyond what the schema already states.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'unload' and the resource 'registered segmenter', and specifies the effect 'freeing its ORT session'. This distinguishes it from siblings like unload_audio_model or unload_detection_model.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus other unload tools or on the context of unloading. It does not mention prerequisites or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unload_text_embedding_model (grade B)

Unload a registered text-embedding model, freeing its ORT session.

Parameters (JSON Schema)

- model_id (required): Registered model id (the same id used at load time)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully disclose behavior. While it mentions freeing the ORT session, it does not specify if the operation is idempotent, whether it blocks, or what happens if the model is not currently loaded.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence conveys the essential purpose and effect with no wasted words. The description is appropriately concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema, no annotations), the description covers the basic action but lacks contextual details such as prerequisites, error handling, or state assumptions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the schema describes model_id adequately. The description adds no additional meaning beyond the schema, resulting in a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('unload'), the resource ('text-embedding model'), and the consequence ('freeing its ORT session'). It distinctly differentiates from sibling tools like unload_audio_model by specifying 'text-embedding'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, or when not to use it. There is no mention of prerequisites (e.g., the model must be loaded) or error states (e.g., model not found).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unload_video_model (grade A)

Unload a registered video encoder, freeing its ORT session.

Parameters (JSON Schema)

- model_id (required): Registered model id (the same id used at load time)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description mentions 'freeing its ORT session', implying memory deallocation, but does not disclose potential side effects like whether the action is reversible, or if concurrent usage is affected. Without annotations, the burden is on the description; basic transparency but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no unnecessary words, front-loaded with the core action. Highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema, the description covers the main purpose and effect. Could mention that the model must be registered/loaded first, but overall adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the parameter description is clear. The tool description adds no additional meaning beyond 'Registered model id (the same id used at load time)'. Baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'Unload' and the resource 'registered video encoder', and explains the effect 'freeing its ORT session'. This distinguishes it from sibling unload tools for other model types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like unload_audio_model, nor prerequisites such as requiring the model to be loaded first. Missing context on appropriate usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unload_vision_model (grade B)

Unload a registered vision encoder, freeing its ORT session.

Parameters (JSON Schema)

- model_id (required): Registered model id (the same id used at load time)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose all behavioral traits. 'Freeing its ORT session' indicates memory release but does not state that the model becomes unavailable for inference, that unloading is irreversible without reloading, or any potential side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, clear sentence with no unnecessary words. It conveys the action and resource succinctly, earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the tool is simple (one param, no output schema), the description omits critical context: the model must have been previously loaded, and unloading makes it unavailable. Without annotations, these gaps reduce completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'model_id' is documented both in the schema and description, which adds 'the same id used at load time' – clarifying the expected value. This provides useful context beyond the bare parameter name.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it unloads a vision encoder and frees its ORT session. The verb 'unload' and specific resource 'vision encoder' differentiate from sibling tools like unload_audio_model. It is not a tautology.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., model must be loaded) or exclusions (e.g., avoid if model is in use). The differentiation relies solely on the tool name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unseal_data (grade A)

Unseal (decrypt) data that was previously sealed within the TEE enclave. Requires the same key_id used during sealing.

Parameters (JSON Schema)

- key_id (required): Key ID used during sealing
- sealed_hex (required): Hex-encoded sealed data to unseal (decrypt)
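Given the two-parameter schema above, a minimal sketch of assembling the call arguments might look like the following; the helper name and key id are hypothetical, and the sealed blob would in practice come from a prior seal_data call.

```python
def build_unseal_args(key_id: str, sealed: bytes) -> dict:
    """Arguments for unseal_data: key_id must be the one used at seal
    time, and the sealed blob travels hex-encoded per the schema."""
    return {"key_id": key_id, "sealed_hex": sealed.hex()}

# Hypothetical sealed bytes, stand-ins for real seal_data output.
args = build_unseal_args("tee-key-01", b"\x00\x01\xfe\xff")
```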
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, placing the full burden on the description. The description does not disclose behavioral traits such as side effects, error behavior (e.g., wrong key), permission requirements, or whether the operation is reversible. For a security-sensitive operation, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, no unnecessary words. It efficiently conveys the core purpose and a key requirement. Slightly better than average due to brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and no annotations, the description should elaborate on return values, error conditions, and security implications. It does not explain what happens on success or failure, nor does it differentiate from similar tools like decrypt_data. The mention of 'TEE enclave' provides some context but is insufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with adequate descriptions for both parameters. The description adds context that the data was previously sealed within the TEE enclave, which is not in the schema. However, it mostly restates the parameter descriptions, providing minimal additional meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool decrypts data that was previously sealed in a TEE enclave, using a specific verb ('unseal/decrypt') and resource ('data'). It differentiates from sibling tools like seal_data by specifying the decryption action and the required key_id.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly requires the same key_id used during sealing, providing a clear prerequisite. However, it does not mention when to avoid this tool or suggest alternatives like general decryption tools. It implies complementary use with seal_data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unstake_tokens (grade A)

Unstake TNZO tokens and begin the unbonding period. After the unbonding period (typically 7 days), tokens can be withdrawn.

Parameters (JSON Schema)

- address (required): Hex-encoded staker address (with or without 0x prefix)
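Since the schema accepts the staker address with or without the 0x prefix, a caller may want to normalize it before use. The sketch below is a hypothetical client-side helper, not part of the server itself.

```python
def normalize_address(addr: str) -> str:
    """Normalize a hex staker address as accepted by unstake_tokens.

    Strips an optional 0x prefix and lowercases, so logging and
    deduplication treat '0xABC...' and 'abc...' as the same staker.
    Raises ValueError for non-hex input.
    """
    body = addr[2:] if addr.lower().startswith("0x") else addr
    int(body, 16)  # validate that the remainder is hex
    return body.lower()
```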
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses the unbonding period (typically 7 days) and the eventual withdrawal, but does not discuss potential fees, irreversibility, or impact on rewards.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no extraneous information. Front-loaded with the action and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter tool with no annotations or output schema, the description covers the core behavior but lacks details on prerequisites, confirmation, or return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds no additional meaning beyond the schema; the parameter 'address' is explained in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the action 'Unstake TNZO tokens' and mentions the unbonding period. Distinguishes itself from sibling 'stake_tokens' by explicitly describing the inverse operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Indicates the unbonding period and that tokens can be withdrawn afterward, providing context for when this tool is used. However, it does not mention when not to use it or suggest alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_agent_template (grade A)

Update metadata for an agent template you own. Change description, version, status, or tags. Returns the updated template record.

Parameters (JSON Schema)

- tags (optional): Updated tags list
- status (optional): New status: 'active', 'inactive', 'deprecated'
- version (optional): New version string (e.g. '1.1.0')
- description (optional): New description for the template
- template_id (required): Agent template ID to update
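The mix of one required and four optional parameters above lends itself to a partial-update argument builder. The sketch below is hypothetical and assumes (as the review itself notes is undocumented) that omitted fields are left unchanged by the server.

```python
ALLOWED_STATUSES = ("active", "inactive", "deprecated")

def build_update_args(template_id, description=None, version=None,
                      status=None, tags=None):
    """Assemble arguments for update_agent_template.

    Only template_id is required; optional fields are included only when
    supplied, and status is checked against the enum from the schema.
    """
    if status is not None and status not in ALLOWED_STATUSES:
        raise ValueError(f"status must be one of {ALLOWED_STATUSES}")
    args = {"template_id": template_id}
    for key, value in (("description", description), ("version", version),
                       ("status", status), ("tags", tags)):
        if value is not None:
            args[key] = value
    return args
```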
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It correctly indicates a write operation (update) and states the return value, but does not disclose authentication needs, side effects, or rate limits. This is minimal but not misleading.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no extraneous information. It is front-loaded with the main purpose and efficiently communicates the key aspects.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple update tool, the description covers the purpose, updatable fields, and return value. Ownership is mentioned. However, it could note that only provided fields are updated. Overall, fairly complete given no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description lists the updatable fields (description, version, status, tags), adding slight meaning but no additional format or constraints beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Update metadata for an agent template you own' with specific fields (description, version, status, tags) and return of updated record. This clearly distinguishes from sibling tools like register_agent_template or get_agent_template.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for templates the user owns, providing clear context. However, it lacks explicit when-to-use or when-not-to-use guidance and does not mention alternative tools for other operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_payment (grade C)

Verify a payment credential against a previously created challenge and settle the payment on-chain.

Parameters (JSON Schema)

- asset (required): Payment asset: 'USDC', 'USDT', or 'TNZO'
- amount (required): Payment amount in smallest unit
- protocol (required): Payment protocol used: 'mpp', 'x402', or 'native'
- payer_did (required): Payer DID (e.g. did:tenzro:human:uuid or did:tenzro:machine:controller:uuid)
- signature (required): Hex-encoded classical (Ed25519) signature proving payment
- challenge_id (required): Challenge ID from the payment challenge
- pq_signature (required): Hex-encoded post-quantum (ML-DSA-65, FIPS 204) signature (3309 bytes), required for internal Tenzro mpp/x402/native protocols; pass empty string for external passthroughs (visa-tap, mastercard-agent-pay) that don't yet have a hybrid signing scheme
- payer_address (required): Payer wallet address (hex-encoded)
- pq_public_key (required): Hex-encoded ML-DSA-65 verifying key (1952 bytes), required for internal Tenzro mpp/x402/native protocols; pass empty string for external passthroughs
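The subtlest constraint in the schema above is the conditional post-quantum pair: full-length ML-DSA-65 values for internal protocols, empty strings for external passthroughs. A hypothetical client-side pre-check, derived only from the sizes stated in the schema, might look like this:

```python
INTERNAL_PROTOCOLS = ("mpp", "x402", "native")
PQ_SIG_BYTES = 3309      # ML-DSA-65 signature size per the schema
PQ_PUBKEY_BYTES = 1952   # ML-DSA-65 verifying key size per the schema

def validate_pq_fields(protocol, pq_signature, pq_public_key):
    """Check the hex-encoded post-quantum fields before calling
    verify_payment: internal Tenzro protocols require full-length
    ML-DSA-65 values, while external passthroughs pass empty strings."""
    if protocol in INTERNAL_PROTOCOLS:
        if len(pq_signature) != PQ_SIG_BYTES * 2:
            raise ValueError("pq_signature must be 3309 bytes, hex-encoded")
        if len(pq_public_key) != PQ_PUBKEY_BYTES * 2:
            raise ValueError("pq_public_key must be 1952 bytes, hex-encoded")
```

Note that one hex-encoded byte occupies two characters, hence the `* 2` in the length checks.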
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It states that the tool both verifies and settles, implying state changes, but does not mention required permissions, failure modes, or side effects. For a payment tool, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single clear sentence with no wasted words. However, it could be slightly more structured (e.g., listing preconditions or postconditions) to aid readability for a complex tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the complexity (9 required params, no output schema, no annotations), the description is very sparse. It does not explain the overall workflow, return value, error conditions, or how the tool fits with the sibling 'create_payment_challenge'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of the 9 parameters with descriptions. The tool description adds no additional meaning beyond the schema. With high schema coverage, a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses clear verbs ('verify' and 'settle') and specifies the resource ('payment credential' and 'challenge'). It distinguishes from siblings like 'create_payment_challenge' and 'settle_payment' by combining verification with settlement, though the difference from 'settle_payment' alone is ambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., 'settle_payment' or 'create_payment_challenge'). The description implies a prerequisite of a previous challenge but does not state it explicitly or provide any contextual conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_signature (grade A)

Verify a cryptographic signature against a public key and message. Returns whether the signature is valid.

Parameters (JSON Schema)

- public_key (required): Hex-encoded public key
- message_hex (required): Hex-encoded message bytes that were signed
- signature_hex (required): Hex-encoded signature
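Since all three parameters above travel hex-encoded, a small sketch of building the arguments from raw bytes may help; the helper name is hypothetical, and the byte values are placeholders rather than a real key and signature.

```python
def build_verify_args(public_key: bytes, message: bytes, signature: bytes) -> dict:
    """Hex-encode raw key, message, and signature bytes into the argument
    names verify_signature expects; passing raw bytes or base64 instead of
    hex is a likely first-call failure mode."""
    return {
        "public_key": public_key.hex(),
        "message_hex": message.hex(),
        "signature_hex": signature.hex(),
    }
```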
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It states the operation and return value but does not disclose additional behaviors like error handling, performance, or side effects, though for a simple crypto operation it is minimally adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One sentence, front-loaded, no waste. Every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the return value and the schema covers inputs. However, it does not explicitly state the cryptographic algorithm or key type; it is still fairly complete for a simple verification tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% and the description adds no extra meaning beyond what the schema already provides. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Verify a cryptographic signature against a public key and message' and specifies the return value. This distinguishes it from sibling tools like verify_vrf_proof or verify_zk_proof.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives. Does not mention prerequisites or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_tee_attestation (grade A)

Verify a TEE attestation report. Checks the vendor certificate chain, signature, and measurement hashes. Returns verification status and details.

Parameters (JSON Schema)

- tee_type (required): TEE type: 'tdx', 'sev-snp', 'nitro', 'nvidia-gpu'
- attestation (required): Hex-encoded attestation report
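A minimal sketch of preparing this call, guarding the `tee_type` enum client-side and hex-encoding the report; the helper name is hypothetical and the report bytes are placeholders.

```python
VALID_TEE_TYPES = {"tdx", "sev-snp", "nitro", "nvidia-gpu"}

def build_attestation_args(tee_type: str, attestation_report: bytes) -> dict:
    """Build verify_tee_attestation arguments, rejecting tee_type values
    outside the enum in the schema before the server ever sees them."""
    if tee_type not in VALID_TEE_TYPES:
        raise ValueError(f"unsupported tee_type: {tee_type!r}")
    return {"tee_type": tee_type, "attestation": attestation_report.hex()}
```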
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description discloses the verification steps (vendor certificate chain, signature, measurement hashes) and return type (status and details), providing good insight into behavior. Could mention error handling or side effects, but overall adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is two sentences, concise, and front-loaded with the main purpose. Every sentence adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, and the description only vaguely mentions 'Returns verification status and details.' It could provide more structure or examples of the return value. For a verification tool, the description is adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with descriptions for both parameters (tee_type, attestation). The description adds no new parameter information beyond what the schema provides, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool verifies a TEE attestation report, specifying the checks (certificate chain, signature, measurement hashes) and that it returns status and details. This distinguishes it from sibling tools like 'get_tee_attestation' and other verify tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage (when you have an attestation report), but does not explicitly state when to use this tool over alternatives like 'get_tee_attestation' or 'verify_signature'. No guidance on prerequisites or context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_vrf_proof (Grade A)

Verify a Tenzro VRF proof (RFC 9381 ECVRF-EDWARDS25519-SHA512-TAI). Returns the deterministic 64-byte VRF output if the proof is valid. Used for provably-fair NFT reveals, lotteries, and randomized trait assignment.

Parameters (JSON Schema)
- alpha (required): Hex-encoded input message (alpha). Public inputs like block hash, request ID, or NFT mint nonce
- proof (required): Hex-encoded 80-byte VRF proof: Gamma(32) || c(16) || s(32) per RFC 9381
- pubkey (required): Hex-encoded 32-byte VRF public key (Edwards25519 compressed point)
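
The schema's proof layout (Gamma(32) || c(16) || s(32), 80 bytes total) can be sketched as a byte split. This is a hypothetical client-side helper for inspecting a proof before calling the tool, not part of the server's API; full cryptographic verification still requires the tool itself.

```python
def split_vrf_proof(proof_hex: str) -> dict:
    """Split an 80-byte ECVRF proof into its RFC 9381 components.

    Layout per the schema above: Gamma(32) || c(16) || s(32).
    """
    raw = bytes.fromhex(proof_hex)
    if len(raw) != 80:
        raise ValueError(f"expected 80 bytes, got {len(raw)}")
    return {
        "gamma": raw[:32].hex(),   # compressed Edwards25519 point (32 bytes)
        "c": raw[32:48].hex(),     # challenge scalar (16 bytes)
        "s": raw[48:80].hex(),     # response scalar (32 bytes)
    }
```

A malformed length fails fast here, before a round trip to the verifier.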
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the return value on success but does not mention failure behavior (e.g., error on invalid proof). The description is adequate but could be more transparent about error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: the first states the core action and cryptographic standard, the second lists use cases. No fluff, front-loaded, every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the return value and use cases for a verification tool with no output schema. It does not cover error details or side effects, but it is reasonably complete for its purpose. Slightly better than minimal due to use case context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with detailed parameter descriptions (e.g., hex-encoded, byte lengths). The tool description adds no additional parameter information beyond the schema, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it verifies a specific VRF proof (RFC 9381 ECVRF-EDWARDS25519-SHA512-TAI) and returns the deterministic 64-byte output. It distinguishes from siblings like generate_vrf_proof and verify_zk_proof by specifying the cryptographic standard and use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases (NFT reveals, lotteries, randomized trait assignment) implying when to use. However, it does not explicitly state when not to use or mention alternatives like generate_vrf_proof, which is a sibling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_zk_proof (Grade C)

Verify a Plonky3 STARK proof (over KoalaBear) on the Tenzro ledger. Requires a circuit_id ('inference', 'settlement', or 'identity').

Parameters (JSON Schema)
- proof (required): Hex-encoded proof bytes — bincode-encoded p3_uni_stark::Proof
- circuit_id (required): Circuit identifier — one of 'inference', 'settlement', 'identity'
- public_inputs (required): Public inputs as JSON array of hex strings (4-byte LE KoalaBear chunks)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description bears the full burden. It only says 'Verify' without disclosing side effects, authentication needs, rate limits, or what happens upon success/failure. The lack of behavioral detail is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the main verb and resource, no wasted words. Efficient and structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the tool's complexity, the description omits return value, error conditions, and any post-conditions. Without an output schema or annotations, the agent lacks critical context for invoking this tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and each parameter is described in the schema. The description adds the explicit enum values for circuit_id, but does not provide additional semantics beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it verifies a Plonky3 STARK proof over KoalaBear on the Tenzro ledger, with explicit circuit_id values. It distinguishes from sibling verification tools by specifying the proof type, though it does not name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions a required parameter (circuit_id) but provides no guidance on when to use this tool versus other verification tools like verify_signature or verify_vrf_proof. No when-not or alternative context is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

video_embed (Grade A)

Produce a clip-level embedding from a short video (base64-encoded). Wave 1: agents can fall back to per-frame vision_embed pooling until a native video encoder lands.

Parameters (JSON Schema)
- model_id (required): Registered video encoder id. Wave 1 catalog ships empty pending license clearance + ONNX export.
- normalize (optional): L2-normalize the pooled embedding (default false)
- frame_stride (optional): Sub-sample frames at this stride (default model-defined)
- video_base64 (required): Base64-encoded video bytes
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, and the description does not disclose important behavioral traits such as processing limits, latency, error handling, or the exact nature of the output. Only briefly mentions 'short video' without quantification, leaving significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff. The first sentence states the main purpose, and the second provides essential usage context. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers basic input and output concept but lacks details on output format (e.g., embedding dimension, type), constraints (max video length, supported codecs), and error scenarios. Without an output schema, more description is needed for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All four parameters have descriptions in the schema (100% coverage). The tool description repeats that the video is base64-encoded but does not add any new semantic meaning beyond the schema. Thus, it meets the baseline but does not enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb 'produce' and resource 'clip-level embedding', with input specification (short video, base64-encoded). Distinguishes from sibling tools like vision_embed by mentioning the fallback strategy in Wave 1, making the tool's purpose specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly contrasts when to use this tool (once a native video encoder is registered) with the interim alternative (per-frame vision_embed pooling). The second sentence directly addresses usage context and alternatives, offering clear guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

vision_embed (Grade A)

Embed a single image with a registered vision encoder. Pass base64-encoded PNG/JPEG/WebP bytes. Returns a dense feature vector (e.g. 768-dim for DINOv3-base, 1152-dim for SigLIP2-so400m).

Parameters (JSON Schema)
- model_id (required): Registered vision encoder id (DINOv3, SigLIP2, CLIP, etc.)
- normalize (optional): L2-normalize the embedding before returning it (default false)
- image_base64 (required): Base64-encoded image bytes (PNG/JPEG/WebP)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description details the core behavior (embedding a single image) and the return format, but it does not disclose potential behavioral traits such as rate limits, authentication needs, or any side effects. However, as a read-like operation, this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two sentences that are front-loaded with the action and essential details. Every sentence adds value without redundancy or unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description compensates by explicitly stating the return type and providing example dimensions. It covers input and output well, though it could mention potential limitations like image size constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, and the description adds value by clarifying that 'image_base64' expects base64-encoded bytes of specific formats and providing example encoder IDs for 'model_id' (DINOv3, SigLIP2, CLIP). The 'normalize' parameter is not mentioned, but schema covers it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'embed', the resource 'single image', and specifies input format (base64-encoded PNG/JPEG/WebP) and output type (dense feature vector) with example dimensions, making it highly specific and distinguishable from siblings like text_embed or vision_similarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies usage for embedding images, it does not explicitly state when to use this tool over alternatives like vision_similarity or text_embed, nor does it provide any exclusion criteria or alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

vision_similarity (Grade A)

Compute cosine similarity between an image embedding and a text embedding (must have matching dimension). Pure math — does not load any model. Typical use: pair vision_embed (image) with text_embed against a CLIP/SigLIP text tower.

Parameters (JSON Schema)
- text_embedding (required): Text embedding (typically from text_embed against a CLIP/SigLIP text tower of matching dimension)
- image_embedding (required): Image embedding (typically from vision_embed)
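
The "pure math" the description claims is just cosine similarity over two equal-length vectors. A minimal sketch (the function name is hypothetical, not the server's API):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two equal-length embeddings. Pure math, no model load."""
    if len(a) != len(b):
        raise ValueError("embeddings must have matching dimension")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Identical directions score 1.0, orthogonal vectors 0.0, which is why the two embeddings must come from encoders that share a dimension (and ideally a training objective, as with CLIP/SigLIP towers).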
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It discloses that the tool is pure math and does not load models, which is helpful. However, it does not mention error handling (e.g., dimension mismatch), return format, or performance characteristics. For a simple math function, this is adequate but not exceptional.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the main action. Every sentence provides essential information: the operation and the typical usage pattern. No redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple with two array parameters and no output schema. The description covers purpose, inputs, and usage context. However, it does not describe the return value (likely a float), which would be helpful for completeness. Despite this, it is mostly adequate given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters have schema descriptions that align with the tool's purpose. The description adds context: 'must have matching dimension' and 'typically from vision_embed and text_embed', which goes beyond the schema's plain descriptions. Since schema coverage is 100%, the added value is moderate but meaningful.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Compute cosine similarity between an image embedding and a text embedding', which clearly identifies the operation and resources. It also specifies 'must have matching dimension' and 'Pure math — does not load any model', distinguishing it from model-loading tools like vision_embed or text_embed.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a typical use case: 'pair vision_embed (image) with text_embed against a CLIP/SigLIP text tower.' It also notes that it is pure math and does not load models, implying it should be used when embeddings are already available. However, it does not explicitly state when not to use it or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

vote_on_proposal (Grade A)

Vote on an active Tenzro governance proposal. Cast yes, no, or abstain. Voting power is proportional to staked TNZO. Returns the recorded vote and current tally.

Parameters (JSON Schema)
- vote (required): Vote type: 'yes', 'no', or 'abstain'
- proposal_id (required): Proposal ID to vote on
- voter_address (required): Hex-encoded voter address
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description bears full burden. Discloses return value (recorded vote and tally) and voting power mechanism (proportional to staked TNZO). However, does not mention side effects like irreversibility, gas costs, or authorization requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with clear front-loading: action first, then details, then return value. No filler or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description states return content. Lacks details on error cases (e.g., inactive proposal, insufficient power) and network context. Still sufficient for a straightforward vote operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage; all three parameters are described adequately. Description adds no extra meaning beyond schema, only reiterating vote options already listed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the action ('Vote on...') and resource ('active Tenzro governance proposal'), specifies vote options ('yes, no, or abstain'), and mentions voting power mechanism. Distinguishes from siblings like create_proposal or list_proposals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage on active proposals but lacks explicit when-to-use or when-not-to-use guidance. No comparison to alternatives like delegate_voting_power or set_delegation_scope. Missing prerequisites such as staked TNZO or voter registration.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

wormhole_bridge (Grade C)

Bridge tokens via the Wormhole adapter registered on the BridgeRouter. Returns transfer_id, tx_hash, fee_paid, and estimated_arrival_ms on success.

Parameters (JSON Schema)
- asset (required): Asset symbol (e.g. TNZO, USDC)
- amount (required): Amount in smallest units as a u128 decimal string
- sender (required): Sender address on source chain
- recipient (required): Recipient address on destination chain
- dest_chain (required): Destination chain name
- source_chain (required): Source chain name
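
The amount parameter expects smallest units as a decimal string, so a human-readable figure must be scaled by the asset's decimals first. A hedged sketch (helper name and the per-asset decimal counts are assumptions to verify against the asset's actual configuration; 6 is common for USDC, 18 for many native tokens):

```python
from decimal import Decimal

def to_smallest_units(amount: str, decimals: int) -> str:
    """Convert a human-readable amount to a smallest-units decimal string.

    `decimals` is asset-specific and must be confirmed for the asset
    and chain in question; this only does the scaling arithmetic.
    """
    units = Decimal(amount).scaleb(decimals)  # shift the decimal point right
    if units != units.to_integral_value():
        raise ValueError("amount has more precision than the asset supports")
    return str(int(units))
```

Using Decimal rather than float avoids silent precision loss on large token amounts.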
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must compensate. It describes a write operation (bridging tokens) without disclosing any behavioral traits such as side effects, authorization requirements, rate limits, or failure modes. The return field listing is helpful but insufficient for full transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: one for action, one for returns. No fluff, every word serves a purpose. Efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite 100% schema coverage and a clear return statement, the description omits critical context: supported chains, asset restrictions, prerequisites (e.g., allowance), error handling, and how to obtain the adapter address. An agent needs more to use this tool reliably.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with all 6 parameters having descriptions. The description does not add meaning beyond the schema—e.g., it doesn't explain valid chain names or asset symbols. Baseline is 3 due to high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool bridges tokens via the Wormhole adapter on the BridgeRouter and specifies return fields (transfer_id, tx_hash, fee_paid, estimated_arrival_ms). While it doesn't explicitly differentiate from sibling 'bridge_tokens', the name implies Wormhole-specific bridging, making the purpose clear enough.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like bridge_tokens or crosschain_bridge. The description lacks any contextual hints, prerequisites, or exclusions, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

wormhole_chain_id (Grade A)

Look up the Wormhole numeric chain id for a chain name (ethereum=2, solana=1, base=30, arbitrum=23, optimism=24, etc.).

Parameters (JSON Schema)
- chain (required): Chain name (e.g. ethereum, solana, base, arbitrum, optimism)
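
The mapping the description enumerates can be sketched as a plain lookup. Only the ids quoted in the description are included; any chain not listed here should be resolved via the tool itself rather than guessed:

```python
# Wormhole chain ids quoted in the tool description above.
WORMHOLE_CHAIN_IDS = {
    "solana": 1,
    "ethereum": 2,
    "arbitrum": 23,
    "optimism": 24,
    "base": 30,
}

def chain_id(chain: str) -> int:
    """Resolve a chain name to its Wormhole numeric id (case-insensitive)."""
    try:
        return WORMHOLE_CHAIN_IDS[chain.lower()]
    except KeyError:
        raise ValueError(f"unknown chain: {chain!r}") from None
```
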
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and description only states purpose without disclosing behavioral traits like determinism, external calls, or error handling. It adds minimal behavioral context beyond the operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One sentence with examples, very concise and front-loaded with no wasted words. Appropriate for a simple lookup tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, and description does not specify return format or behavior for unknown chains. While functional, it could be more complete regarding what the tool returns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with example values. The description adds inline examples but does not significantly extend meaning beyond the schema's parameter description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it looks up a Wormhole numeric chain ID for a given chain name, which is a specific verb+resource. It distinguishes from sibling tools like wormhole_bridge and wormhole_parse_vaa_id.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use vs alternatives. The purpose is clear, but it doesn't mention context like 'Use this when mapping chain names to IDs for Wormhole protocol operations'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

wormhole_parse_vaa_id (Grade A)

Parse a canonical Wormhole VAA id of the form {chain}/{emitter}/{sequence} into its components.

Parameters (JSON Schema)
- vaa_id (required): Canonical VAA id in the form {chain}/{emitter}/{sequence}
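
The split itself is trivial, as this sketch shows. The description does not specify whether {chain} is a numeric Wormhole id or a name, so all components are returned as strings rather than coerced:

```python
def parse_vaa_id(vaa_id: str) -> dict:
    """Parse a canonical Wormhole VAA id of the form {chain}/{emitter}/{sequence}."""
    parts = vaa_id.split("/")
    if len(parts) != 3:
        raise ValueError("expected exactly {chain}/{emitter}/{sequence}")
    chain, emitter, sequence = parts
    # Kept as strings: the chain component's encoding is not specified above.
    return {"chain": chain, "emitter": emitter, "sequence": sequence}
```
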
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description lacks details on validation, error handling, or output structure. Does not mention whether the tool validates the input format or what happens on malformed input.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single short sentence with no extraneous information. Efficiently communicates the core function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter tool with no output schema, the description is mostly adequate. Could be improved by explicitly listing the returned components (chain, emitter, sequence) rather than referencing them generically.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter. Description merely restates the schema field without adding new meaning. Baseline 3 applies as schema covers all details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool parses a VAA id into its components. Specifies the exact format {chain}/{emitter}/{sequence}. Distinct from sibling tools like wormhole_bridge or wormhole_chain_id.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives or conditions for use. The purpose is self-explanatory but could include that it is for decomposing VAA IDs obtained from other sources.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

wrap_tnzo (Grade C)

Wrap native TNZO to a specific VM representation (wTNZO ERC-20 on EVM, wTNZO SPL on SVM). In the pointer model, this is a no-op since all VMs share the same balance — the tool confirms the balance is accessible on the target VM.

Parameters (JSON Schema)
- to_vm (required): Target VM: 'evm', 'svm', or 'daml'
- amount (required): Amount of TNZO to wrap (as string, in native 18-decimal units)
- address (required): Address to wrap TNZO for (hex)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool is a no-op in the pointer model, confirming balance accessibility rather than performing an actual transfer. However, it does not mention side effects, authorization requirements, or other behavioral details, leaving gaps since no annotations are provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences conveying essential information. It could be restructured to highlight the no-op distinction more prominently, but overall it is efficiently written.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the domain complexity (cross-VM operations) and the lack of output schema, the description provides adequate context about the tool's purpose and the no-op behavior. However, it does not fully explain how this tool differs from similar ones or provide comprehensive behavioral context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, providing descriptions for all three parameters. The description does not add new information beyond what the schema already provides, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool wraps native TNZO to a specific VM representation (wTNZO on EVM or SVM). It distinguishes the tool from others by mentioning the pointer model no-op behavior, but does not explicitly compare to siblings like cross_vm_transfer.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives (e.g., cross_vm_transfer, bridge_tokens). The description mentions the pointer model no-op but does not clarify usage context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

x25519_key_exchange (A)

Perform an X25519 Diffie-Hellman key exchange. Returns the 32-byte shared secret in hex.

Parameters (JSON Schema)
- public_key_hex (required): Hex-encoded X25519 public key (32 bytes)
- private_key_hex (required): Hex-encoded X25519 private key (32 bytes)
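Both parameters are 32-byte keys encoded as hex strings. A minimal pre-flight check, assuming no '0x' prefix (the tool's actual validation rules are not documented):

```python
def validate_x25519_key_hex(key_hex: str) -> bytes:
    """Decode and length-check a hex-encoded X25519 key.

    Illustrative client-side check; the tool's own validation is undocumented.
    """
    raw = bytes.fromhex(key_hex)  # raises ValueError on non-hex input
    if len(raw) != 32:
        raise ValueError(f"X25519 keys are 32 bytes, got {len(raw)}")
    return raw

validate_x25519_key_hex("aa" * 32)  # 64 hex chars decode to 32 bytes
```

The exchange itself would typically be delegated to a vetted library, e.g. the `cryptography` package's X25519PrivateKey.exchange, rather than implemented by hand.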
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses the output format (a 32-byte shared secret in hex), which adds value beyond the schema. However, it lacks details on error conditions, input validation, and side effects, and no behavioral annotations are provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, focused sentence communicating purpose and return value with no extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple, stateless cryptographic operation, the description adequately enables correct invocation. It specifies the operation and output format, though it omits potential constraints like required hex length.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers both parameters with descriptions (100% coverage), so the description offers no additional semantic information beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Perform an X25519 Diffie-Hellman key exchange' with specific verb and resource, and distinguishes from siblings like 'derive_key' and 'generate_keypair' by naming the algorithm and operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives, nor any prerequisites or exclusions. The intended context is implied but not explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
