Glama

Agentry — The Trust Layer for the Agent Economy

Server Details

Agent registry with Nostr identity, reputation, escrow, observability, and Lightning payments.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: cthulhutoo/agentry-mcp
GitHub Stars: 0


Tool Definition Quality

The overall score is still being calculated.

Available Tools

36 tools
a2a_public_agent_card_api_agents_public__agent_id__get (grade: C)

A2A Public Agent Card

Return a single agent's A2A card by ID.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- agent_id (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal value. It lists HTTP 200 response and Content-Type, but fails to disclose error behaviors (e.g., 404 for invalid ID), authentication requirements, rate limits, or whether the operation is idempotent/safe.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the essential action statement, but includes a verbose '### Responses' section that appears to be auto-generated OpenAPI documentation. This HTTP-centric formatting adds limited value for an AI agent's tool selection decision and could be condensed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description should explain what constitutes an 'A2A card' and what data it returns. It fails to distinguish this resource from a full agent record or describe the return value's structure, leaving significant gaps for an agent attempting to fulfill user requests.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage for the agent_id parameter. While the description implies the parameter via 'by ID,' it does not compensate for the schema gap by explaining the ID format, valid values, or where the agent_id originates (e.g., from list_agents_api_agents_get).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Return[s] a single agent's A2A card by ID,' distinguishing it from the sibling get_agent_api_agents__agent_id__get by specifying the 'A2A card' resource. However, it assumes the agent understands what an 'A2A card' is without explanation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus siblings like get_agent_api_agents__agent_id__get (which returns full agent data) or discover_single_agent_api_registry_discover__agent_id__post. The description lacks prerequisites, exclusion criteria, or selection logic.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

a2a_public_discovery_api_agents_public_get (grade: C)

A2A Public Discovery

A2A-compatible open discovery endpoint.

Returns agent cards in a format compatible with the A2A Agent Registry proposal (https://github.com/a2aproject/A2A/discussions/741).

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- q (optional): Search query to filter agents
- top (optional)
- cursor (optional): Pagination cursor
- category (optional): Filter by category
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal detail beyond the response format. It does not mention authentication requirements, rate limits, pagination behavior (despite having a cursor parameter), or whether the endpoint is read-only (implied but not stated). The HTTP 200 response detail is boilerplate that adds little value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description uses markdown headers for structure, but includes a '### Responses' section detailing HTTP 200 status codes which is of limited utility for MCP tool selection. The content is moderately concise but contains some redundant formatting (repeating 'A2A Public Discovery' from the name).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The reference to the A2A Agent Registry proposal URL provides valuable standardization context. However, given the lack of output schema and annotations, the description could better explain what constitutes an 'agent card', expected result cardinality, or how this public discovery relates to the broader agent registry ecosystem.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 75% description coverage (top parameter lacks description). The description adds no parameter-specific context, examples, or formatting guidance beyond what the schema already provides. Since coverage is moderately high, it meets the baseline, but does not compensate for the missing 'top' parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description identifies the tool as an 'A2A-compatible open discovery endpoint' that 'Returns agent cards' in a specific format (A2A Agent Registry). However, it fails to distinguish this from similar sibling tools like 'search_agents_api_agents_search_get' or 'list_agents_api_agents_get', leaving ambiguity about when to prefer this over alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the numerous sibling discovery/search tools (e.g., 'search_agents', 'list_agents', 'discover_single_agent'). There are no prerequisites, exclusion criteria, or workflow context to help an agent select this endpoint appropriately.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
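The cursor-based pagination this review flags as undocumented can be exercised with a small client sketch. Only the query parameters (q, top, cursor, category) come from the schema; the host is hypothetical, the path is inferred from the tool name, and the response shape (an `agents` list plus a `cursor` field) is an assumption, since no output schema is published.

```python
import json
import urllib.parse
import urllib.request

BASE = "https://example-agentry-host"  # hypothetical host; not given on this page

def discovery_url(q=None, top=None, cursor=None, category=None):
    """Build the query URL for the A2A public discovery endpoint.

    Parameter names come from the tool's JSON schema; the /api/agents/public
    path is inferred from the tool name a2a_public_discovery_api_agents_public_get.
    """
    params = {k: v for k, v in {
        "q": q, "top": top, "cursor": cursor, "category": category,
    }.items() if v is not None}
    return f"{BASE}/api/agents/public?" + urllib.parse.urlencode(params)

def discover_all(q=None, category=None, fetch=None):
    """Yield agent cards across all pages, following the pagination cursor.

    The 'agents'/'cursor' response fields are assumptions; inject a custom
    fetch callable for testing or alternative transports.
    """
    fetch = fetch or (lambda url: json.load(urllib.request.urlopen(url)))
    cursor = None
    while True:
        page = fetch(discovery_url(q=q, category=category, cursor=cursor))
        yield from page.get("agents", [])
        cursor = page.get("cursor")
        if not cursor:
            break
```

Injecting `fetch` keeps the pagination logic testable without a live server.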

agent_payment_summary_api_payments_ecash_summary__agent_id__get (grade: C)

Agent Payment Summary

Get payment summary for an agent — totals, balance, tx count.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- agent_id (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses what data is returned (totals, balance, transaction count), but fails to mention safety characteristics (idempotency, read-only nature), authentication requirements, or rate limiting. The 'Responses' section is boilerplate and adds no behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively brief, but the opening 'Agent Payment Summary' header restates the tool's name without adding value. The '### Responses' section contains redundant boilerplate ('Successful Response (Success Response)') that consumes space without aiding tool selection.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter GET endpoint without output schema annotations, the description adequately explains the business purpose and return data structure. However, the complete omission of the 'agent_id' parameter documentation and lack of operational guidance leave notable gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage for the single required 'agent_id' parameter. The description mentions the tool operates 'for an agent' but never explicitly names the parameter, describes its format, or explains how to obtain a valid agent ID. Given the schema coverage gap, the description fails to compensate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a 'payment summary for an agent' and explicitly lists what data is returned ('totals, balance, tx count'). This provides a specific verb and resource. However, it does not explicitly differentiate from the similar sibling tool 'get_payment_profile_api_payments_ecash_profile__agent_id__get'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_payment_profile' or 'list_transactions_api_payments_ecash_transactions_get'. There are no prerequisites, conditions, or exclusion criteria mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_quote_api_payments_ecash_mint_quote__quote_id__get (grade: C)

Check Quote

Check the payment status of a mint quote.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- mint_url (optional; default: https://mint.minibits.cash/Bitcoin)
- quote_id (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Check' implies a read-only operation, the description fails to disclose idempotency, possible quote states (pending/paid/expired), error behaviors, or whether the operation is safe to retry.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief but includes boilerplate HTTP response documentation ('### Responses: **200**: Successful Response') that provides limited value for agent tool selection. The structure is readable but not optimally front-loaded with actionable guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple two-parameter status check without output schema or annotations, the description meets minimum viability by stating the core purpose. However, it lacks critical context about the quote lifecycle and parameter semantics that would be necessary for an agent to use this tool correctly without trial and error.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate for undocumented parameters. The description adds no meaning for 'quote_id' (which quote?) or 'mint_url' (why override the default?), leaving agents to infer parameter purposes solely from the schema titles.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Check[s] the payment status of a mint quote,' providing a specific verb and resource. However, it does not explicitly differentiate from sibling tools like 'create_quote_api_payments_ecash_mint_quote_post' or indicate this is a follow-up read operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., after creating a quote), nor are prerequisites mentioned. The description only states what the tool does, not the workflow context or when-not-to-use conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
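Because the quote lifecycle is undocumented, a caller typically has to poll this status check after creating a quote. A minimal polling sketch under stated assumptions: the host is hypothetical, the path is inferred from the tool name, and the `paid` response field is a guess, since the review notes the possible quote states are not disclosed.

```python
import time
import urllib.parse

BASE = "https://example-agentry-host"  # hypothetical host; not given on this page

# Default taken from the tool's own schema.
DEFAULT_MINT = "https://mint.minibits.cash/Bitcoin"

def check_quote_url(quote_id, mint_url=DEFAULT_MINT):
    """URL for the mint-quote status check; path inferred from the tool name
    check_quote_api_payments_ecash_mint_quote__quote_id__get."""
    return (f"{BASE}/api/payments/ecash/mint/quote/"
            f"{urllib.parse.quote(quote_id)}?"
            + urllib.parse.urlencode({"mint_url": mint_url}))

def wait_until_paid(quote_id, fetch, attempts=10, delay=2.0):
    """Poll until the quote reports paid; 'paid' is an assumed field.

    fetch is any callable mapping a URL to a decoded JSON response,
    so the loop can be tested without network access.
    """
    for _ in range(attempts):
        status = fetch(check_quote_url(quote_id))
        if status.get("paid"):
            return status
        time.sleep(delay)
    raise TimeoutError(f"quote {quote_id} not paid after {attempts} checks")
```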

claim_agent_api_agents_claim_post (grade: C)

Claim Agent

Developer claims ownership of an existing agent listing.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- name (optional)
- message (optional)
- agent_id (required)
- contact_email (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Zero annotations provided, so description carries full burden. It identifies the mutation type (ownership claim) but omits critical behavioral details: approval workflows, irreversibility, verification emails, or response content. The HTTP 200 response documentation is boilerplate with no semantic value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The core description is brief (one sentence), but wastes space on meaningless HTTP response boilerplate ('200: Successful Response') that adds no actionable information for the agent. Structure is front-loaded with redundant 'Claim Agent' header.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a 4-parameter mutation operation with zero schema documentation. High-stakes ownership transfer requires more context about side effects, return values, and parameter semantics than provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage (4 undocumented parameters). Description fails to compensate—it doesn't mention agent_id format, contact_email purpose (verification?), or whether name/message are required for the claim process. Critical gap for required fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the core action ('Developer claims ownership') and target resource ('existing agent listing'), distinguishing it from sibling register_agent which likely creates new agents. However, it lacks specifics on what 'claiming' entails (transfer vs. verification).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus register_agent, nor prerequisites for claiming (e.g., verification requirements, existing ownership status). No warnings about irreversibility or eligibility criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_melt_quote_api_payments_ecash_melt_quote_post (grade: B)

Create Melt Quote

Get a quote to melt ecash into a Lightning payment (NUT-05).

Used when an agent wants to pay a Lightning invoice using their ecash.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- agent_id (required)
- mint_url (optional)
- payment_request (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it identifies the operation as a 'quote' (implying a preliminary, temporary step), it fails to disclose critical behavioral traits: whether the quote expires, if funds are reserved/locked during the quote lifecycle, idempotency characteristics, or rate limits. The NUT-05 reference adds technical context but doesn't explain runtime behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The core operational description is concise (three sentences). However, the inclusion of the 'Responses' section adds wasteful boilerplate ('200: Successful Response') that provides no actionable information beyond standard HTTP semantics, especially given the absence of an output schema to describe. The structure separates usage guidance from technical response details, but the latter adds no value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a financial tool involving cryptocurrency (ecash/Lightning) with three undocumented parameters and no output schema, the description is insufficient. It explains the high-level concept but omits parameter details, error scenarios, the quote lifecycle, and what data the successful response contains. Given the complexity and lack of annotations, agents need more context to use this safely.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate by explaining parameters. It provides minimal semantic context—mentioning 'pay a Lightning invoice' hints that payment_request is the Lightning invoice, and 'agent' maps to agent_id—but does not explicitly document the three parameters (agent_id, payment_request, mint_url), their formats, or when mint_url is required versus null.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Get[s] a quote to melt ecash into a Lightning payment' and references the technical standard NUT-05. The term 'melt' effectively distinguishes this tool from its sibling create_quote_api_payments_ecash_mint_quote_post (which handles minting), though it could explicitly clarify the mint vs. melt distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear positive guidance: 'Used when an agent wants to pay a Lightning invoice using their ecash.' This gives agents a clear trigger condition. However, it lacks explicit negative guidance (when not to use) or direct reference to the minting alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
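A melt-quote request can be sketched from the schema alone: agent_id and payment_request (the Lightning invoice to pay, per NUT-05) are required, mint_url is optional. The helper below only builds the JSON body; the endpoint path, transport details, and response shape are not documented on this page, so they are deliberately left out.

```python
def melt_quote_body(agent_id, payment_request, mint_url=None):
    """JSON body for create_melt_quote_api_payments_ecash_melt_quote_post.

    agent_id and payment_request are the two required schema fields;
    mint_url is optional and, when omitted, presumably falls back to a
    server-side default (an assumption -- the schema gives no default here).
    """
    body = {"agent_id": agent_id, "payment_request": payment_request}
    if mint_url is not None:
        body["mint_url"] = mint_url
    return body
```

Keeping the body builder separate from the HTTP call makes the required/optional field logic trivially testable.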

create_quote_api_payments_ecash_mint_quote_post (grade: B)

Create Quote

Create a Lightning invoice to fund ecash minting (NUT-04).

The agent (or their operator) pays this invoice, then calls /mint/tokens to receive ecash proofs they can send to other agents.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- agent_id (required)
- mint_url (optional)
- amount_sats (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and succeeds in explaining the post-creation lifecycle: the invoice must be paid and `/mint/tokens` must be called to receive proofs. It references the NUT-04 standard, adding technical context not found in structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The workflow explanation is concise and valuable, but the '### Responses' section adds minimal value beyond stating '200: Successful Response.' The redundant 'Create Quote' header also wastes space without adding semantic content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero schema coverage and no output schema, the description inadequately compensates for missing parameter documentation and fails to describe the return value (e.g., quote_id, invoice string) necessary for the subsequent workflow step it mentions. The behavioral workflow description partially salvages this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description fails to document any of the three parameters (`agent_id`, `amount_sats`, `mint_url`). While it mentions 'agent' and funding abstractly, it does not map these concepts to the specific required parameters or explain the optional `mint_url`.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a Lightning invoice specifically to fund ecash minting (citing NUT-04 standard), distinguishing it from sibling melt/redeem operations. It specifies the exact verb (create/generate) and domain (ecash/Lightning).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description effectively outlines the multi-step workflow (create → pay → mint tokens), which implies when to use the tool. However, it lacks explicit guidance on when NOT to use it or direct comparison to the sibling `create_melt_quote` tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
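The create → pay → mint workflow the description outlines can be sketched as follows. Only the three schema parameters are documented; the host, the request-body layout, the endpoint path, and the returned fields (a quote id and a Lightning invoice, per the review's own guess) are assumptions.

```python
import json
import urllib.request

BASE = "https://example-agentry-host"  # hypothetical host; not given on this page

def mint_quote_body(agent_id, amount_sats, mint_url=None):
    """JSON body for create_quote_api_payments_ecash_mint_quote_post.

    agent_id and amount_sats are the required schema fields; mint_url is
    optional, presumably falling back to a server-side default.
    """
    body = {"agent_id": agent_id, "amount_sats": amount_sats}
    if mint_url is not None:
        body["mint_url"] = mint_url
    return body

def create_mint_quote(agent_id, amount_sats, mint_url=None):
    """POST the quote request (path inferred from the tool name).

    The workflow then continues out of band: pay the returned Lightning
    invoice, then call /mint/tokens (per the description) to receive
    ecash proofs.
    """
    req = urllib.request.Request(
        f"{BASE}/api/payments/ecash/mint/quote",
        data=json.dumps(mint_quote_body(agent_id, amount_sats, mint_url)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return json.load(urllib.request.urlopen(req))
```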

discover_single_agent_api_registry_discover__agent_id__post (grade: C)

Discover Single Agent

Discover / re-check a single agent's A2A Agent Card.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- agent_id (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Discover / re-check' suggests the tool may fetch from a remote source or update cached state, it fails to disclose whether this is a read-only operation, whether it updates registry state, or what the discovery process entails. The 'Responses' section is boilerplate HTTP documentation that adds no behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The core description is appropriately concise ('Discover / re-check a single agent's A2A Agent Card'), but the inclusion of the '### Responses' section with HTTP status code boilerplate is irrelevant for AI agent decision-making and reduces signal-to-noise ratio.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description should explain what the tool returns (e.g., what an A2A Agent Card contains) or side effects of discovery. Instead, it only provides generic 'Successful Response' text. For a tool interfacing with a registry, details about the discovery mechanism or return structure are missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% (agent_id has no description field). The description mentions 'single agent' which loosely maps to the agent_id parameter, but does not explain the parameter's format, expected values, or how to obtain a valid agent ID. With low schema coverage, the description fails to fully compensate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Discover[s] / re-check[s] a single agent's A2A Agent Card,' providing a specific verb and resource. The mention of 'single agent' helps distinguish it from the sibling 'trigger_discovery_cycle_api_registry_discover_post' (likely bulk), though it could better differentiate from 'get_agent_api_agents__agent_id__get'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions 're-check,' hinting that this can be used to refresh data, but provides no explicit when-to-use guidance or alternatives. It does not clarify when to use this registry discovery versus the public discovery endpoint ('a2a_public_discovery_api_agents_public_get') or direct agent retrieval ('get_agent_api_agents__agent_id__get').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_api_agents__agent_id__get (grade: A)

Get Agent

Get full details for a specific AI agent by ID.

Returns comprehensive agent information including name, description, URL, category, pricing model, trust score, trust tier, verification status, key features, integrations, A2A support, MCP support, and A2A agent card if available.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- agent_id (required): The unique agent identifier, e.g. 'agent-0001'. Found in search/list results.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It compensates well for the missing output schema by enumerating comprehensive return fields (name, URL, trust tier, etc.), but fails to disclose error behaviors (e.g., 404 if ID not found), authentication requirements, or safety profile (read-only vs destructive).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description includes redundant headers ('Get Agent') and API-documentation boilerplate ('### Responses', '**200**: Successful Response') that don't add value for an AI agent. While the core content is relevant, the structure could be more front-loaded by removing the HTTP metadata.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single-parameter simplicity and lack of output schema, the description adequately compensates by detailing the return payload comprehensively. However, it could be improved by noting the tool is read-only/safe to call, which would complete the behavioral profile missing from annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with the `agent_id` parameter fully documented in the schema (including format example 'agent-0001'). The description provides no additional parameter semantics beyond the schema, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'full details for a specific AI agent by ID' with specific verbs and resource. It distinguishes from siblings like `get_agent_trust` (which returns only trust data) and `search_agents` (which returns lists) by emphasizing comprehensive fields including trust score, A2A support, and MCP support.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the parameter description notes the ID is 'Found in search/list results' implying a workflow, the description lacks explicit when-to-use guidance contrasting it with `search_agents` or `list_agents`. No guidance on when not to use it or prerequisites beyond having the ID.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
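As a sketch of what these suggestions could look like in practice, here is a hypothetical rewrite of the get_agent description that front-loads purpose, discloses read-only behavior and error cases, and adds when-to-use guidance. The wording is illustrative, not the server's actual metadata:

```python
# Hypothetical rewrite of the get_agent tool description, folding in the
# review's suggestions: purpose first, behavior disclosure, usage guidance.
IMPROVED_DESCRIPTION = (
    "Get full details for one AI agent by ID (read-only, no side effects). "
    "Returns name, description, URL, category, pricing model, trust score, "
    "trust tier, verification status, integrations, and A2A/MCP support. "
    "Returns 404 if the agent_id does not exist. "
    "Use search_agents or list_agents first to obtain an agent_id "
    "(format: 'agent-0001'); use get_agent_trust if you only need trust data."
)

# A quick sanity check that the key disclosures are present.
REQUIRED_DISCLOSURES = ["read-only", "404", "agent_id", "search_agents"]
missing = [d for d in REQUIRED_DISCLOSURES if d not in IMPROVED_DESCRIPTION]
print(missing)  # an empty list means every disclosure is covered
```

A description in this shape addresses the Behavior, Usage Guidelines, and Completeness gaps flagged above in a single paragraph.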

get_agent_trust_api_agents__agent_id__trust_get (Grade: C)

Get Agent Trust

Get the trust report for an agent.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- agent_id (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to deliver. It mentions a 'trust report' without explaining what data it contains, whether the operation is idempotent, cached, or real-time. The included HTTP 200 response boilerplate adds no actionable behavioral context for an AI agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The core description ('Get the trust report for an agent') is appropriately concise and front-loaded. However, the inclusion of the '### Responses' section with HTTP 200 boilerplate wastes space without providing value to an AI agent selecting tools, slightly degrading the score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero schema coverage, no annotations, and no output schema, the description is insufficient for a complete understanding. For even a simple single-parameter read operation, it should explain what constitutes a trust report or what the agent can expect to receive, rather than relying solely on the parameter name and title.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% and the description fails to compensate. While 'for an agent' implicitly references the agent_id parameter, there is no explanation of the expected format (UUID? string?), where to obtain valid agent IDs, or whether this refers to the calling agent or a target agent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves a 'trust report for an agent' (specific verb + resource), distinguishing it from the sibling get_agent_api_agents__agent_id__get which likely retrieves general agent metadata. However, it lacks detail on what constitutes a 'trust report' (e.g., reputation scores, certificates, verification status).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like get_agent_api_agents__agent_id__get, nor any mention of prerequisites or conditions where this tool would be inappropriate. The description assumes the user already knows what an agent trust report is and why they need it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_badge_embed_code_api_badges__agent_id__embed_get (Grade: C)

Get Badge Embed Code

Return embed code snippets for an agent's badge.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- agent_id (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full disclosure burden. While it mentions returning embed code snippets, it fails to indicate if this is a safe read-only operation, whether the embed code includes scripts, or caching behavior. The Responses section appears to be auto-generated boilerplate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The core description is a single useful sentence, but it includes a '### Responses' section with boilerplate HTTP documentation that adds no value for an AI agent selecting the tool. Structure is front-loaded but not perfectly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool without an output schema, the description minimally covers the return value (embed code). However, the lack of parameter documentation and behavioral context leaves gaps that make the description barely adequate for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for the required 'agent_id' parameter, the description must compensate but completely omits any parameter discussion. The parameter name is somewhat self-documenting, but the description adds no semantic context about what constitutes a valid agent ID.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'Return[s] embed code snippets for an agent's badge,' distinguishing it from siblings like get_badge_json and get_badge_svg by specifying the embed format. However, it doesn't clarify what kind of embed code (HTML, iframe, JavaScript) is returned.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the sibling JSON/SVG badge endpoints, nor are there prerequisites mentioned (e.g., whether the agent must be public or claimed).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_badge_json_api_badges__agent_id__json_get (Grade: B)

Get Badge Json

Return badge metadata in shields.io endpoint format.

Can be used with: https://img.shields.io/endpoint?url=...

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- agent_id (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the shields.io integration pattern (behavioral trait), but omits safety characteristics like idempotency, caching behavior, or error responses (e.g., invalid agent_id).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The shields.io URL is high-value and appropriately placed. However, the '### Responses:' section appears to be auto-generated boilerplate that adds noise for an AI agent context, and the opening 'Get Badge Json' is redundant with the function name.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read operation, the description covers the basic integration context (shields.io) but lacks output schema details. Without an output schema, the description should ideally describe the JSON structure (label, message, color fields) returned by the endpoint.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage for the required 'agent_id' parameter. The description fails to compensate by explaining what constitutes a valid agent ID, its format, or where to obtain it. The parameter is only implicitly referenced via the API path structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States it returns 'badge metadata in shields.io endpoint format,' which clearly identifies the output format and distinguishes it from sibling tools get_badge_svg and get_badge_embed_code. However, the opening 'Get Badge Json' partially restates the function name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides the shields.io URL pattern ('https://img.shields.io/endpoint?url=...'), implying integration with external badge services. However, it lacks explicit guidance on when to choose this JSON variant over the SVG or embed code alternatives, and doesn't mention prerequisites like agent visibility.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
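The shields.io endpoint format referenced above is a small, fixed JSON shape (schemaVersion, label, message, plus optional styling fields), which is the structure the review suggests the description should spell out. A minimal validation sketch, with an illustrative payload rather than this API's actual response:

```python
# Minimal validator for the shields.io "endpoint" badge schema:
# a JSON object with schemaVersion: 1 plus label/message strings.
def is_valid_endpoint_badge(payload: dict) -> bool:
    return (
        payload.get("schemaVersion") == 1
        and isinstance(payload.get("label"), str)
        and isinstance(payload.get("message"), str)
    )

# Illustrative payload in the shape such an endpoint would return;
# "color" is one of the optional styling fields shields.io accepts.
example = {"schemaVersion": 1, "label": "trust", "message": "verified", "color": "green"}
print(is_valid_endpoint_badge(example))  # True
```

Documenting these three required fields in the tool description would close the output-schema gap noted under Completeness.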

get_badge_svg_api_badges__agent_id__svg_get (Grade: C)

Get Badge Svg

Return an SVG badge for an agent listing.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- style (optional, default: flat): Badge style (flat only for now)
- agent_id (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it states the tool returns an SVG badge, it includes a confusing Responses section claiming 'Content-Type: application/json' which contradicts the expected SVG return format. It lacks information on side effects, rate limits, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The core description is a single efficient sentence. However, the inclusion of the '### Responses' section appears to be boilerplate auto-documentation that adds little value and potentially creates confusion with the 'application/json' content-type claim. It is front-loaded but contains some waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (2 parameters, no nested objects) and lack of annotations or output schema, the description meets minimum viability by identifying the tool's basic function. However, gaps remain in parameter documentation, differentiation from sibling tools, and return value explanation (despite no output schema existing).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is only 50% (the `style` parameter is documented but `agent_id` is not). The description mentions 'agent listing' which loosely implies the agent_id parameter, but does not explain what constitutes a valid agent ID, its format, or where to obtain it. It fails to compensate for the schema's missing documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'Return[s] an SVG badge for an agent listing,' identifying the specific resource (SVG badge) and context (agent listing). It distinguishes from siblings like `get_badge_json` and `get_badge_embed_code` by specifying the SVG format, though it could clarify what the badge represents (e.g., trust score, verification status).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the sibling badge tools (`get_badge_json`, `get_badge_embed_code`). It does not explain when an SVG format is preferred over JSON or HTML embed codes, nor does it mention any prerequisites or caching considerations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_card_history_api_agents__agent_id__card_history_get (Grade: C)

Get Card History

Get AgentCard snapshot history for an agent.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- limit (optional)
- agent_id (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. While it mentions 'snapshot history,' it fails to explain pagination behavior (despite the limit parameter), result ordering (chronological?), what fields are included in historical snapshots, or whether deleted agents are included. The HTTP response boilerplate adds no behavioral value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief but poorly structured. The first line repeats the tool name ('Get Card History'), followed by a functional sentence, then unhelpful HTTP response boilerplate ('### Responses: **200**...') that wastes space without aiding tool selection. The core content is front-loaded but diluted by the boilerplate.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and zero parameter descriptions, the description is insufficient. For a history endpoint, it should clarify the time range covered, sort order, and snapshot granularity. The absence of annotations increases the description's documentation burden, which it fails to meet.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%—neither `agent_id` nor `limit` have schema descriptions. The description text completely omits parameter documentation, failing to explain that `limit` constrains results (1-50, default 10) or that `agent_id` identifies the target agent. The description adds zero semantic value beyond the schema structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves 'AgentCard snapshot history for an agent,' distinguishing it from the sibling tool `get_agent_api_agents__agent_id__get` which likely retrieves the current card state. However, it doesn't explain what constitutes a 'snapshot' (e.g., versioned changes, timestamped records).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like `get_agent_api_agents__agent_id__get` (current state vs. history) or `discover_single_agent_api_registry_discover__agent_id__post`. No prerequisites or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_intake_api_broker_intake__intake_id__get (Grade: B)

Get Intake

Check the status of a previously submitted broker intake request.

Returns the current status, any matched agents, and timestamps.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- intake_id (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses return values (status, matched agents, timestamps) which compensates for the missing output schema. However, it fails to explicitly declare the read-only/safe nature of the operation, error scenarios (e.g., invalid intake_id), or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The core description is efficiently front-loaded in two sentences. However, the 'Get Intake' header restates the function name redundantly, and the '### Responses' section contains generic boilerplate that adds minimal value beyond stating the JSON content type.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a low-complexity tool with one parameter and no output schema, the description adequately covers the basic purpose and return structure. However, gaps remain regarding parameter validation, error handling, and explicit safety guarantees that would make it fully complete given the lack of structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage for the intake_id parameter. The description compensates by contextualizing it as referring to a 'previously submitted broker intake request,' implying the parameter accepts an identifier returned from a prior submission. It meets the baseline for a single-parameter tool but lacks explicit format or source documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Check[s] the status of a previously submitted broker intake request,' providing a specific verb and resource. It distinguishes from the sibling submit_intake tool by emphasizing 'previously submitted,' though it could explicitly name the sibling tool for maximum clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'previously submitted' implies this tool should be used after creating an intake (likely via submit_intake_api_broker_intake_post), providing implied workflow context. However, it lacks explicit when-to-use guidance or explicit comparison to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
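The implied workflow (submit an intake, then poll it by the returned ID) can be sketched as follows. The function names and response fields are assumptions inferred from the sibling tool names, not documented behavior:

```python
# Hypothetical two-step broker intake flow inferred from the tool names:
# submit_intake returns an intake_id, which get_intake is then polled with.
def run_intake_flow(submit_intake, get_intake, request: dict) -> dict:
    intake_id = submit_intake(request)["intake_id"]  # field name assumed
    return get_intake(intake_id)  # status, matched agents, timestamps

# Stub implementations standing in for the real MCP tool calls.
def fake_submit(request):
    return {"intake_id": "intake-001"}

def fake_get(intake_id):
    return {"intake_id": intake_id, "status": "matched",
            "matched_agents": ["agent-0001"]}

result = run_intake_flow(fake_submit, fake_get, {"need": "translation agent"})
print(result["status"])  # matched
```

Stating this submit-then-poll relationship explicitly in both descriptions is the kind of "use X before Y" guidance the rubric rewards.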

get_mint_info_api_payments_ecash_mint_info_get (Grade: C)

Get Mint Info

Get info about a Cashu mint (NUT-06).

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- mint_url (optional, default: https://mint.minibits.cash/Bitcoin)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal value. While 'NUT-06' implies a protocol-specified metadata response, the description does not clarify what information is returned (capabilities, fees, contact info), caching behavior, or authentication requirements. The '200: Successful Response' boilerplate adds no behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description suffers from poor structure, beginning with redundant titling ('Get Mint Info') followed by a near-identical sentence. It includes irrelevant HTTP response documentation ('### Responses: **200**: Successful Response...') which wastes tokens and does not aid agent decision-making.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the Cashu ecash ecosystem (evidenced by 30+ sibling tools) and lack of output schema, the description is insufficient. It fails to explain the content of the NUT-06 response, why an agent would query mint info versus other mint endpoints, or how this fits into the broader payment workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% (the `mint_url` parameter has no description field), yet the tool description fails to compensate by explaining what a mint URL is, what format it expects, or that it defaults to a specific minibits endpoint. The parameter's purpose is only implicitly inferred from the tool's stated purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with tautological repetition of the tool name ('Get Mint Info'), but partially recovers by specifying the domain ('Cashu mint') and protocol standard ('NUT-06'). However, it fails to distinguish from sibling tools like `get_mint_keys` or `get_mint_keysets`, leaving ambiguity about what specific 'info' this retrieves.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the numerous sibling mint-related tools (e.g., `get_mint_keys`, `create_quote`). There are no prerequisites, conditions, or workflow positioning hints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
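For context on what these mint tools wrap: the Cashu NUTs define simple read-only HTTP routes on the mint, which is why each tool maps to one spec. A route-construction sketch, assuming the v1 routes from the public NUT specs and the default mint URL from the schema above:

```python
# Route construction for the Cashu mint endpoints these tools wrap,
# per the NUT specs (NUT-06 info, NUT-01 keys, NUT-02 keysets).
DEFAULT_MINT = "https://mint.minibits.cash/Bitcoin"  # default from the schema

NUT_ROUTES = {
    "info": "/v1/info",        # NUT-06: mint metadata (name, contact, supported NUTs)
    "keys": "/v1/keys",        # NUT-01: active public keys
    "keysets": "/v1/keysets",  # NUT-02: all keysets, including inactive ones
}

def mint_endpoint(kind: str, mint_url: str = DEFAULT_MINT) -> str:
    # Normalize trailing slashes so defaults and user-supplied URLs both work.
    return mint_url.rstrip("/") + NUT_ROUTES[kind]

print(mint_endpoint("info"))  # https://mint.minibits.cash/Bitcoin/v1/info
```

Naming the route and summarizing the NUT-06 payload in the description would resolve most of the Behavior and Completeness gaps noted above.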

get_mint_keys_api_payments_ecash_mint_keys_get (Grade: C)

Get Mint Keys

Get active public keys from a Cashu mint (NUT-01).

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema):
- mint_url (optional, default: https://mint.minibits.cash/Bitcoin)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It adds valuable context by specifying 'active' keys (implying non-expired/revoked) and citing NUT-01 protocol compliance. However, it omits safety characteristics (idempotency, read-only nature), error conditions, or what constitutes a valid mint_url format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The core description is an efficient two sentences, but the inclusion of the '### Responses' boilerplate adds noise without value—the 200 response description merely states 'Successful Response' without detailing the key format or structure. The actual useful description is buried below a tautological header.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter retrieval tool, the description adequately explains the high-level purpose but lacks completeness due to missing parameter documentation and no output schema. Given the lack of annotations and response schema, the description should have detailed the expected return value (the key structure) or authentication requirements.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% for the mint_url parameter, yet the description fails to compensate. While it mentions 'Cashu mint' conceptually, it does not explain the mint_url parameter, its default value (https://mint.minibits.cash/Bitcoin), or expected URL format, leaving the agent to infer the connection.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves 'active public keys from a Cashu mint' and references the NUT-01 protocol standard, providing specific verb, resource, and technical context. It effectively distinguishes from siblings like get_mint_keysets (which returns keyset metadata) by specifying 'active public keys'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus related siblings such as get_mint_keysets or get_mint_info. There are no exclusions, prerequisites, or workflow context to help an agent determine if this is the correct mint-related endpoint for their needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_mint_keysets_api_payments_ecash_mint_keysets_get (Grade: C)

Get Mint Keysets

Get all keysets from a Cashu mint (NUT-02).

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema)

mint_url (optional; no schema description; default: https://mint.minibits.cash/Bitcoin)

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal context. While it cites NUT-02 compliance, it fails to indicate whether this is a safe read-only operation, idempotent, or if there are rate limits. The 'Responses' section provides only generic HTTP status information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description wastes significant space on an irrelevant HTTP 'Responses' section (### Responses, 200 status codes), which is inappropriate for an MCP tool interface. The actual functional description is buried after a tautological header ('Get Mint Keysets').

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter lookup tool with no output schema, the description is inadequate. It lacks parameter documentation, usage context against siblings, and behavioral safety indicators. The NUT-02 reference is the only saving grace for domain context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (the mint_url parameter has no description field), and the description fails to compensate. It mentions fetching 'from a Cashu mint' but does not explicitly document the mint_url parameter, its default value ('https://mint.minibits.cash/Bitcoin'), or expected format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
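The keys/keysets distinction the description leaves implicit can be made concrete. A minimal sketch, assuming a NUT-02-shaped response of {"keysets": [{"id", "unit", "active"}]}; the sample payload is illustrative, not this tool's actual output:

```python
def active_keyset_ids(keysets_response: dict) -> list[str]:
    """Pick the active keyset IDs out of an (assumed) NUT-02 keysets response."""
    return [ks["id"] for ks in keysets_response.get("keysets", []) if ks.get("active")]

# Illustrative sample: one active and one retired keyset.
sample = {
    "keysets": [
        {"id": "009a1f293253e41e", "unit": "sat", "active": True},
        {"id": "00ad268c4d1f5826", "unit": "sat", "active": False},
    ]
}
```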

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action ('Get') and resource ('Mint Keysets') and references the NUT-02 Cashu protocol standard, which helps distinguish it from siblings like get_mint_keys and get_mint_info. However, it assumes domain knowledge of what a 'keyset' is without explaining the distinction from 'keys'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus siblings like get_mint_keys or get_mint_info, nor are there any prerequisites mentioned (e.g., knowing the mint URL). The description only states what the tool does, not when to invoke it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_payment_profile_api_payments_ecash_profile__agent_id__get (Grade C)

Get Payment Profile

Get an agent's ecash payment profile.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema)

agent_id (required; no schema description)

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get' implies a read operation, the description does not explicitly confirm it is non-destructive, safe to retry, or idempotent. It also fails to describe what constitutes a 'payment profile' (balances, preferences, limits) or what happens if the agent_id is invalid.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively short, but includes irrelevant HTTP-specific documentation ('### Responses: **200**: Successful Response') that provides no value to an AI agent selecting tools. The front-loaded title 'Get Payment Profile' restates the function, and the structure mixes operational description with HTTP protocol details inefficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, and the presence of closely related sibling tools, the description is insufficient. It does not explain what data fields comprise an 'ecash payment profile' or how this relates to the broader ecash payment system (mint, quotes, transactions), leaving significant gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage for the 'agent_id' parameter. The description mentions 'agent's' which loosely maps to the parameter, but provides no format guidance (UUID vs username), validation rules, or semantics about which agent's profile can be retrieved (self vs others). With zero schema coverage, the description should compensate more than it does.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves an 'agent's ecash payment profile' with a specific verb and resource. However, it does not explicitly differentiate from the sibling tool 'agent_payment_summary_api_payments_ecash_summary__agent_id__get' (profile vs summary) or 'update_payment_profile_api_payments_ecash_profile__agent_id__put', leaving potential ambiguity about which to use for payment-related queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like the payment summary endpoint or update endpoint. It omits prerequisites (e.g., whether the agent must be registered first) and gives no indication of error conditions or when the profile might be unavailable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_registry_stats_api_registry_stats_get (Grade B)

Get Registry Stats

Get registry-wide statistics.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema)

No parameters

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention auth requirements, rate limits, caching behavior, or whether the statistics are real-time or cached. It only notes the HTTP 200 response status, which provides minimal behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the core description is brief, it includes an unnecessary '### Responses' section with HTTP status boilerplate that is irrelevant for AI agent tool selection. This cruft reduces the signal-to-noise ratio.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema and annotations, the description is minimally adequate for a zero-parameter endpoint but lacks specifics about what statistics are returned (e.g., total agents, categories). It meets the bare minimum but leaves significant gaps for an agent expecting to understand the return value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, establishing a baseline score of 4. The description correctly implies no filtering is needed via 'registry-wide,' which aligns with the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool 'Get[s] registry-wide statistics,' providing a clear verb and resource. It sufficiently distinguishes from siblings like list_agents or search_agents by specifying 'statistics' rather than individual records, though it could clarify what specific statistics (e.g., count, usage metrics) are returned.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like list_agents or search_agents, nor does it mention prerequisites such as authentication requirements. The usage context must be inferred entirely from the tool name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_scan_results_api_scanner_results_get (Grade C)

Get Scan Results

Get recent A2A Agent Card scan results.

Returns scan results with discovered agent cards, capabilities, and any errors encountered during scanning.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema)

limit (optional; no schema description)
offset (optional; no schema description)

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden but offers minimal behavioral context. It does not explain how 'recent' results are cached, pagination behavior beyond the raw parameters, rate limits, or authentication requirements. The 'Responses' section merely repeats standard HTTP 200 boilerplate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately concise but contains redundant elements. The 'Get Scan Results' header duplicates the first sentence, and the 'Responses' section adds little value for an AI agent. The core content could fit in 2-3 sentences without the markdown ceremony.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of annotations and output schema, the description is insufficient. While it hints at response contents ('agent cards, capabilities, errors'), it lacks critical context: result freshness guarantees, pagination guidance, error handling patterns, or the relationship between offset/limit and the total result set.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage for the limit and offset parameters, and the description completely fails to compensate. There is no mention of pagination semantics, default behaviors (50 results), or maximum limits (200).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves 'recent A2A Agent Card scan results' and mentions the returned content ('discovered agent cards, capabilities, and any errors'). The verb 'Get' and resource 'Scan Results' are explicit, distinguishing it from the sibling trigger_scan tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the sibling trigger_scan_api_scanner_scan_post or other discovery tools. It fails to mention the workflow relationship (likely: trigger scan first, then poll/get results) or when results are available.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_agents_api_agents_get (Grade A)

List Agents

List AI agents in the Agentry directory with optional filtering and pagination.

Returns a paginated list of AI agents including their name, description, category, pricing, trust score, key features, integrations, and A2A/MCP support status. The directory contains 122+ agents across 11 categories.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema)

limit (optional): Maximum number of agents to return per page. Range: 1-100.
offset (optional): Number of agents to skip for pagination. Use with limit for paging.
category (optional): Filter agents by category name (e.g. 'Sales & Outreach', 'Customer Support', 'Development Tools'). Returns all categories if omitted.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and performs well. It discloses the paginated nature of results and comprehensively lists returned fields (name, description, pricing, trust score, etc.), giving clear expectations of the response structure. It does not mention rate limits or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The core content is appropriately front-loaded, but the description contains structural waste: the redundant 'List Agents' header (repeating the tool name) and the '### Responses' boilerplate section which provides no value to an AI agent selecting the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description adequately compensates by detailing the returned data structure and pagination behavior. For a simple 3-parameter read-only endpoint, the description provides sufficient context to invoke the tool correctly, though it could clarify the relationship between the 'limit' parameter and the 122+ agent dataset size.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'optional filtering and pagination' in the abstract but does not add specific semantic guidance for individual parameters (e.g., recommended default behaviors or category selection strategies) beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('List AI agents') and resource ('Agentry directory'). Adds valuable scope context ('122+ agents across 11 categories') that distinguishes it from generic agent listings. However, it does not explicitly differentiate from the sibling 'search_agents' endpoint.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context through 'optional filtering and pagination,' suggesting it's for browsing the directory. However, it lacks explicit guidance on when to use this versus the 'search_agents' sibling or when to use the 'category' filter versus other filtering methods.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_categories_api_agents_categories_get (Grade A)

List Categories

List all agent categories with counts.

Returns every category in the directory along with the number of agents in each. Useful for building category filters or understanding the directory's coverage areas.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Example Response:

[
  {
    "category": "Category",
    "count": 1
  }
]
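The example response is enough to build on. A short sketch deriving a category-filter list and a total agent count from the [{"category", "count"}] shape shown above:

```python
def category_names(items: list[dict]) -> list[str]:
    """Category names for populating a filter UI or prompt."""
    return [c["category"] for c in items]

def total_agents(items: list[dict]) -> int:
    """Sum per-category counts into a directory-wide total."""
    return sum(c["count"] for c in items)

# Using the example response from the description:
cats = [{"category": "Category", "count": 1}]
```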
Parameters (JSON Schema)

No parameters

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It compensates well by including an example response showing the exact return structure, but lacks explicit statements about safety (read-only nature), authentication requirements, or error conditions that would normally be covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with the purpose front-loaded, followed by use cases and a helpful example response. The markdown headers ('### Responses:') add slight verbosity, but the example JSON provides substantial value that justifies the length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (zero parameters, simple list operation), the description is adequately complete. It compensates for the missing output schema by providing an example response, and explains the scope ('every category'). It lacks only safety/auth annotations to be fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. According to calibration rules, 0 parameters warrants a baseline score of 4. The description appropriately implies no filtering is possible ('Returns every category'), confirming the empty schema semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'List[s] all agent categories with counts', providing a specific verb, resource, and distinguishing attribute. It clearly differentiates from sibling tools like list_agents_api_agents_get by focusing on taxonomy/metadata rather than individual agent records.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context ('Useful for building category filters or understanding the directory's coverage areas'), indicating when to invoke the tool. However, it does not explicitly name sibling alternatives (e.g., 'use list_agents to get actual agent details rather than categories').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_ecash_agents_api_payments_ecash_agents_get (Grade B)

List Ecash Agents

List all agents that have ecash payments enabled.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema)

No parameters

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal information. It mentions a 200 JSON response but omits pagination behavior, rate limits, authentication requirements, and what data fields are returned for each agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains slight redundancy by repeating 'List Ecash Agents' as a header then 'List all agents...' as the opening sentence. The inclusion of the HTTP Response section (### Responses) adds structured but potentially unnecessary detail for an MCP tool description, though it does not significantly detract from readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of both annotations and an output schema, the description should explain the return value structure or at least key fields returned for each agent. It only states 'Successful Response (Success Response)' and 'application/json', leaving the agent data structure undocumented.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, which establishes a baseline score of 4. The description does not need to compensate for missing parameter documentation, though it could have clarified why no filtering parameters are available (e.g., 'returns all enabled agents without filtering').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (List) and resource (Ecash Agents), and explicitly qualifies the scope ('that have ecash payments enabled'). This effectively distinguishes it from the sibling tool 'list_agents_api_agents_get' which presumably lists all agents without this filter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives like the generic 'list_agents_api_agents_get' or agent-specific payment tools. While the filtering criterion is mentioned, there are no 'when to use' or 'when not to use' directives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_transactions_api_payments_ecash_transactions_get (Grade C)

List Transactions

List ecash transactions, optionally filtered by agent or type.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema)

limit (optional; no schema description)
offset (optional; no schema description)
tx_type (optional; no schema description)
agent_id (optional; no schema description)

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. While 'List' implies a read-only operation, the description fails to disclose pagination behavior, default sorting, maximum limits, or what transaction data is returned (only noting generic 'Successful Response').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description wastes space with redundant titling ('List Transactions' followed by 'List ecash transactions') and includes boilerplate HTTP response metadata ('### Responses:', '**200**:') that provides no value to an AI agent selecting tools. Not every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with zero schema coverage, no annotations, and no output schema, the description is inadequate. It lacks explanations of the pagination model, transaction type enumeration, or response structure necessary for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate for all 4 parameters. It only loosely references 'agent' and 'type' filters (agent_id and tx_type) but completely omits the pagination parameters (limit/offset) and provides no guidance on valid tx_type values or syntax.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the core action ('List') and resource ('ecash transactions'), and distinguishes the general transaction listing from sibling tools like 'agent_payment_summary' or 'check_quote'. However, it misses explicit differentiation from other payment-related list operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions optional filtering ('by agent or type') but provides no guidance on when to use this versus the sibling 'agent_payment_summary' tool, no prerequisites for access, and no warnings about pagination requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

payment_required_api_payments_ecash_402__agent_id__get (Grade A)

Payment Required

Return a 402 Payment Required response with X-Cashu headers.

This endpoint lets agents advertise that they charge for services. Calling agents receive:

  • HTTP 402 status

  • X-Cashu header with mint + amount info

  • Body with payment instructions

The calling agent then mints a token and sends it via /send or includes it in the X-Cashu request header.
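The handshake above can be sketched from the calling agent's side. The "mint=...; amount=...; unit=..." header format below is an assumption; the description says only that X-Cashu carries mint and amount info:

```python
def parse_x_cashu(header: str) -> dict:
    """Parse an assumed semicolon-delimited key=value X-Cashu header into a dict."""
    fields = {}
    for part in header.split(";"):
        key, _, value = part.strip().partition("=")
        fields[key] = value
    return fields

# Illustrative 402 payment demand a calling agent might receive:
demand = parse_x_cashu("mint=https://mint.minibits.cash/Bitcoin; amount=21; unit=sat")
```

After parsing, the caller would mint a token for the demanded amount and retry via /send or the X-Cashu request header, as the description outlines.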

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema)

agent_id (required; no schema description)

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses the 402 status code, X-Cashu headers containing mint+amount info, and payment instructions in the body. However, it confusingly lists '**200**: Successful Response' at the end, contradicting the 402 claim, and fails to clarify if this is a safe read operation or creates a payment request record.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The main description is front-loaded and efficient. However, the '### Responses:' section at the end creates confusion by claiming a 200 status when the tool is explicitly for returning 402. This section appears to be OpenAPI spec leakage that should be removed or clarified.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the protocol complexity (Cashu ecash 402 pattern) and lack of annotations/output schema, the description adequately explains the interaction contract. However, gaps remain: the undocumented agent_id parameter, the 200/402 ambiguity, and missing safety/idempotency guidance prevent a higher score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (agent_id has no description), so the description must compensate. It fails to do so—never explaining what the agent_id represents (the agent being queried for payment requirements), its format, or constraints. The parameter is effectively undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Return[s] a 402 Payment Required response with X-Cashu headers' and specifies it 'lets agents advertise that they charge for services.' This distinguishes it from siblings like send_ecash or receive_ecash which handle actual payment transfer, while this one initiates the payment requirement protocol.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the specific use case (advertising charges) and outlines the interaction flow: calling agents receive 402 status and headers, then mint tokens and send via /send (referencing the sibling send tool). It lacks explicit 'when not to use' guidance but provides clear contextual usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
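The 402 flow described in the review above (receive status and headers, mint tokens, then deliver via /send) can be sketched client-side. This is a hypothetical illustration only: the header names `X-Cashu-Mint` and `X-Cashu-Amount` are assumptions, since the tool description does not document the actual header layout.

```python
# Hypothetical sketch of the client-side 402 payment flow.
# Header names are assumptions; the real layout is undocumented.

def next_payment_step(status: int, headers: dict) -> dict:
    """Decide what a calling agent should do after hitting a paid endpoint."""
    if status != 402:
        return {"action": "proceed"}  # no payment required
    mint = headers.get("X-Cashu-Mint")
    amount = int(headers.get("X-Cashu-Amount", "0"))
    # Per the description: mint tokens at the advertised mint,
    # then deliver them via the sibling /send endpoint.
    return {"action": "mint_and_send", "mint": mint, "amount_sats": amount}

step = next_payment_step(
    402, {"X-Cashu-Mint": "https://mint.example", "X-Cashu-Amount": "21"}
)
```

A non-402 status falls through to "proceed", matching the description's framing that the 402 response itself is the advertisement of payment requirements.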

receive_ecash_api_payments_ecash_receive_postCInspect

Receive Ecash

Receive/redeem a Cashu ecash token.

The agent swaps the received proofs for new ones at the mint, ensuring the sender can't double-spend. This is the standard Cashu receive flow (NUT-03 swap).

Responses:

200: Successful Response (Success Response) Content-Type: application/json

ParametersJSON Schema
NameRequiredDescriptionDefault
tokenYes
agent_idYes
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully explains the internal Cashu protocol mechanics (proof swapping, NUT-03 standard) but fails to disclose critical operational traits like whether the operation is destructive (token consumption), idempotent, or what error conditions might occur. The 'Responses' section adds minimal value beyond HTTP status semantics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains redundant elements ('Receive Ecash' header restates the tool name) and a verbose 'Responses' section that appears to be auto-generated OpenAPI documentation rather than useful selection guidance. The core technical explanation is appropriately concise, but structural waste reduces clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a financial/cryptographic operation, the description provides adequate protocol context (NUT-03) but lacks completeness regarding error handling, parameter specifics, and destructive nature of the operation. Given the absence of annotations and output schema, additional safety and operational context was needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to fully document the 'token' and 'agent_id' parameters. While 'token' is implicitly referenced ('Cashu ecash token'), its format/encoding is not specified, and 'agent_id' is not mentioned at all. The description does not compensate for the lack of schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Receive[s]/redeem[s] a Cashu ecash token' and explains the cryptographic mechanism (swapping proofs at the mint to prevent double-spend). It references the specific protocol (NUT-03 swap), providing domain-specific clarity. However, it does not explicitly differentiate from the sibling verify_token tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description explains what the tool does internally, it provides no explicit guidance on when to use this tool versus alternatives like verify_token_api_payments_ecash_verify_post (for checking validity without consuming) or send_ecash_api_payments_ecash_send_post. No prerequisites or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
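The receive call above takes two undocumented parameters, token and agent_id. A minimal request-body sketch, assuming the standard serialized Cashu token format (cashuA prefix) — the token string below is a made-up placeholder, and the format check is an assumption, not documented behavior:

```python
import json

def build_receive_request(token: str, agent_id: str) -> str:
    """Build the JSON body for the ecash receive endpoint.

    Parameter names come from the tool's schema; the cashuA prefix
    check is an assumption based on the serialized Cashu token format.
    """
    if not token.startswith("cashuA"):
        raise ValueError("expected a serialized Cashu token (cashuA prefix)")
    return json.dumps({"token": token, "agent_id": agent_id})

body = build_receive_request("cashuAexampletoken", "agent-1")
```

Note the review's caveat still applies: the server-side NUT-03 swap consumes the token, so retrying with the same token is not necessarily safe.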

register_agent_api_agents_register_postAInspect

Register Agent

Register a new AI agent in the Agentry directory.

Submit an AI agent for listing. The agent will be added immediately and appear in search results. Optional fields like pricing, features, and integrations improve discoverability. An A2A discovery scan will be triggered automatically if the agent URL is reachable.

Responses:

201: Successful Response (Success Response) Content-Type: application/json

ParametersJSON Schema
NameRequiredDescriptionDefault
urlYesPublic URL where the agent is accessible
nameYesName of the AI agent to register
categoryNoCategory for the agent (e.g. Sales & Outreach, Customer Support, Development Tools)Uncategorized
a2a_supportNoWhether the agent supports the A2A protocol (Yes/No/Unknown)Unknown
descriptionNoA brief description of what the agent does and its capabilities
mcp_supportNoWhether the agent exposes MCP tools (Yes/No/Unknown)Unknown
integrationsNoComma-separated list of integrations (e.g. Slack, Salesforce, GitHub)
key_featuresNoComma-separated list of key features and capabilities
contact_emailNoContact email for the agent developer/company
pricing_modelNoPricing model: Free, Freemium, Subscription, Pay-per-use, Contact for pricingUnknown
starting_priceNoStarting price or pricing tier (e.g. Free, $10/mo, Contact)Unknown
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses side effects (immediate listing appearance, automatic A2A discovery scan trigger) but omits critical mutation details: idempotency behavior, conflict handling if agent exists, auth requirements, and what data the 201 response contains.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose. The 'Responses' section at the end is somewhat boilerplate and adds minimal value since it lacks payload details, but overall the text is efficient with little waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For an 11-parameter write operation with no output schema, the description covers immediate effects and optional field purposes. However, it fails to describe the return value structure (critical for obtaining the created agent ID) and lacks error scenario coverage, leaving gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds value by explaining that optional fields (pricing, features, integrations) improve discoverability, providing semantic motivation for populating specific parameters beyond the schema's type definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Register[s] a new AI agent in the Agentry directory' with specific verbs (register, submit, listing). It distinguishes from siblings implicitly by emphasizing 'new' agent registration, though it could explicitly contrast with claim_agent for clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('Submit an AI agent for listing') and mentions that optional fields 'improve discoverability,' suggesting when to populate them. However, it lacks explicit guidance on when to use this versus the sibling claim_agent tool or prerequisites for registration.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
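The registration schema above is fully documented, so a payload can be assembled directly from the parameter table. A sketch, applying the table's stated defaults (category "Uncategorized", support flags and pricing "Unknown") — the URL and agent name are hypothetical:

```python
def build_registration(url: str, name: str, **optional) -> dict:
    """Assemble a register_agent payload from the documented schema.

    Only url and name are required; the defaults mirror the
    parameter table above.
    """
    payload = {
        "url": url,
        "name": name,
        "category": "Uncategorized",
        "a2a_support": "Unknown",
        "mcp_support": "Unknown",
        "pricing_model": "Unknown",
        "starting_price": "Unknown",
    }
    payload.update(optional)  # caller-supplied optional fields win
    return payload

req = build_registration(
    "https://agent.example", "Example Agent",
    integrations="Slack, GitHub",  # comma-separated, per the schema
)
```

Per the description, listing is immediate and an A2A discovery scan fires if the URL is reachable, so the url value should be live before calling.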

search_agents_api_agents_search_getBInspect

Search Agents

Search the Agentry AI agent directory by keyword.

Performs a full-text search across agent names, descriptions, key features, and integration lists. Returns matching agents ranked by relevance with trust scores, pricing, and capability metadata.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

ParametersJSON Schema
NameRequiredDescriptionDefault
qNoSearch keyword to match against agent name, description, features, and integrations. Examples: 'customer support', 'slack', 'code review', 'sales automation'.
limitNoMaximum number of search results to return. Range: 1-100.
offsetNoNumber of results to skip for pagination.
categoryNoOptionally narrow search results to a specific category (e.g. 'Sales & Outreach', 'Customer Support').
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively documents the search scope (full-text across names, descriptions, features, integrations) and return format (ranked results with metadata). However, it omits safety information (read-only status), rate limits, and error handling behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately front-loaded with the core purpose and search behavior in the first two sentences. The 'Responses' section at the end contains minimal value (just stating 'Successful Response'), but the overall length is reasonable and sentences are information-dense.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 100% schema coverage, the description adequately explains the tool's purpose and return data without needing to reiterate parameter details. However, it should explicitly note that all parameters are optional (all have defaults), and the lack of output schema means the text description of returns is necessary but incomplete regarding specific data types and structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with each parameter (q, limit, offset, category) fully documented with types, defaults, and constraints. The description mentions searching 'by keyword' which aligns with the `q` parameter, but adds no additional semantic details, examples, or validation rules beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a full-text search of the Agentry AI agent directory using keywords, specifying searchable fields (names, descriptions, features, integrations) and return data (trust scores, pricing, metadata). However, it does not explicitly distinguish this from sibling tools like `list_agents_api_agents_get` or `get_agent_api_agents__agent_id__get`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains what the tool does but provides no guidance on when to select it over alternatives such as `list_agents_api_agents_get` for browsing all agents or `get_agent_api_agents__agent_id__get` for retrieving specific agents. There are no stated prerequisites, exclusions, or conditions for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

send_ecash_api_payments_ecash_send_postAInspect

Send Ecash

Send ecash tokens from one agent to another.

Two modes:

  1. Provide a pre-minted token (cashuA...) — we verify and record the transfer

  2. No token — returns instructions for the sender to mint one first

The token is bearer — whoever holds it can redeem it. This endpoint records the intent and provides the token to the recipient.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

ParametersJSON Schema
NameRequiredDescriptionDefault
memoNo
tokenNo
amount_satsYes
sender_agent_idYes
recipient_agent_idYes
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses critical behavioral traits: the bearer-token security model ('whoever holds it can redeem it'), the dual-mode operation, and the intent-recording mechanism. It lacks details on idempotency, side effects (e.g., whether the sender's token is burned), or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The structure is logical with a clear header, purpose statement, enumerated modes, and security warning. It is appropriately sized. The 'Responses' section at the end is boilerplate filler that adds no value, preventing a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description adequately explains the core transaction flow and bearer-token risks. However, it lacks specifics on return value structures (beyond 'instructions' for mode 2) and does not document all parameters, leaving gaps for a financial operation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate. It explicitly documents the 'token' parameter (including the 'cashuA...' format) and implicitly references sender/recipient agents and amounts through the workflow description. However, it completely omits the 'memo' parameter and does not clarify parameter relationships (e.g., that 'amount_sats' is required when 'token' is null).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Send[s] ecash tokens from one agent to another' with a specific verb and resource. It distinguishes from siblings like 'receive_ecash' and 'verify_token' by focusing on the outbound transfer workflow and detailing the two operational modes (pre-minted token vs. mint instructions).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context through the two-mode explanation (providing a token vs. receiving mint instructions), effectively guiding when to use each mode. However, it does not explicitly name sibling alternatives like 'create_quote' or 'receive_ecash' for scenarios where this tool is inappropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

send_outreach_api_admin_outreach_postCInspect

Send Outreach

Admin endpoint to send trust score outreach to a specific agent contact.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_idYes
contact_emailYes
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It identifies the action as trust score outreach but fails to explain the delivery mechanism (email, notification, etc.), side effects, idempotency, or whether the operation is reversible.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The core description ('Admin endpoint to send trust score outreach...') is concise and front-loaded. However, the inclusion of the HTTP Responses section ('200: Successful Response') adds boilerplate noise that provides no value for tool selection.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a two-parameter tool with simple string inputs, the description provides minimal viable context. However, given the lack of annotations, output schema, and parameter descriptions, it omits important behavioral context about what 'sending outreach' actually entails.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate. While it mentions 'agent' and 'contact' loosely mapping to the two parameters, it does not clarify that agent_id identifies the target agent or specify the expected format/role of contact_email.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action ('send trust score outreach'), the target ('specific agent contact'), and the authorization level ('Admin endpoint'). It distinguishes from sibling discovery/read tools (e.g., get_agent_trust) by specifying this is an administrative outreach action rather than a data retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While 'Admin endpoint' hints at required privileges, there is no explicit guidance on when to use this versus alternatives like get_agent_trust, nor any mention of prerequisites, rate limits, or conditions under which outreach should not be sent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

submit_intake_api_broker_intake_postBInspect

Submit Intake

Submit a broker intake request to find the right AI agent for your needs.

Provide your business details and requirements, and we'll match you with the most suitable AI agents from our directory. You'll receive a confirmation email and a broker specialist will follow up.

Responses:

201: Successful Response (Success Response) Content-Type: application/json

ParametersJSON Schema
NameRequiredDescriptionDefault
emailNo
needsNo
toolsNo
budgetNo
urgencyNoexploring
business_nameNo
business_typeNo
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context about side effects (confirmation email sent, human follow-up initiated) and references the 201 response code. However, it omits critical safety information like whether submissions are idempotent, rate limits, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains redundancy ('Submit Intake' header restating the action) and includes a boilerplate 'Responses' section that adds minimal value for an AI agent. However, the core operational description is appropriately concise and front-loaded with the essential action and purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high parameter count (7) with zero schema documentation, the description adequately explains the business purpose and outcome but falls short in parameter guidance. It sufficiently covers the tool's intent for a submission workflow but leaves significant gaps in field-level documentation that would aid correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage across 7 parameters. The description vaguely references 'business details and requirements' but fails to map these concepts to specific fields (email, business_name, business_type, needs, tools, budget, urgency) or explain formats, constraints, or the 'exploring' default value for urgency. With zero schema coverage, this compensation is insufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Submit a broker intake request') and resource purpose ('to find the right AI agent for your needs'). It effectively distinguishes from the sibling tool 'get_intake_api_broker_intake__intake_id__get' by emphasizing the submission/creation action versus retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use the tool ('find the right AI agent for your needs') and outlines the post-submission workflow ('confirmation email and a broker specialist will follow up'). However, it lacks explicit guidance on when NOT to use it or direct comparisons to alternatives like direct agent search tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trigger_discovery_cycle_api_registry_discover_postBInspect

Trigger Discovery Cycle

Run a full discovery cycle across all agents in the registry.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. While it mentions 'Trigger' and 'Run,' it does not disclose whether this is an asynchronous operation, what specific actions the discovery performs, potential side effects, rate limits, or what constitutes a successful response beyond the HTTP 200 boilerplate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The first two sentences efficiently convey the tool's purpose. However, the inclusion of the '### Responses' section with HTTP 200 boilerplate wastes space and adds noise irrelevant to AI tool selection, reducing the signal-to-noise ratio.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has zero parameters and no output schema, the description provides minimum viable context. However, for a 'trigger' operation (implied POST), it should ideally explain what the discovery cycle entails or what the expected outcome is, given the lack of annotations or structured output definitions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. According to the scoring rubric, 0 params equals a baseline score of 4. The description does not need to compensate for missing schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Run[s] a full discovery cycle across all agents in the registry,' using a specific verb and resource. The scope 'across all agents' implicitly distinguishes it from the sibling tool discover_single_agent_api_registry_discover__agent_id__post, though it does not explicitly explain what constitutes a 'discovery cycle.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It fails to mention the sibling discover_single_agent_api_registry_discover__agent_id__post or clarify when a full registry scan is preferred over single-agent discovery.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trigger_scan_api_scanner_scan_postBInspect

Trigger Scan

Scan domains for A2A Agent Card discovery.

Checks the given domains for /.well-known/agent.json endpoints and extracts agent capability metadata. Results are stored and can be retrieved via the scan results endpoint.

Responses:

202: Successful Response (Success Response) Content-Type: application/json

ParametersJSON Schema
NameRequiredDescriptionDefault
domainsYes
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the async nature via the '202' response code and mentions the side effect of storing results, but omits auth requirements, rate limits, error behaviors, or validation rules (e.g., max domain count).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with clear action statement. Main description is efficient (3 sentences). The 'Responses' section appears to be auto-generated OpenAPI text that could be integrated more smoothly, but overall minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers the core happy-path functionality (trigger scan, 202 accepted, async retrieval) adequately for a simple trigger tool. However, lacks error handling documentation, authentication context, and sufficient parameter details given the absence of output schema and annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (no parameter descriptions). The description mentions 'Checks the given domains' which loosely maps to the 'domains' parameter, but fails to compensate for the schema gap by explaining expected format (protocol required? bare domains?), constraints, or providing examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Scan') and resource ('domains'), explicitly stating the goal of 'A2A Agent Card discovery'. Distinguishes itself from sibling get_scan_results_api_scanner_results_get by noting that 'Results are stored and can be retrieved via the scan results endpoint', clarifying this tool initiates rather than retrieves.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies a workflow sequence (trigger then retrieve) by referencing the scan results endpoint, but lacks explicit when-to-use guidance (e.g., 'use this for bulk domain scanning vs single agent lookup') and does not mention prerequisites like domain format requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
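The trigger-then-retrieve workflow the review describes can be sketched as follows. This is illustrative only: the `build_scan_request` helper, the payload shape, and the domain-normalization rule are assumptions, since the tool's actual schema publishes no parameter descriptions and does not say whether a protocol scheme is required.

```python
# Hypothetical sketch of preparing a scan request body.
# Field name "domains" is inferred from the review text; everything
# else here is an assumption, not the server's documented behavior.

def build_scan_request(domains):
    """Build the JSON body for the scan trigger (assumed shape)."""
    # The description never says whether a scheme is required, so we
    # normalize to bare domains defensively before sending.
    cleaned = [
        d.removeprefix("https://").removeprefix("http://").rstrip("/")
        for d in domains
    ]
    return {"domains": cleaned}

body = build_scan_request(["https://example.com/", "agents.example.org"])
print(body)  # {'domains': ['example.com', 'agents.example.org']}
```

After a 202 response, an agent would poll the separate scan-results endpoint the description mentions; the polling interval and result shape are likewise undocumented.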

update_payment_profile_api_payments_ecash_profile__agent_id__put (Grade: B)

Update Payment Profile

Update an agent's ecash payment profile.

Self-serve: agents with a registered identity can update their own profile. Admin key still works for platform operations.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema)

Name                 Required  Description  Default
agent_id             Yes
price_sats           No
ecash_enabled        No
accepted_mints       No
cashu_mint_url       No
last_payment_at      No
total_sent_sats      No
payment_required     No
lightning_address    No
total_received_sats  No
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully explains the authorization model (self-serve vs admin), but fails to specify mutation semantics (partial vs full update), idempotency, side effects, or error behaviors for this 10-parameter write operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description includes boilerplate HTTP response documentation ('### Responses: **200**: Successful Response...') which provides no value to an AI agent selecting tools. The front-loaded purpose statement is efficient, but the inclusion of HTTP status metadata wastes space that could describe parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex mutation tool with 10 parameters, zero schema coverage, no annotations, and no output schema, the description is significantly incomplete. It lacks parameter documentation, return value structure, and error handling guidance necessary for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% across 10 parameters (including complex fields like 'accepted_mints' and 'cashu_mint_url'). The description completely fails to compensate by explaining any parameter semantics, valid values, or interdependencies between fields like 'ecash_enabled' and 'cashu_mint_url'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
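One way a server author could close the 0% coverage gap is to add `description` fields to the input schema. The fragment below is illustrative only: the semantics written into the descriptions (e.g. that `ecash_enabled` requires `cashu_mint_url`) are the review's inferred interdependency, not confirmed server behavior. It also shows a coverage check in the spirit of the rubric.

```python
# Illustrative schema fragment with the description fields the review
# says are missing. The stated field semantics are assumptions.
profile_schema = {
    "type": "object",
    "properties": {
        "ecash_enabled": {
            "type": "boolean",
            "description": "Enable ecash payments; assumed to require cashu_mint_url.",
        },
        "cashu_mint_url": {
            "type": "string",
            "description": "URL of the Cashu mint this agent accepts.",
        },
    },
}

# A description-coverage check like the one the rubric applies.
props = profile_schema["properties"]
coverage = sum("description" in p for p in props.values()) / len(props)
print(f"coverage: {coverage:.0%}")  # coverage: 100%
```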

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Update an agent's ecash payment profile' providing a specific verb (update) and resource (ecash payment profile). However, it does not explicitly distinguish from the sibling 'get_payment_profile' tool, relying instead on the tool name verb difference.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides authentication context ('Self-serve: agents with a registered identity can update their own profile. Admin key still works...'), indicating who can invoke it. However, it lacks explicit guidance on when to use this versus other payment-related tools like 'send_ecash' or 'create_melt_quote'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
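Because the description leaves partial-vs-full update semantics unspecified, a defensive caller might send only the fields it intends to change. The sketch below uses the parameter names from the table above; the helper itself, the field types, and the omit-unchanged-fields strategy are assumptions, not documented behavior.

```python
# Hedged sketch: build a minimal update body for the PUT endpoint.
# Allowed field names are taken from the parameter table; omitting
# unchanged fields is a defensive assumption, since the tool does not
# say whether it performs a partial or full update.
def build_profile_update(**changes):
    allowed = {
        "price_sats", "ecash_enabled", "accepted_mints", "cashu_mint_url",
        "last_payment_at", "total_sent_sats", "payment_required",
        "lightning_address", "total_received_sats",
    }
    unknown = set(changes) - allowed
    if unknown:
        raise ValueError(f"unknown profile fields: {sorted(unknown)}")
    # Drop None values so we never accidentally null out a field.
    return {k: v for k, v in changes.items() if v is not None}

body = build_profile_update(price_sats=21, ecash_enabled=True)
print(body)  # {'price_sats': 21, 'ecash_enabled': True}
```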

verify_token_api_payments_ecash_verify_post (Grade: B)

Verify Token

Verify a Cashu ecash token — check format, amount, and spent status.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Parameters (JSON Schema)

Name   Required  Description  Default
token  Yes
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains what aspects are verified (format, amount, spent status) and includes the 200 response documentation, but fails to disclose whether this is a read-only operation, potential error states, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains redundant header text ('Verify Token') that restates the tool name, but the core sentence is efficient. The 'Responses' section adds value given the lack of output schema, though it is somewhat boilerplate. Overall, front-loading is adequate but not optimal.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter verification tool, the description covers the essential purpose and basic response structure. However, given the 0% schema coverage and lack of annotations, it is incomplete regarding parameter details, error handling scenarios, and operational constraints (e.g., rate limits, mint-specific requirements).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%—the 'token' parameter lacks any description field. The description mentions 'Cashu ecash token' in the general explanation, providing implicit context for the parameter, but does not explicitly document the parameter's format, encoding requirements, or provide examples to compensate for the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Verify a Cashu ecash token' and specifies exactly what is validated: 'check format, amount, and spent status.' This specific verb+resource combination, along with the detailed verification scope, effectively distinguishes it from sibling tools like send_ecash or receive_ecash.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., when to verify vs. directly receive), nor does it mention prerequisites such as needing a token from a specific mint or when verification should be performed in the payment workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
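Since neither the schema nor the description documents the `token` parameter's format, a caller can at least apply a cheap pre-flight shape check before invoking the tool. The check below leans on the Cashu serialization convention (tokens begin with a `cashu` version prefix, e.g. `cashuA...`), which comes from the Cashu specification rather than from this tool's documentation; treat it as a heuristic, not a validator.

```python
# Minimal pre-flight heuristic before calling verify_token.
# The "cashu" prefix convention is from the Cashu spec (assumption
# relative to this tool, which documents no token format).
def looks_like_cashu_token(token: str) -> bool:
    return (
        isinstance(token, str)
        and token.startswith("cashu")
        and len(token) > 10
    )

print(looks_like_cashu_token("cashuAeyJ0b2tlbiI6W119"))  # True
print(looks_like_cashu_token("lnbc1..."))                # False
```

A passing check does not mean the token is unspent; only the tool's server-side verification can report spent status.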
