
Server Details

Wormhole - 69 tools for cross-chain transfers and bridge data

Status: Unhealthy
Transport: Streamable HTTP
Repository: junct-bot/wormhole-mcp
GitHub Stars: 0



Available Tools

73 tools
api_v1_native_token_transfer_activity (Grade: A)

Returns a list of values (tx count or notional) of the Native Token Transfer for a emitter and destination chains. — Returns a list of values (tx count or notional) of the Native Token Transfer for a emitter and destination chains.

PREREQUISITE: Call one of [get_last_transactions, api_v1_native_token_transfer_token_list, get_operations] first to discover valid identifiers.

Parameters (JSON Schema)

by (optional): Renders the results using notional or tx count (default is notional).
symbol (required): Symbol of the native-token-transfer token.
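For reference, a hedged sketch of what invoking this tool could look like over an MCP `tools/call` JSON-RPC envelope. The envelope follows the MCP wire shape; the `symbol` value and request `id` are illustrative assumptions, not values taken from the server.

```python
import json

# Hypothetical MCP tools/call request for this tool. Arguments mirror
# the schema above; the values themselves are invented.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "api_v1_native_token_transfer_activity",
        "arguments": {
            "symbol": "W",      # required; discover valid symbols first
            "by": "notional",   # optional; "notional" is the default
        },
    },
}

payload = json.dumps(request)
```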
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses the return type (list of values) and metric options (notional vs tx count), but lacks safety profile (read-only vs destructive), rate limits, or detailed return structure since no output schema exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Suffers from severe redundancy—the opening sentence is duplicated verbatim with only an em-dash separator. This wastes tokens and creates noise. The prerequisite section is well-structured but cannot overcome the duplicated content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description minimally explains what gets returned (list of values). The prerequisite guidance adds completeness for the input side, but the description should clarify whether emitter/destination chains are filter parameters or return data dimensions given the schema lacks these fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with clear descriptions for both `symbol` and `by` parameters. The description adds valuable prerequisite context about discovering valid identifiers, which helps the agent understand how to populate the required `symbol` parameter correctly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States it returns values (tx count/notional) for Native Token Transfers, but the phrase 'for a emitter and destination chains' is ambiguous—unclear if these are input parameters or output dimensions (parameters don't include chains). The duplication of the opening sentence creates confusion about the actual scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent prerequisite section explicitly states to call discovery tools (`get_last_transactions`, `api_v1_native_token_transfer_token_list`, `get_operations`) first to find valid identifiers. This clearly guides the agent on sequencing and dependencies.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

api_v1_native_token_transfer_summary (Grade: C)

Returns a summary of the Native Token Transfer. — Returns a summary of the Native Token Transfer. Returns: { circulatingSupply: number, fullyDilutedValuation: number, image: { large: string, small: string, thumb: string }, ... }.

Parameters (JSON Schema)

coingecko_id (required): coingecko_id of the desired token.
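Since this description inlines its return shape, an agent-side parse can target just the documented keys. A minimal sketch, assuming a payload shaped like the `Returns:` fragment above; the sample values are invented, and the trailing "..." in the description elides further fields that this deliberately ignores.

```python
from dataclasses import dataclass

@dataclass
class NTTSummary:
    circulating_supply: float
    fully_diluted_valuation: float
    image_thumb: str

def parse_summary(payload: dict) -> NTTSummary:
    # Map only the keys the description documents; extra keys are ignored.
    return NTTSummary(
        circulating_supply=payload["circulatingSupply"],
        fully_diluted_valuation=payload["fullyDilutedValuation"],
        image_thumb=payload["image"]["thumb"],
    )

sample = {
    "circulatingSupply": 1000000.0,
    "fullyDilutedValuation": 2000000.0,
    "image": {"large": "l.png", "small": "s.png", "thumb": "t.png"},
}
summary = parse_summary(sample)
```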
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it documents the return data structure (compensating for the missing output schema), it fails to disclose operational traits: whether this is a safe read-only operation, if it requires authentication, rate limits, or caching behavior. It does not mention potential error conditions or what happens if the coingecko_id is invalid.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description wastefully repeats the same sentence twice ('Returns a summary of the Native Token Transfer. — Returns a summary of the Native Token Transfer.'). This redundancy consumes space without adding value. While the subsequent 'Returns:' section is structured usefully, the repetition and em-dash formatting indicate poor editing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter lookup tool, the description adequately compensates for the missing output schema by documenting the return object structure and key fields. However, it lacks domain context explaining what 'Native Token Transfer' refers to (likely a Wormhole-specific concept given the guardian/vaa siblings) and omits behavioral safety information expected when annotations are absent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for its single parameter ('coingecko_id of the desired token'). The description adds no additional semantic context about the parameter (such as expected format, example IDs, or where to obtain valid values), warranting the baseline score of 3 for well-documented schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it returns a summary of the Native Token Transfer and lists specific return fields (circulatingSupply, fullyDilutedValuation, image), which adds some specificity. However, the opening sentence essentially restates the tool name (tautological), and it fails to differentiate from siblings like 'api_v1_native_token_transfer_activity' or explain what constitutes a 'Native Token Transfer' in this domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the five sibling native_token_transfer tools (activity, token_list, top_address, top_holder, transfer_by_time). No mention of prerequisites, filtering capabilities, or data freshness considerations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

api_v1_native_token_transfer_token_list (Grade: C)

[DISCOVERY] Returns the list of supported Native Token Transfer tokens. — Returns the list of supported Native Token Transfer tokens.

Parameters (JSON Schema)

withLinks (required): Specify true/false if the response includes links.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description fails to disclose behavioral traits beyond the basic read operation implied by 'Returns'. It omits details about rate limits, authentication requirements, or response structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Suffers from severe redundancy by repeating the exact same sentence twice ('Returns the list...'). The '[DISCOVERY]' prefix appears to be metadata clutter rather than descriptive content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (single parameter, no nested objects, no output schema), the description is minimally adequate but fails to explain the domain concept of 'Native Token Transfer' or provide context for interpreting the results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds no supplementary information about the 'withLinks' parameter or its intended usage patterns.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a specific verb ('Returns') and resource ('list of supported Native Token Transfer tokens'), distinguishing it from siblings like 'guardians_token_list' or 'get_secured_tokens' by specifying the Native Token Transfer domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternative token listing endpoints (e.g., 'get_secured_tokens', 'guardians_token_list') or prerequisites for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

api_v1_native_token_transfer_top_address (Grade: B)

Returns a list of values (tx count or notional) of the Native Token Transfer for address. — Returns a list of values (tx count or notional) of the Native Token Transfer for address.

PREREQUISITE: Call one of [get_last_transactions, api_v1_native_token_transfer_token_list, get_operations] first to discover valid identifiers.

Parameters (JSON Schema)

by (optional): Renders the results using notional or tx count (default is notional).
symbol (required): Symbol of the native-token-transfer token.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the prerequisite dependency chain and implies a read-only operation ('Returns'), but omits other behavioral traits such as pagination, rate limits, error handling for invalid symbols, or the specific structure of the returned list.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description inefficiently repeats the same sentence verbatim ('Returns a list of values... for address.'). This redundancy wastes space without adding value. The prerequisite section is useful but the overall structure is poor due to the unnecessary duplication.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description partially compensates by stating the tool returns a list of values (tx count/notional), but fails to describe the data structure (e.g., whether it includes addresses, timestamps, or ranking). Given the 100% input schema coverage, the description meets minimum viability but has clear gaps regarding return value semantics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'tx count or notional' which aligns with the `by` parameter options, but adds no additional semantic context (e.g., valid string formats for symbol, case sensitivity) beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description mentions the resource (Native Token Transfer) and return metrics (tx count or notional) but is ambiguous about the 'address' aspect—whether it returns a list of addresses or requires an address parameter. Given the parameter is `symbol`, the relationship to 'address' in the name and description is unclear, and it fails to distinguish from siblings like `api_v1_native_token_transfer_top_holder`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit prerequisite guidance naming specific tools (`get_last_transactions`, `api_v1_native_token_transfer_token_list`, `get_operations`) that must be called first to discover valid identifiers. However, it does not clarify when to use this tool versus similar aggregation tools like `top_holder` or `activity`.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

api_v1_native_token_transfer_top_holder (Grade: C)

Returns a list of volume and chain of the Native Token Transfer for top holders. — Returns a list of volume and chain of the Native Token Transfer for top holders.

Parameters (JSON Schema)

coingecko_id (required): Coingecko_id of the native-token-transfer token.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but omits critical details such as what constitutes 'top' (top 10, 100?), the time period for volume calculations, pagination behavior, or rate limits. While it mentions returning 'volume and chain,' it does not describe the response structure or error states.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of the same sentence repeated verbatim with an em-dash separator ('Returns a list... — Returns a list...'), which constitutes redundant waste rather than efficient communication. While the individual sentence length is appropriate, the duplication violates the principle that every sentence must earn its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a financial data retrieval tool without an output schema or annotations, the description lacks essential context such as the number of holders returned, time ranges for volume data, and specific data fields included. It should compensate for missing structured metadata by explaining the return format, but provides only minimal resource identification.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the single coingecko_id parameter, which is documented as 'Coingecko_id of the native-token-transfer token.' The tool description does not add supplementary context about valid ID formats, example values, or lookup methods, meeting the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool 'Returns a list of volume and chain of the Native Token Transfer for top holders,' identifying the verb and resource. However, it fails to distinguish this tool from the sibling api_v1_native_token_transfer_top_address, leaving ambiguity about the difference between 'holders' and 'addresses'. The identical repetition after the em-dash adds no clarifying value.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus sibling alternatives like api_v1_native_token_transfer_activity or api_v1_native_token_transfer_top_address. There are no stated prerequisites, conditions for use, or exclusion criteria to help the agent determine appropriate invocation context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

api_v1_native_token_transfer_transfer_by_time (Grade: B)

Returns a list of values (tx count or notional) of the Native Token Transfer for a emitter and destination chains. — Returns a list of values (tx count or notional) of the Native Token Transfer for a emitter and destination chains.

PREREQUISITE: Call one of [get_last_transactions, api_v1_native_token_transfer_token_list, get_operations] first to discover valid identifiers.

Parameters (JSON Schema)

by (optional): Renders the results using notional or tx count (default is notional).
to (required): To date, supported format 2006-01-02T15:04:05Z07:00
from (required): From date, supported format 2006-01-02T15:04:05Z07:00
symbol (required): Symbol of the native-token-transfer token.
timeSpan (required): Time Span, supported values: [1h, 1d, 1mo, 1y].
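The `from`/`to` format string is a Go reference-time layout equivalent to RFC 3339. A sketch of assembling valid arguments for this tool in Python, assuming UTC timestamps; the symbol value is illustrative.

```python
from datetime import datetime, timezone

VALID_TIMESPANS = {"1h", "1d", "1mo", "1y"}  # per the schema above

def to_rfc3339(dt: datetime) -> str:
    # Render an aware datetime in the UTC ("Z") form of the layout
    # 2006-01-02T15:04:05Z07:00 shown in the schema.
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

def transfer_by_time_args(symbol: str, start: datetime, end: datetime,
                          time_span: str) -> dict:
    if time_span not in VALID_TIMESPANS:
        raise ValueError(f"timeSpan must be one of {sorted(VALID_TIMESPANS)}")
    return {
        "symbol": symbol,
        "from": to_rfc3339(start),
        "to": to_rfc3339(end),
        "timeSpan": time_span,
    }

args = transfer_by_time_args(
    "W",  # illustrative; discover real symbols via the token_list tool
    datetime(2024, 1, 1, tzinfo=timezone.utc),
    datetime(2024, 2, 1, tzinfo=timezone.utc),
    "1d",
)
```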
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It discloses that the tool returns a list and implies it's read-only via 'Returns,' but lacks details on response format, pagination, error states, or rate limiting that would help an agent handle the output correctly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description repeats the exact same sentence twice ('Returns a list of values...'), which is wasteful and confusing rather than concise. While the PREREQUISITE section is well-structured, the repetition significantly degrades quality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The prerequisite workflow is well-documented given the complexity of identifier discovery, but the description fails to clarify the output structure (no output schema exists) and the parameter mismatch regarding chains vs. time ranges leaves gaps in understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% coverage, the description adds confusing context by referencing 'emitter and destination chains' which suggests chain ID parameters that don't exist in the schema (which uses from/to for dates). This creates a mismatch between the narrative description and actual temporal parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Returns a list of values (tx count or notional)' which specifies the verb and metrics, but it confusingly mentions 'emitter and destination chains' which doesn't align with the time-based filtering parameters (from/to dates) in the schema.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent explicit guidance via the 'PREREQUISITE' section, which names three specific sibling tools (`get_last_transactions`, `api_v1_native_token_transfer_token_list`, `get_operations`) that must be called first to discover valid identifiers, clearly establishing the dependency chain.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

api_v1_top_100_corridors (Grade: C)

Returns a list of the top 100 tokens, sorted in descending order by the number of transactions. — Returns a list of the top 100 tokens, sorted in descending order by the number of transactions. Returns: { corridors: { emitter_chain: number, target_chain: number, token_address: string, token_chain: number, txs: number }[] }.

Parameters (JSON Schema)

timeSpan (optional): Time span, supported values: 2d and 7d (default is 2d).
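Because this description inlines the `corridors` return shape, an agent can parse it defensively and verify the documented descending sort rather than assume it. A sketch with invented sample data:

```python
from dataclasses import dataclass

@dataclass
class Corridor:
    emitter_chain: int
    target_chain: int
    token_address: str
    token_chain: int
    txs: int

def parse_corridors(payload: dict) -> list[Corridor]:
    corridors = [Corridor(**c) for c in payload.get("corridors", [])]
    # The description promises descending order by transaction count;
    # check it instead of trusting it.
    if any(a.txs < b.txs for a, b in zip(corridors, corridors[1:])):
        raise ValueError("corridors not sorted descending by txs")
    return corridors

sample = {"corridors": [
    {"emitter_chain": 2, "target_chain": 1, "token_address": "0xabc",
     "token_chain": 2, "txs": 120},
    {"emitter_chain": 1, "target_chain": 5, "token_address": "0xdef",
     "token_chain": 1, "txs": 40},
]}
top = parse_corridors(sample)
```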
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It partially compensates by disclosing the return structure (the corridors array schema), which is not present in structured fields. However, it omits operational details like rate limits, authentication requirements, or whether the data is cached/real-time.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description suffers from severe redundancy, repeating the exact same sentence twice before appending the return type documentation. This repetition wastes tokens without adding value and indicates poor editing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the return structure is documented inline (compensating for the lack of structured output schema), the description fails to clarify the domain-specific 'corridor' concept (emitter+target chain pairs) versus simple token listings. Given the single well-documented parameter, this is minimally adequate but could clarify terminology.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the single timeSpan parameter, establishing a baseline score of 3. The description text adds no additional semantic context about the parameter (e.g., examples of '2d' format, impact on aggregation window) but does not need to compensate for schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies a concrete resource (top 100 tokens) and sort order (descending by transactions), but creates confusion by calling the resource 'tokens' when the tool name and return structure indicate 'corridors' (cross-chain token flows with emitter/target chain pairs). It fails to differentiate from siblings like 'get_top_chain_pairs_by_num_transfers' or 'get_top_assets_by_volume'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the numerous similar ranking/listing siblings (e.g., 'get_top_assets_by_volume', 'x_chain_activity_tops'). There is no mention of the optional timeSpan parameter's impact on data relevance or caching behavior.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

application_activity (Grade: C)

Search for a specific period of time the number of transactions and the volume per application. — Search for a specific period of time the number of transactions and the volume per application.

Parameters (JSON Schema)

to (required): To date, supported format 2006-01-02T15:04:05Z07:00
from (required): From date, supported format 2006-01-02T15:04:05Z07:00
appIds (optional): Search by appId
timespan (required): Time span, supported values: 1d, 1mo and 1y
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. It mentions returning transaction counts and volume, but provides no information about pagination, rate limits, authentication requirements, or whether this is a read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of the exact same sentence repeated twice separated by an em-dash. This redundancy wastes tokens without adding value. A single concise sentence would earn a higher score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage, the description is minimally adequate. However, with no output schema provided, the description misses the opportunity to describe the return structure (e.g., whether it returns aggregated totals, time-series data, or application metadata).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description references 'specific period of time' (aligning with timespan/from/to) but adds no semantic detail about the 'appIds' parameter or date format specifics beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (Search) and resource (number of transactions and volume per application). It identifies the scope is 'per application' which distinguishes it from general transaction searches, though it could more explicitly differentiate from sibling 'x_chain_activity'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus alternatives like 'x_chain_activity' or 'get_operations'. No mention of prerequisites or when the tool is inappropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

circulating_supply (Grade: C)

Get W token circulation supply. — Get W token circulation supply.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry full behavioral disclosure. It only indicates a read operation ('Get') but fails to describe the return format, data freshness, or potential errors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description repeats the same sentence twice ('Get W token circulation supply'), which is wasteful redundancy rather than efficient conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of closely related siblings (total_supply, supply_info) and lack of output schema, the description should explain the specific semantics of 'circulation supply' and return value structure, which it omits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, establishing a baseline score of 4. The description appropriately requires no parameter explanation.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool retrieves 'W token circulation supply' which adds specificity (W token) beyond the tool name, but fails to define what 'circulation supply' means or how it differs from the sibling 'total_supply' tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus siblings like 'total_supply' or 'supply_info', nor any mention of prerequisites or error conditions.

find_address_by_id (grade C)

Lookup an address — Lookup an address Returns: { data: { vaas: { digest: string, emitterAddr: string, emitterChain: unknown, emitterNativeAddr: string, guardianSetIndex: number, id: string, ... }[] }, pagination: { next: string } }.

Parameters (JSON Schema)
- page (optional): Page number. Starts at 0.
- address (required): address
- pageSize (optional): Number of elements per page.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the raw return JSON structure inline (data.vaas array with pagination), which helps agents understand the data shape. However, it omits auth requirements, rate limits, or whether the lookup is case-sensitive. No contradiction with annotations since none are present.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The repetition of 'Lookup an address' is wasteful rather than concise. The em-dash separation is awkward. While the description is short, not every sentence earns its place, given the immediate repetition.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema in structured metadata, the description compensates partially by including inline return documentation. However, it fails to explain the domain relationship between the input 'address' and the returned 'vaas', missing critical context for an agent to understand what this tool actually retrieves.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (page, address, pageSize all described), establishing a baseline of 3. The description adds the return schema context which implicitly clarifies that the 'address' parameter filters VAAs, but provides no additional semantic depth on address formats or valid values beyond what the schema states.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description repeats 'Lookup an address' verbatim twice with an em-dash, which is effectively tautological and wastes space. While the verb 'Lookup' and resource 'address' are present, it fails to specify what kind of address (emitter, guardian, contract?) or distinguish from siblings like 'find_vaas_by_emitter', leaving ambiguity given the tool returns VAAs, not address metadata.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus the many sibling lookup tools (e.g., 'find_vaas_by_emitter', 'find_vaas_by_chain'). Does not clarify prerequisites, required address formats, or pagination strategies despite the presence of page parameters.
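
The missing pagination strategy can at least be handled client-side. Below is a minimal sketch; `fetch_page` is a hypothetical stand-in for the actual find_address_by_id tool call, and the response shape (`data.vaas` plus `pagination.next`) is taken from the return structure quoted above.

```python
def collect_all_vaas(fetch_page, address, page_size=50):
    """Walk page numbers until the server reports no further page.

    fetch_page is a stand-in for the find_address_by_id tool call;
    it must return a dict shaped like the documented response:
    {"data": {"vaas": [...]}, "pagination": {"next": ...}}.
    """
    results = []
    page = 0  # page numbering starts at 0 per the schema
    while True:
        resp = fetch_page(address=address, page=page, pageSize=page_size)
        # accumulate this page's VAAs
        results.extend(resp.get("data", {}).get("vaas", []))
        # an empty/missing pagination.next cursor signals the last page
        if not resp.get("pagination", {}).get("next"):
            break
        page += 1
    return results
```

A stubbed `fetch_page` returning canned pages is enough to exercise the loop; in practice the real tool call replaces it.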

find_all_vaas (grade C)

Returns all VAAs. Output is paginated and can also be be sorted. — Returns all VAAs. Output is paginated and can also be be sorted. Returns: { data: { digest: string, emitterAddr: string, emitterChain: number, emitterNativeAddr: string, guardianSetIndex: number, id: string, ... }[], pagination: { next: string } }.

Parameters (JSON Schema)
- page (optional): Page number.
- txHash (optional): Transaction hash of the VAA
- pageSize (optional): Number of elements per page.
- sortOrder (optional): Sort results in ascending or descending order.
- parsedPayload (optional): include the parsed contents of the VAA, if available

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It usefully documents the return structure inline (data array with specific fields, pagination object) compensating for the lack of output schema. However, it omits safety characteristics (read-only vs destructive), rate limits, or performance implications of 'all' queries.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Suffers from obvious repetition: 'Returns all VAAs. Output is paginated and can also be be sorted.' appears twice separated by an em-dash. Contains typo ('be be'). The Returns block is dense but necessary given no output schema; however, the duplication wastes tokens without adding value.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 5-parameter list endpoint with good schema coverage. Inline return documentation provides necessary output contract. However, given the crowded sibling namespace (many find_vaa* variants), the description should explicitly state this is the unfiltered entry point versus filtered alternatives.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (page, txHash, pageSize, sortOrder, parsedPayload all documented). The description mentions pagination and sorting (mapping to page/pageSize/sortOrder) but doesn't add semantic context beyond the schema; for example, it doesn't explain that txHash narrows the 'all' set to a specific transaction or that parsedPayload includes decoded contents.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States it 'Returns all VAAs' matching the tool name, identifying the resource and action. However, it doesn't distinguish from siblings like find_vaas_by_chain or find_vaas_by_emitter (which filter) vs this one (which ostensibly lists all), nor does it clarify that the txHash parameter enables specific filtering despite the 'all' label.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this broad listing tool versus the specific filtering siblings (e.g., find_vaas_by_chain). Mentions pagination and sorting capabilities but doesn't advise on page size limits or when to paginate versus filter.

find_delegate_observations_by_chain (grade C)

Returns delegate observations for a given chain, sorted in descending timestamp order. — Returns delegate observations for a given chain, sorted in descending timestamp order.

Parameters (JSON Schema)
- page (optional): Page number
- chain_id (required): Chain ID
- pageSize (optional): Number of elements per page
- sortOrder (optional): ASC or DESC

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the default sort order (descending timestamp) but fails to disclose whether this is a safe read-only operation, pagination limits, rate limiting, or the structure/format of the returned observations.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of the exact same sentence repeated twice with an em-dash separator. This is redundant and wasteful, suggesting a formatting error or copy-paste mistake. A single sentence would suffice.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high density of similar sibling tools (over 10 'find_observations' variants) and lack of output schema or annotations, the description is insufficient. It does not explain what constitutes a 'delegate observation,' how results are paginated, or how this tool fits into the broader observation-querying workflow.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (page, chain_id, pageSize, sortOrder all documented). The description mentions 'sorted in descending timestamp order,' which hints at the default behavior for the sortOrder parameter, but adds no further semantic detail about parameter usage, valid formats, or interaction between pagination parameters.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the verb (Returns) and resource (delegate observations for a given chain) clearly. However, it fails to distinguish this tool from the sibling 'find_observations_by_chain' (without 'delegate') or explain what distinguishes a 'delegate observation' from a regular observation. The sentence is also unnecessarily duplicated, which reduces professionalism though not clarity.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the numerous sibling alternatives (find_delegate_observations_by_emitter, find_delegate_observations_by_guardian, find_observations_by_chain, etc.). The agent cannot determine which filtering approach (chain vs emitter vs guardian) is appropriate for a given user query.

find_delegate_observations_by_emitter (grade C)

Returns delegate observations for a given chain and emitter, sorted in descending timestamp order. — Returns delegate observations for a given chain and emitter, sorted in descending timestamp order.

Parameters (JSON Schema)
- page (optional): Page number
- chain_id (required): Chain ID
- pageSize (optional): Number of elements per page
- sortOrder (optional): ASC or DESC
- emitter_address (required): Emitter address

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but only discloses default sort order (descending). It omits safety profile (read-only status), pagination behavior implications, rate limits, or the semantic meaning of 'delegate' in this context.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of the exact same sentence duplicated with an em-dash separator. This redundancy wastes tokens without adding value, violating the principle that every sentence must earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks domain context explaining what 'delegate observations' are, missing output structure hints (no output schema exists), and fails to clarify the relationship between this tool and the numerous similar sibling tools available on this server.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, documenting all 5 parameters including sortOrder options (ASC/DESC). The description adds no parameter-specific guidance beyond the schema, warranting the baseline score of 3.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool returns delegate observations for a specific chain and emitter with sorting, but the sentence is repeated verbatim, and it fails to distinguish the tool from siblings like 'find_observations_by_emitter' (the difference between 'observations' and 'delegate observations' is unclear) or other 'find_delegate_observations_by_*' variants.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus 'find_observations_by_emitter' or other delegate observation finders (by_chain, by_guardian, by_sequence). No prerequisites or filtering strategies mentioned despite 5 available parameters.

find_delegate_observations_by_guardian (grade C)

Returns delegate observations for a given chain, emitter, sequence and delegate guardian address. — Returns delegate observations for a given chain, emitter, sequence and delegate guardian address.

Parameters (JSON Schema)
- page (optional): Page number
- chain_id (required): Chain ID
- pageSize (optional): Number of elements per page
- sequence (required): Sequence
- sortOrder (optional): ASC or DESC
- emitter_address (required): Emitter address
- guardian_address (required): Delegate guardian address

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but offers minimal behavioral context. While 'Returns' implies a read-only operation, there is no disclosure of pagination mechanics (despite page/pageSize parameters), rate limits, sorting behavior, or what constitutes a 'delegate observation' versus standard observations.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of the exact same sentence repeated twice separated by an em-dash, representing significant redundancy without adding information. While short in absolute terms, the repetition wastes tokens and suggests editing oversight rather than intentional brevity.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a specialized query tool with 7 parameters (including pagination controls) and no output schema, the description is inadequate. It fails to explain return structure, what delegate observations represent, or how pagination behaves, leaving critical gaps given the complexity of the Wormhole/guardian domain implied by sibling tools.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description reinforces the four required identifier parameters by listing them explicitly ('chain, emitter, sequence and delegate guardian address'), providing slight semantic grouping value, but does not explain address formats, pagination logic, or the meaning of 'delegate' in this context.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (delegate observations) and the specific composite key required (chain, emitter, sequence, guardian address). However, it does not explicitly differentiate this tool from siblings like find_delegate_observations_by_chain or find_delegate_observations_by_sequence, which appear to offer broader/narrower query capabilities.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this specific guardian-filtered variant versus the other delegate observation tools (by_chain, by_emitter, by_sequence) or the general find_observations tool. No mention of prerequisites or typical use cases.

find_delegate_observations_by_sequence (grade C)

Returns delegate observations for a given chain, emitter and sequence. — Returns delegate observations for a given chain, emitter and sequence.

Parameters (JSON Schema)
- page (optional): Page number
- chain_id (required): Chain ID
- pageSize (optional): Number of elements per page
- sequence (required): Sequence
- sortOrder (optional): ASC or DESC
- emitter_address (required): Emitter address

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal information. While 'Returns' implies a read-only operation, the description omits pagination behavior details (despite page/pageSize parameters), does not explain what constitutes a 'delegate observation,' and provides no indication of error handling or rate limiting.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description repeats the exact same sentence twice separated by an em-dash ('Returns delegate observations...'), which is redundant and wastes tokens. While the core statement is appropriately sized for a lookup tool, this repetition violates the principle that every sentence must earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description should provide richer context about the return type and domain concepts. It fails to explain what 'delegate observations' are, how they differ from standard observations, or how to interpret the paginated results. The high schema coverage handles parameter documentation, but the description leaves significant gaps in overall tool understanding.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, documenting all six parameters including pagination controls (page, pageSize, sortOrder) and required identifiers (chain_id, emitter_address, sequence). The description mentions 'chain, emitter and sequence' but adds no semantic value regarding formats, constraints, or relationships beyond what the schema already provides, warranting the baseline score of 3.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a clear verb ('Returns') and resource ('delegate observations'), and specifies the exact lookup pattern using 'chain, emitter and sequence' which distinguishes it from sibling tools like find_delegate_observations_by_chain. However, it fails to clarify what distinguishes 'delegate observations' from the related 'observations' family of tools (e.g., find_observations_by_sequence) present in the sibling list.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not explain the relationship between the delegate observation tools and regular observation tools, nor does it indicate prerequisites such as needing valid chain_id/emitter_address combinations or when pagination is necessary.

find_duplicated_vaa_by_id (grade C)

Find duplicated VAA by ID. — Find duplicated VAA by ID. Returns: { data: { digest: string, emitterAddr: string, emitterChain: number, emitterNativeAddr: string, guardianSetIndex: number, id: string, ... }[], pagination: { next: string } }.

Parameters (JSON Schema)
- seq (required): sequence of the VAA
- emitter (required): address of the emitter
- chain_id (required): id of the blockchain

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It provides detailed return structure information (listing specific fields like digest, emitterAddr, pagination, etc.), which adds value. However, it lacks operational context such as rate limits, permission requirements, or why duplicates might occur in the system.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description opens with immediate repetition ('Find duplicated VAA by ID. — Find duplicated VAA by ID.'), which wastes space without adding value. While the appended return structure is useful, the opening redundancy and lack of clear separation between description and return documentation hurt the structure.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter tool with 100% schema coverage but no annotations, the description partially compensates by providing the output structure shape. However, it lacks domain context (what is a VAA, what is the guardian set index) and error handling details, leaving meaningful gaps for an agent trying to understand the Wormhole domain.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (chain_id, emitter, seq are all documented). The description text adds no parameter-specific context, syntax guidance, or format requirements beyond what the schema already provides, warranting the baseline score for high-coverage schemas.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with 'Find duplicated VAA by ID' repeated twice with an em-dash, which is essentially a tautology of the tool name with no added specificity. While 'duplicated' hints at the tool's scope, it fails to distinguish from the sibling tool 'find_vaa_by_id' or explain what constitutes a 'duplicated' VAA in this context.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the similar 'find_vaa_by_id' sibling, or under what conditions duplicate VAAs might exist. There are no prerequisites, exclusions, or workflow recommendations mentioned.

find_global_transaction_by_id (grade C)

Find a global transaction by VAA ID Global transactions is a logical association of two transactions that are related to each other by a unique VAA ID. The first transaction is created on the origin chain when the VAA is emitted. The second transaction is created on the destination chain when the VAA is redeemed. If the response only contains an origin tx the VAA was not redeemed. — Find a global transaction by VAA ID Global transactions is a logical association of two transactions that are related to each other by a unique VAA ID. The first transaction is created on the origin chain when the

Parameters (JSON Schema)
- seq (required): sequence of the VAA
- emitter (required): address of the emitter
- chain_id (required): id of the blockchain

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully explains the two-phase nature of global transactions (origin emission vs. destination redemption) and implied lifecycle states. However, it omits critical operational details like read-only status, rate limits, or error conditions that annotations would typically cover.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description suffers from a severe editing error where the text is duplicated ('— Find a global transaction by VAA ID Global transactions...') and truncated. This repetition makes it inefficient and confusing rather than concise.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (cross-chain transaction association) and lack of output schema, the description adequately explains the domain model (origin vs. destination transactions). However, the duplication error and lack of sibling differentiation prevent it from being fully complete.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (chain_id, emitter, seq are well-documented). The description mentions querying by 'VAA ID' but does not explicitly clarify that the three parameters compose this VAA ID, leaving a slight semantic gap despite the schema's adequacy.
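
The semantic gap noted here is small but real: the three inputs jointly compose the VAA ID. In common Wormhole tooling that ID is conventionally written as `chain/emitter/sequence`; the sketch below assumes that convention holds for this server, which its docs do not confirm.

```python
def vaa_id(chain_id: int, emitter: str, seq: int) -> str:
    """Compose the conventional 'chain/emitter/sequence' VAA identifier.

    Assumes the common Wormhole convention: the emitter is the hex
    emitter address, lowercased, without a '0x' prefix.
    """
    return f"{chain_id}/{emitter.lower().removeprefix('0x')}/{seq}"
```

Stating this composition explicitly in the tool description would close the gap without changing the schema.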

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the core action ('Find a global transaction') and defines the resource (logical association of origin/destination transactions by VAA ID). However, it fails to distinguish this tool from siblings like `get_transaction_by_id` or `find_vaa_by_id`, leaving ambiguity about when to use which.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides one useful behavioral hint ('If the response only contains an origin tx the VAA was not redeemed') for interpreting partial results. However, it lacks any explicit guidance on when to use this tool versus the numerous related query tools (e.g., `find_transaction_by_id`, `find_vaa_by_id`).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


find_observations (C)

Returns all observations, sorted in descending timestamp order. — Returns all observations, sorted in descending timestamp order.

Parameters (JSON Schema)
- page (optional): Page number.
- txHash (optional): Transaction hash of the Observations.
- pageSize (optional): Number of elements per page.
- sortOrder (optional): Sort results in ascending or descending order.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the default sort field (timestamp) and direction (descending), but omits pagination behavior details, rate limiting, authentication requirements, or what constitutes an 'observation' in this domain.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains a clear duplication error, repeating the same sentence twice separated by an em-dash. This structural defect wastes tokens and suggests poor formatting, reducing the score despite the brevity of the content itself.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complex ecosystem of 60+ sibling tools and no output schema, the description is insufficient. It fails to explain what 'observations' represent, how pagination interacts with the 'all' scope, or how this tool relates to the VAAs (Verified Action Approvals) and transaction tools in the same API surface.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema description coverage, the description adds valuable context by specifying that sorting is by 'timestamp'—information absent from the sortOrder parameter description which only mentions ascending/descending order without specifying the sort field. This clarifies the default behavior and sort key.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the verb (Returns) and resource (observations) clearly, including the default sort order. However, with numerous sibling filtering tools (find_observations_by_chain, by_emitter, by_id, by_sequence), the description fails to clarify when to use this broad 'all' variant versus the specific filters.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus the sibling observation filters is provided. While the word 'all' implies unfiltered retrieval, there is no discussion of pagination strategies, when to use txHash filtering versus other endpoints, or prerequisites for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
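Because the description leaves the pagination strategy entirely to the caller, an agent must guess how to exhaust the result set. One defensible loop, sketched against a hypothetical `client.find_observations` wrapper (the empty-page stop condition is itself an assumption, since the description never says how the last page is signalled):

```python
def fetch_all_observations(client, tx_hash=None, page_size=50):
    """Page through find_observations until an empty page comes back.
    `client.find_observations` is a hypothetical wrapper around the tool;
    stopping on an empty page is an assumption, not documented behavior."""
    page = 0
    results = []
    while True:
        batch = client.find_observations(
            page=page, pageSize=page_size, sortOrder="DESC", txHash=tx_hash
        )
        if not batch:
            break
        results.extend(batch)
        page += 1
    return results
```

Documenting this termination condition in the tool description would spare every agent this guesswork.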

find_observations_by_chain (C)

Returns all observations for a given blockchain, sorted in descending timestamp order. — Returns all observations for a given blockchain, sorted in descending timestamp order.

Parameters (JSON Schema)
- page (optional): Page number.
- pageSize (optional): Number of elements per page.
- sortOrder (optional): Sort results in ascending or descending order.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry the full burden. It mentions results are sorted in 'descending timestamp order,' but this contradicts the 'sortOrder' parameter which allows ascending or descending. It fails to disclose pagination behavior, rate limits, or whether this is a read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is exactly duplicated with an em-dash separator ('Returns all observations... — Returns all observations...'), indicating a structural error. While the individual sentence is concise, the repetition wastes tokens and appears unprofessional.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lacks output schema information and fails to resolve the critical gap between the claimed 'by_chain' functionality and the missing chain parameter in the input schema. For a query tool with multiple filtering siblings, it provides insufficient context to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% description coverage (baseline 3), the description undermines it by claiming a fixed descending sort order when the schema explicitly allows a configurable sortOrder. It also references filtering by 'blockchain' without indicating how that parameter is supplied, creating confusion.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool returns observations filtered by blockchain, which aligns with the tool name. However, it does not differentiate from similar sibling tools like 'find_observations' or 'find_delegate_observations_by_chain', and the claim of filtering by blockchain contradicts the input schema which lacks a chain parameter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'find_observations_by_emitter' or 'find_observations_by_id', nor does it mention prerequisites such as how the blockchain is specified.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_observations_by_emitter (C)

Returns all observations for a specific emitter address, sorted in descending timestamp order. — Returns all observations for a specific emitter address, sorted in descending timestamp order.

Parameters (JSON Schema)
- page (optional): Page number.
- pageSize (optional): Number of elements per page.
- sortOrder (optional): Sort results in ascending or descending order.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description must carry the full burden of behavioral disclosure and successfully notes the default descending timestamp sort order. However, it omits other critical context such as pagination limits, rate limiting, or the structure/format of the returned observations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness1/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of the exact same sentence repeated twice with an em-dash separator, constituting a 100% redundancy that fails to earn its place or provide any additional information. This structure is wasteful and unprofessional.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the missing emitter address parameter in the schema and the absence of an output schema, the description fails to adequately document the core filtering mechanism or explain what data structure is returned. The repetition error further detracts from the completeness of the documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% description coverage for its three parameters, the description references filtering by 'specific emitter address' which is not present in the provided input schema, creating a confusing disconnect. The description adds no additional semantic value to the existing pagination parameters beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns observations filtered by a specific emitter address and sorted by timestamp, which distinguishes it from siblings like `find_observations_by_chain`. However, the exact duplication of the sentence reduces clarity and suggests an editing error.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as `find_observations_by_chain` or `find_delegate_observations_by_emitter`. It also fails to mention prerequisites, such as requiring a valid emitter address to function.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_observations_by_id (C)

Find a specific observation. — Find a specific observation.

Parameters (JSON Schema)
- page (optional): Page number.
- pageSize (optional): Number of elements per page.
- sortOrder (optional): Sort results in ascending or descending order.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to explain what constitutes an observation, what data is returned, or why pagination controls are present for a 'specific' lookup. It implies a read-only operation ('Find') but provides no details on rate limits or auth requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of the same phrase repeated with an em-dash ('Find a specific observation. — Find a specific observation.'), which is redundant and wastes tokens without adding information. While brief, the repetition indicates poor structure rather than efficient conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the crowded sibling namespace (multiple find_observations variants) and the confusing mismatch between the tool name (find_by_id) and the actual pagination parameters, the description should explain this tool's specific role and parameter logic. With no output schema or annotations, the description is insufficiently complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for its three pagination parameters (page, pageSize, sortOrder), establishing a baseline of 3. The description adds no additional semantic context (e.g., default page sizes, sortable fields), but the schema descriptions are sufficient for basic understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description tautologically restates the tool name ('Find a specific observation') without explaining what an 'observation' is or distinguishing this from siblings like find_observations, find_observations_by_chain, etc. The name implies lookup by ID, yet the schema contains only pagination parameters, which the description fails to reconcile.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the numerous sibling observation-finding tools (find_observations, find_observations_by_chain, find_observations_by_emitter, etc.). No explanation for why pagination parameters exist for a tool ostensibly designed to find a specific item by ID.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_observations_by_sequence (C)

Find observations identified by emitter chain, emitter address and sequence. — Find observations identified by emitter chain, emitter address and sequence.

Parameters (JSON Schema)
- page (optional): Page number.
- pageSize (optional): Number of elements per page.
- sortOrder (optional): Sort results in ascending or descending order.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the lookup mechanism but omits whether results are paginated (despite pagination parameters existing), what constitutes an 'observation,' or any error conditions. It implies a read-only operation but does not confirm safety or idempotency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of the same sentence repeated verbatim, separated only by an em-dash. This is redundant and wasteful despite being brief. Every sentence should earn its place, and the repetition adds no value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of Wormhole observations and the lack of an output schema, the description is insufficient. It does not explain the return structure, the relationship between observations and VAAs, or the significance of the sequence number in the context of the emitter chain/address pair.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the input schema shows only generic pagination parameters (page, pageSize, sortOrder) with 100% coverage, the description adds critical domain-specific semantics by identifying the actual filter criteria: emitter chain, emitter address, and sequence. This compensates for the schema's lack of domain parameter documentation, though the mismatch creates confusion.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the core action (Find observations) and the composite identifier (emitter chain, emitter address, sequence), which clarifies the tool's purpose. However, it fails to differentiate from siblings like `find_observations_by_emitter` or `find_observations_by_chain`, and the exact repetition of the sentence reduces clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the numerous sibling observation-finding tools (e.g., `find_observations_by_id`, `find_observations_by_emitter`). There are no prerequisites, exclusions, or alternative recommendations mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_relay_by_vaa_id (C)

Get a specific relay information by chainID, emitter address and sequence. — Get a specific relay information by chainID, emitter address and sequence. Returns: { completedAt: string, data: { delivery: { budget: string, execution: { detail: unknown, gasUsed: unknown, refundStatus: unknown, revertString: unknown, status: unknown, transactionHash: unknown }, maxRefund: string, relayGasUsed: number, targetChainDecimals: number }, fromTxHash: string, instructions: { encodedExecutionInfo: string, extraReceiverValue: { _hex: string, _isBigNumber: boolean }, refundAddress: string, refundChainId: numb

Parameters (JSON Schema)

No parameters.

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist to indicate safety or destructiveness. The description compensates partially by documenting the return value structure (nested delivery, execution, and instructions objects), though this information is truncated mid-word ('numb') and improperly embedded in the description text rather than a formal output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is inefficiently structured with immediate repetition ('Get a specific relay information...' appears twice consecutively). The appended return value documentation is a raw JSON blob that is both truncated and poorly formatted, consuming space without providing complete information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of relay data (nested execution details, refund status, delivery budgets) and the empty input schema, the description inadequately explains invocation requirements. The return documentation, while valuable, is incomplete due to truncation and should reside in a proper output schema rather than the description field.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
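As a concrete alternative to embedding the return shape in prose, the legible fields could be expressed as a formal output schema. A partial sketch, limited to the fields that survive the truncation and leaving fields typed 'unknown' in the description unconstrained:

```python
# Partial JSON Schema for find_relay_by_vaa_id's documented return value.
# Only fields legible in the tool description are included; fields the
# description types as 'unknown' are left unconstrained ({} / bare object).
relay_output_schema = {
    "type": "object",
    "properties": {
        "completedAt": {"type": "string"},
        "fromTxHash": {"type": "string"},
        "data": {
            "type": "object",
            "properties": {
                "delivery": {
                    "type": "object",
                    "properties": {
                        "budget": {"type": "string"},
                        "maxRefund": {"type": "string"},
                        "relayGasUsed": {"type": "number"},
                        "targetChainDecimals": {"type": "number"},
                        # detail, gasUsed, refundStatus, etc. are 'unknown'
                        "execution": {"type": "object"},
                    },
                },
            },
        },
    },
}
```

Moving this into the server's declared output schema would let the description shrink to one sentence while giving agents a machine-checkable contract.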

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 0 parameters (which typically warrants a baseline of 4), the description actively misleads by referencing three specific input parameters (chainID, emitter address, sequence) that do not appear in the schema. This creates confusion about whether the tool requires these inputs and how they should be formatted or transmitted.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States it retrieves 'relay information' using 'chainID, emitter address and sequence', which distinguishes it from sibling tools like find_vaa_by_id (which finds VAAs, not relays). However, the description contradicts the empty input schema by claiming specific parameters are required, creating uncertainty about invocation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like find_vaa_by_id, find_address_by_id, or find_all_vaas. Does not clarify the relationship between VAA IDs and relay data, nor explain how to provide the referenced identifiers given the empty parameter schema.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_vaa_by_id (C)

Find a VAA by ID. — Find a VAA by ID. Returns: { data: { digest: string, emitterAddr: string, emitterChain: number, emitterNativeAddr: string, guardianSetIndex: number, id: string, ... }[], pagination: { next: string } }.

Parameters (JSON Schema)
- seq (required): sequence of the VAA.
- emitter (required): address of the emitter.
- chain_id (required): id of the blockchain.
- parsedPayload (optional): include the parsed contents of the VAA, if available.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It documents the return structure (including the data array and pagination object) which is necessary since no output schema exists, but it fails to explain why a lookup 'by ID' returns an array or disclose read-only/safety characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description opens with the exact same phrase twice separated by an em-dash ('Find a VAA by ID. — Find a VAA by ID.'), which is redundant. While the return value documentation is useful, the repetition prevents a higher score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the inclusion of return type documentation is appropriate and necessary. Parameter coverage is complete via the schema. However, the description leaves unexplained the semantic mismatch between a singular 'ID' lookup and the returned array structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage (all 4 parameters documented), establishing a baseline of 3. The description misses an opportunity to add value by clarifying that the 'ID' mentioned in the title is actually a composite of the three required parameters (chain_id, emitter, seq) rather than a single field.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
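An agent that already holds a preassembled identifier would need to split it back into the schema's three required fields. A hypothetical helper (the slash-separated `chain/emitter/sequence` layout is assumed from common explorer formats, not documented by this tool):

```python
def split_vaa_id(vaa_id: str) -> dict:
    """Split a 'chain/emitter/sequence' identifier into the three fields
    find_vaa_by_id actually takes (chain_id, emitter, seq). The layout is
    an assumption; the tool never states how its 'ID' is composed."""
    chain_id, emitter, seq = vaa_id.split("/")
    return {"chain_id": int(chain_id), "emitter": emitter, "seq": int(seq)}
```

One clarifying sentence in the description ("the VAA ID is the triple chain_id/emitter/seq") would make a helper like this unnecessary to infer.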

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a lookup (Find) on a specific resource (VAA) by a specific key (ID), distinguishing it from list-based siblings like 'find_all_vaas' or 'find_vaas_by_chain'. However, the immediate repetition of the phrase is wasteful.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While 'by ID' implicitly suggests use for specific record retrieval rather than searching, the description provides no explicit guidance on when to use this versus siblings like 'find_duplicated_vaa_by_id' or 'find_vaas_by_emitter', nor does it mention that the ID is composed of multiple parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_vaas_by_chain (B)

Returns all the VAAs generated in specific blockchain. — Returns all the VAAs generated in specific blockchain. Returns: { data: { digest: string, emitterAddr: string, emitterChain: number, emitterNativeAddr: string, guardianSetIndex: number, id: string, ... }[], pagination: { next: string } }.

Parameters (JSON Schema)
- page (optional): Page number.
- chain_id (required): id of the blockchain.
- pageSize (optional): Number of elements per page.
- sortOrder (optional): Sort results in ascending or descending order.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds significant value by documenting the return structure including specific fields (digest, emitterAddr, emitterChain, etc.) and pagination format, which compensates for the lack of formal output schema. However, it does not explicitly state whether this is a read-only/safe operation or mention rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description wastefully repeats the exact same sentence twice ('Returns all the VAAs generated in specific blockchain. — Returns all the VAAs generated in specific blockchain.'). This repetition consumes space without adding value. While the return structure documentation is useful, the duplication significantly impacts conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 simple parameters and no formal output schema, the description adequately compensates by detailing the return payload structure including data fields and pagination. This provides sufficient context for an agent to understand what data will be returned, though it could briefly mention how to use the pagination 'next' token.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
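The documented `pagination.next` field suggests one workable loop: advance the numeric page until `next` comes back empty. A sketch against a hypothetical `client.find_vaas_by_chain` wrapper (treating an empty cursor as the final page is an assumption, since the description never defines it):

```python
def iter_vaas_by_chain(client, chain_id, page_size=100):
    """Yield every VAA for one chain, stopping when the documented
    pagination.next field is empty. `client.find_vaas_by_chain` is a
    hypothetical wrapper; the empty-cursor stop condition is assumed."""
    page = 0
    while True:
        resp = client.find_vaas_by_chain(
            chain_id=chain_id, page=page, pageSize=page_size
        )
        yield from resp["data"]
        if not resp["pagination"].get("next"):
            break
        page += 1
```

A one-line note in the description on how `next` relates to the `page` parameter would remove the remaining ambiguity.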

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all four parameters (page, chain_id, pageSize, sortOrder). The description mentions 'specific blockchain' which aligns with chain_id, but does not add semantic context beyond what the schema already provides (e.g., valid formats for page, whether sortOrder accepts 'asc'/'desc'). Baseline score appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'Returns all the VAAs generated in specific blockchain,' providing specific verb (Returns), resource (VAAs), and scope (specific blockchain). However, it does not explicitly differentiate from siblings like find_all_vaas or find_vaas_by_emitter regarding when to filter by chain versus other methods.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as find_all_vaas (which likely returns all VAAs across chains) or find_vaas_by_emitter. There are no prerequisites, exclusions, or workflow recommendations mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_vaas_by_emitter (C)

Returns all all the VAAs generated by a specific emitter address. — Returns all all the VAAs generated by a specific emitter address. Returns: { data: { digest: string, emitterAddr: string, emitterChain: number, emitterNativeAddr: string, guardianSetIndex: number, id: string, ... }[], pagination: { next: string } }.

Parameters (JSON Schema)
- page (optional): Page number.
- emitter (required): address of the emitter.
- toChain (optional, deprecated): destination chain.
- chain_id (required): id of the blockchain.
- pageSize (optional): Number of elements per page.
- sortOrder (optional): Sort results in ascending or descending order.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It manually documents the return structure (data array with VAA fields and pagination object), which is valuable since no output schema exists. However, lacks mutation semantics, auth requirements, rate limits, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Critical flaw: the entire core sentence is duplicated ('Returns all all...'). The typo 'all all' and repetition indicate poor editing. Return structure documentation is dense/buried at the end.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates for the missing output schema by manually documenting return fields and the pagination structure. However, it lacks usage context against 40+ sibling tools and doesn't explain domain terminology (VAA, i.e. Verified Action Approval) for users unfamiliar with Wormhole-specific jargon.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (page, emitter, chain_id, etc. all documented). Description adds no parameter-specific guidance beyond the schema, which meets the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb-resource relationship ('Returns... VAAs generated by a specific emitter address') and distinguishes from siblings like find_vaas_by_chain and find_vaa_by_id via the 'by emitter' qualifier. However, the typo 'all all' and duplicated sentence reduce professionalism and clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this versus find_all_vaas or find_vaas_by_chain. No mention of pagination workflow even though the return structure includes pagination fields. No prerequisites or filtering advice provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
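
The pagination fields in the documented return shape suggest a cursor loop the description never spells out. A minimal sketch in Python, assuming `pagination.next` is an opaque cursor that comes back empty on the last page (the description does not say); `fetch_page` is a hypothetical stand-in for the real find_vaas_by_emitter call, and only the response shape is taken from the tool description:

```python
# Hypothetical two-page responses matching the documented shape:
# { data: [...], pagination: { next: string } }.
def fetch_page(cursor):
    pages = {
        "": {"data": [{"id": "1/abc/0"}], "pagination": {"next": "p2"}},
        "p2": {"data": [{"id": "1/abc/1"}], "pagination": {"next": ""}},
    }
    return pages[cursor]

def collect_all_vaas():
    """Follow pagination.next until it comes back empty."""
    results, cursor = [], ""
    while True:
        resp = fetch_page(cursor)
        results.extend(resp["data"])
        cursor = resp["pagination"]["next"]
        if not cursor:
            return results
```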

get_last_transactions (Grade: C)

[DISCOVERY] Returns the number of transactions by a defined time span and sample rate. — Returns the number of transactions by a defined time span and sample rate.

Parameters (JSON Schema)
Name        Required  Description
timeSpan    No        Time Span, default: 1d, supported values: [1d, 1w, 1mo]. 1mo is 30 days.
sampleRate  No        Sample Rate, default: 1h, supported values: [1h, 1d]. Valid configurations with timeSpan: 1d/1h, 1w/1d, 1mo/1d
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it clarifies that the tool returns counts ('number of transactions') rather than transaction records—important given the tool name—it omits whether the return is a single aggregate or time-series buckets, cache behavior, rate limits, or what 'sample rate' specifically controls (aggregation granularity).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description suffers from severe redundancy, containing the exact same sentence twice separated by an em-dash. While the individual sentences are appropriately brief, the 100% repetition represents significant structural waste that fails to earn its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description should elaborate on the return format (e.g., whether it returns a time-series array or single integer, the structure of data points). It mentions 'number of transactions' but provides insufficient detail about the response shape for an aggregation tool with temporal parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both timeSpan and sampleRate fully documented including valid configuration pairs. The description adds no parameter-specific guidance beyond repeating the parameter names, which aligns with the baseline expectation when the schema is self-documenting.
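
The valid timeSpan/sampleRate pairings the schema calls out can be checked client-side before the call is made. A minimal sketch; the pair set and defaults are taken from the schema above:

```python
# Valid (timeSpan, sampleRate) pairs per the schema: 1d/1h, 1w/1d, 1mo/1d.
VALID_CONFIGS = {("1d", "1h"), ("1w", "1d"), ("1mo", "1d")}

def check_config(time_span="1d", sample_rate="1h"):
    """Return True when the pair is one the API documents as valid.
    Defaults mirror the schema's stated defaults."""
    return (time_span, sample_rate) in VALID_CONFIGS
```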

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool returns transaction counts aggregated by time span and sample rate, which clarifies the aggregation nature implied by the parameters. However, it fails to differentiate from siblings like 'list_transactions' or 'x_chain_activity', and the '[DISCOVERY]' prefix appears to be metadata leakage rather than descriptive content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this aggregation endpoint versus alternatives like 'list_transactions' (which likely returns full transaction records) or 'get_transaction_by_id'. The description lacks prerequisites, exclusions, or selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_operation_by_id (Grade: B)

Find operations by ID (chainID/emitter/sequence). — Find operations by ID (chainID/emitter/sequence). Returns: { content: { executorRequest: string, payload: object, standarizedProperties: { amount: string, appIds: string[], fee: string, feeAddress: string, feeChain: number, fromAddress: string, ... } }, data: object, emitterAddress: { hex: string, native: string }, ... }.

Parameters (JSON Schema)
Name      Required  Description
seq       Yes       sequence of the VAA
emitter   Yes       address of the emitter
chain_id  Yes       id of the blockchain
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries full behavioral disclosure burden. It compensates for the lack of output schema by providing detailed return value structure including nested objects (content, standarizedProperties, emitterAddress). However, it omits error handling, authentication requirements, and rate limiting information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains significant redundancy: the exact phrase 'Find operations by ID (chainID/emitter/sequence)' appears twice separated by an em-dash. This wastes tokens and suggests an editing error. The return value documentation is appropriately detailed but the front-loading is botched by the duplication.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 required parameters and complex nested return structure (documented inline since no output schema exists), the description covers the mechanical aspects adequately. However, it fails to explain domain concepts—what constitutes an 'operation' in this Wormhole context and how it relates to VAAs or transactions—which would help the agent select the correct tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with basic descriptions. The description adds value by clarifying that the three parameters form a composite ID (chainID/emitter/sequence), which helps the agent understand these are not independent filters but a tuple identifier. It doesn't add syntax details beyond the schema, but the composite ID concept is valuable context.
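
The three required parameters form a composite identifier rather than independent filters. A minimal sketch of how they combine; the order follows the description's chainID/emitter/sequence wording, while the slash separator is an assumption based on Wormhole's conventional VAA ID format:

```python
def operation_id(chain_id, emitter, seq):
    """Join chain_id, emitter, and seq into a single
    chainID/emitter/sequence identifier (separator assumed)."""
    return f"{chain_id}/{emitter}/{seq}"
```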

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool finds 'operations by ID' and specifies the composite key structure (chainID/emitter/sequence). However, it repeats the same phrase twice ('Find operations by ID... — Find operations by ID...'), and doesn't clarify how 'operations' differ from similar sibling concepts like 'vaas' (find_vaa_by_id) or 'transactions' (get_transaction_by_id).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like get_operations (list vs. single), search_operations, or find_vaa_by_id. Given the many sibling tools with similar ID-based lookup patterns, explicit differentiation is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_operations (Grade: C)

[DISCOVERY] Find all operations. — Find all operations.

Parameters (JSON Schema)
Name            Required  Description
to              No        end of period
from            No        beginning of period
page            No        page number
appId           No        appID of the operation
txHash          No        hash of the transaction
address         No        address of the emitter
pageSize        No        pageSize
minAmount       No        more than an amount in USD for address search
addressType     No        indicates the type of address for address search
sourceChain     No        source chains of the operation, separated by comma
targetChain     No        target chains of the operation, separated by comma
includesChain   No        chains to match either source or target, separated by comma
exclusiveAppId  No        single appId of the operation
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but discloses almost nothing about behavior. It doesn't explain pagination (despite page/pageSize params), rate limiting, result ordering, or what the 'DISCOVERY' classification implies about result sets.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the description wastes space by repeating 'Find all operations' twice. The em-dash structure creates redundancy rather than adding information, and the '[DISCOVERY]' tag is unstructured metadata that isn't integrated into explanatory text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 13 optional parameters and no output schema, the description is inadequate. It fails to explain the filtering capabilities (chains, amounts, addresses), pagination strategy, or what data structure to expect in return, leaving agents without guidance for a complex query interface.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds no additional context about parameter relationships (e.g., from/to date ranges, address filtering combinations) or data formats, but the schema adequately documents individual fields.
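
The relationships the description leaves implicit can be handled in a small query builder. A minimal sketch; the comma joining for the chain filters follows the schema text, while dropping unset (None) parameters is an assumption about how the endpoint treats absent filters:

```python
def build_query(**filters):
    """Build a query-parameter dict, dropping unset filters and
    comma-joining list-valued chain filters (sourceChain,
    targetChain, includesChain) as the schema describes."""
    params = {}
    for key, value in filters.items():
        if value is None:
            continue
        if isinstance(value, (list, tuple)):
            value = ",".join(str(v) for v in value)
        params[key] = value
    return params
```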

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description tautologically restates 'Find all operations' twice with a '[DISCOVERY]' tag. While it identifies the verb and resource, it fails to distinguish this tool from the sibling 'search_operations' or clarify what constitutes an 'operation' in this blockchain context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this broad discovery tool versus the more specific 'search_operations' or 'get_operation_by_id'. The '[DISCOVERY]' prefix is unexplained, and there is no mention of how to use the 13 optional filter parameters effectively.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_protocol_network_pairs (Grade: C)

[DISCOVERY] Returns the most used network pairs (chains) for a given protocol ranked by USD volume transferred The endpoint automatically selects hourly or daily aggregation based on the time range For ranges < 24 hours within the last 30 days, hourly data is used Otherwise, daily data is used — Returns the most used network pairs (chains) for a given protocol ranked by USD volume transferred The endpoint automatically selects hourly or daily aggregation based on the time range For ranges < 24 hours within the last 30 days, hourly data is used Otherwise, daily data is used Returns: { data: { from: string,

Parameters (JSON Schema)
Name        Required  Description
to          Yes       End date in RFC3339 format (e.g., 2024-01-08T00:00:00Z)
from        Yes       Start date in RFC3339 format (e.g., 2024-01-01T00:00:00Z)
protocolId  Yes       Protocol ID (e.g., PORTAL_TOKEN_BRIDGE, CCTP_WORMHOLE_INTEGRATION)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full responsibility for behavioral disclosure. It successfully explains the dynamic aggregation logic (hourly for <24h within last 30 days, otherwise daily). However, it omits information about authentication requirements, rate limiting, error handling for invalid protocolIds, or whether the data is real-time or cached.
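
The aggregation rule the description does state can be mirrored client-side to predict the granularity of the response. A minimal sketch; the inclusive/exclusive boundary handling is an assumption, since only the broad rule is documented:

```python
from datetime import datetime, timedelta, timezone

def granularity(start, end, now=None):
    """Hourly buckets for ranges under 24 hours that fall within
    the last 30 days; daily buckets otherwise (per the description)."""
    now = now or datetime.now(timezone.utc)
    within_30d = start >= now - timedelta(days=30)
    short_range = (end - start) < timedelta(hours=24)
    return "1h" if short_range and within_30d else "1d"
```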

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains significant repetition—the entire aggregation explanation appears twice. It also ends abruptly with a truncated JSON fragment ('Returns: { data: { from: string,'). This duplication and truncation indicate poor structure and editing, wasting tokens and creating confusion.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description inadequately documents return values. It begins to describe the return structure but is cut off mid-definition. For a data retrieval tool with three required parameters, the description should fully explain the response format, field meanings, and potential error states, which it fails to do.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents the RFC3339 date format and protocol ID examples. The description adds value by explaining how the 'from' and 'to' parameters interact with the aggregation behavior (hourly vs daily), but does not elaborate on valid protocolId values or constraints beyond the schema examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (Returns), resource (network pairs/chains), and ranking metric (USD volume transferred). Includes '[DISCOVERY]' categorization. However, it does not explicitly differentiate from sibling tool 'get_top_chain_pairs_by_num_transfers' which appears to serve a similar ranking purpose but using transfer count rather than USD volume.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit guidance on when to select this tool versus the numerous sibling tools available (e.g., get_top_chain_pairs_by_num_transfers, get_protocol_trending_tokens). While it explains the automatic aggregation behavior based on time range, it lacks 'when to use' or 'when not to use' recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_scorecards (Grade: B)

[DISCOVERY] Returns a list of KPIs for Wormhole. TVL is total value locked by token bridge contracts in USD. Volume is the all-time total volume transferred through the token bridge in USD. 24h volume is the volume transferred through the token bridge in the last 24 hours, in USD. Total Tx count is the number of transaction bridging assets since the creation of the network (does not include Pyth or other messages). 24h tx count is the number of transaction bridging assets in the last 24 hours (does not include Pyth or other messages). Total messages is the number of VAAs emitted since the creation of the

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It provides valuable domain-specific context by defining what each KPI represents and noting important exclusions (Pyth messages). However, it lacks operational details (auth requirements, caching, rate limits) and is truncated mid-sentence ('since the creation of the '), leaving the 'Total messages' definition incomplete.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The [DISCOVERY] tag provides useful categorization at the front. The KPI definitions are structured consistently, but the description suffers from being truncated mid-sentence. The repetitive 'is the' structure for each metric is clear but could be more compact.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and no annotations, the description should explain the return structure. While it lists the conceptual fields returned (TVL, Volume, etc.), it does not describe the data structure, types, or pagination. The truncation further reduces completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters (empty object), which per guidelines establishes a baseline of 4. No additional parameter semantics are needed or provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'Returns a list of KPIs for Wormhole' and distinguishes itself from siblings (which focus on specific transactions, VAAs, or observations) by defining aggregate metrics like TVL, Volume, and transaction counts. Specific definitions for each metric (e.g., 'does not include Pyth or other messages') clarify scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The '[DISCOVERY]' prefix provides a weak hint about usage context, but there is no explicit guidance on when to use this versus the many sibling query tools (e.g., get_last_transactions, find_vaa_by_id). No alternatives or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_secured_tokens (Grade: C)

[DISCOVERY] Returns a list of tokens secured by Wormhole, including Native Token Transfers and Portal Bridge tokens — Returns a list of tokens secured by Wormhole, including Native Token Transfers and Portal Bridge tokens

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, placing full burden on description. Only states it returns a list; omits pagination behavior, response format, rate limits, or data freshness. No disclosure of whether this is a static snapshot or live data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description repeats the exact same sentence twice separated by an em-dash. The '[DISCOVERY]' prefix appears to be metadata leakage. Severe redundancy means half the text earns zero value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so description should ideally characterize return values. Identifies token categories (NTT/Portal Bridge) helpfully, but duplication and lack of behavioral detail leave gaps for a retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters, establishing baseline score of 4 per rubric. No parameters to document, though the empty nature of the endpoint could have been acknowledged (e.g., 'returns all secured tokens without filtering').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Returns' and specific resource 'tokens secured by Wormhole'. Mentioning 'Native Token Transfers and Portal Bridge tokens' distinguishes from sibling token lists like guardians_token_list. However, exact sentence duplication suggests editing error rather than precision.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this versus similar sibling tools (e.g., api_v1_native_token_transfer_token_list, guardians_token_list, get_protocol_trending_tokens). No prerequisites, filters, or contextual triggers specified.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_token_by_chain_and_address (Grade: C)

Returns a token symbol, coingecko id and address by chain and token address. — Returns a token symbol, coingecko id and address by chain and token address. Returns: { coingeckoId: string, decimals: number, symbol: string }.

Parameters (JSON Schema)
Name           Required  Description
chain_id       Yes       id of the blockchain
token_address  Yes       token address
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It partially compensates by documenting the return structure ({ coingeckoId, decimals, symbol }), which adds value. However, it omits mutation status, rate limits, error handling, or cache behaviors that would help the agent understand operational constraints.
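
The `decimals` field in the documented return shape is what a caller needs to render raw on-chain amounts; the scaling itself is standard token math rather than something the description states. A minimal sketch:

```python
def to_human(raw_amount, decimals):
    """Scale a raw integer token amount by the token's decimals,
    e.g. 1_500_000 raw units of a 6-decimal token is 1.5 tokens."""
    return raw_amount / 10 ** decimals
```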

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description repeats the exact same sentence twice ('Returns a token symbol...') separated only by an em-dash, which is wasteful redundancy. While the return value documentation at the end is useful, the structure violates the 'every sentence earns its place' principle.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 2-parameter lookup tool, the description is minimally adequate. It manually documents the return shape since no output schema exists in structured fields. However, it lacks coverage of error scenarios (e.g., invalid address format, unsupported chain) and doesn't enumerate supported chains, leaving gaps for an agent to handle edge cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage ('id of the blockchain', 'token address'), so the baseline is 3. The description does not add additional semantic context (e.g., expected address format, supported chain values) beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns token metadata (symbol, coingeckoId, address) using chain and token address as inputs. However, it repeats the same clause twice with an em-dash, and could better differentiate from sibling list-based token tools like get_protocol_trending_tokens or guardians_token_list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus alternatives (e.g., when to use get_secured_tokens vs this lookup). No mention of prerequisites like valid chain IDs or address formats. The agent receives no signals about error cases or not-found behaviors.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_top_assets_by_volume (Grade: C)

Returns a list of emitter_chain and asset pairs with ordered by volume. The volume is calculated using the notional price of the symbol at the day the VAA was emitted. — Returns a list of emitter_chain and asset pairs with ordered by volume. The volume is calculated using the notional price of the symbol at the day the VAA was emitted. Returns: { assets: { emitterChain: number, symbol: string, tokenAddress: string, tokenChain: number, volume: string }[] }.

Parameters (JSON Schema)
Name      Required  Description
timeSpan  Yes       Time span, supported values: 7d, 15d, 30d.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It adds valuable behavioral context by explaining the volume calculation method (using notional price at VAA emission time) and compensates for the missing output schema by documenting the return structure. However, it omits other critical behavioral traits such as rate limits, data freshness, caching behavior, or pagination.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains a major structural defect: the primary explanatory sentence is duplicated word-for-word with only an em-dash separator. This redundancy wastes tokens and suggests an error. The return type declaration is appended awkwardly at the end, creating a cluttered structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter retrieval tool, the description is minimally adequate. It compensates for the lack of output schema by including the JSON return structure inline and clarifies volume semantics. However, the duplication error and lack of sibling differentiation prevent a higher score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with the timeSpan parameter fully documented as 'Time span, supported values: 7d, 15d, 30d'. The description adds no semantic information about the parameter, but the baseline score of 3 is appropriate given the high schema coverage carries the semantic load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it returns emitter_chain and asset pairs ordered by volume and explains the volume calculation methodology. However, it suffers from severe sentence duplication (repeating the same sentence verbatim) and fails to differentiate from sibling tool 'top_symbols_by_volume', which appears to serve a highly similar purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus functionally similar alternatives like 'top_symbols_by_volume' or 'get_top_chain_pairs_by_num_transfers'. No prerequisites, filtering limitations, or contextual triggers are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
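The notional-versus-count distinction discussed above can be made concrete with a short sketch. The record shape below is an illustrative assumption (the tool's actual response fields are not documented here): notional volume prices each transfer in USD at VAA emission time, while the count metric simply tallies transfers.

```typescript
// Illustrative transfer record; field names are assumptions, not the API's.
interface Transfer {
  amount: number;          // token units moved
  priceAtEmission: number; // USD price at the time the VAA was emitted
}

// "Notional" volume: value each transfer in USD at emission time and sum.
function notionalVolume(transfers: Transfer[]): number {
  return transfers.reduce((sum, t) => sum + t.amount * t.priceAtEmission, 0);
}

// "Tx count" metric: simply the number of transfers.
function txCount(transfers: Transfer[]): number {
  return transfers.length;
}
```

A description that spelled out this distinction in one sentence would spare agents from guessing which `by` value to pass.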

get_top_chain_pairs_by_num_transfers (C)

Returns a list of the emitter_chain and destination_chain pair ordered by transfer count. — Returns a list of the emitter_chain and destination_chain pair ordered by transfer count. Returns: { chainPairs: { destinationChain: number, emitterChain: number, numberOfTransfers: string }[] }.

Parameters (JSON Schema)
timeSpan (required): Time span, supported values: 7d, 15d, 30d.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. While it manually documents the return type structure (helpful since no output schema exists), it omits critical behavioral context: whether this is read-only, cached vs. real-time, what constitutes a 'transfer', or pagination behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Critical flaw: the exact same sentence is repeated twice with only an em-dash between them ('Returns a list...transfer count. — Returns a list...transfer count.'). This is wasteful repetition, though the inclusion of return type documentation is structurally useful given the lack of output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Manually documents return structure (chainPairs array with destinationChain/emitterChain/numberOfTransfers), compensating somewhat for missing output schema. However, lacks domain context explaining what emitter/destination chains represent in this ecosystem and provides no behavioral annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for the timeSpan parameter ('Time span, supported values: 7d, 15d, 30d'). The description adds no additional parameter context, meeting the baseline score of 3 for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (emitter_chain/destination_chain pairs) and action (returns list ordered by transfer count). However, it fails to distinguish from similar sibling tools like api_v1_top_100_corridors or x_chain_activity_tops which may overlap in functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives. No mention of prerequisites, time constraints, or filtering capabilities beyond the implied ordering.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
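One practical consequence of the return structure quoted above is that numberOfTransfers arrives as a string, so any client-side numeric work needs a parse step. A minimal sketch, assuming the documented shape:

```typescript
// Shape taken from the tool's documented return structure.
interface ChainPair {
  destinationChain: number;
  emitterChain: number;
  numberOfTransfers: string; // documented as a string, not a number
}

// Re-sort pairs by transfer count, descending, parsing the string field first.
// A lexicographic sort on the raw strings would order "9" above "100".
function sortByTransfers(pairs: ChainPair[]): ChainPair[] {
  return [...pairs].sort(
    (a, b) => Number(b.numberOfTransfers) - Number(a.numberOfTransfers)
  );
}
```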

get_top_protocols_stats (C)

[DISCOVERY] Returns the representative stats for the top protocols — Returns the representative stats for the top protocols

Parameters (JSON Schema)

No parameters

Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to mention what criteria defines 'top' protocols, what specific statistics are returned, time-window constraints, caching behavior, or the response structure. No safety or side-effect information is included.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is inefficiently structured with near-tautological repetition: 'Returns the representative stats for the top protocols — Returns the representative stats for the top protocols'. The em-dash adds no value and wastes tokens that could have defined 'representative' or 'top'.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema provided, the description should explain what data structure or fields are returned. It omits this entirely, leaving the agent unaware of what 'stats' entails. The '[DISCOVERY]' prefix appears to be metadata but is unexplained.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage. Per the baseline rule for zero-parameter tools, this dimension scores a 4. There are no parameters requiring semantic clarification in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool returns 'representative stats for the top protocols', identifying the resource (protocols) and action (returns stats). However, 'representative' is vague and undefined, and it fails to differentiate from sibling 'top' tools like `get_top_assets_by_volume` or `top_symbols_by_volume`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidance is provided. The description does not indicate when to use this tool versus the numerous sibling 'top' tools (e.g., `get_top_chain_pairs_by_num_transfers`, `x_chain_activity_tops`), nor does it mention prerequisites or filtering capabilities.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_transaction_by_id (C)

Find VAA metadata by ID. — Find VAA metadata by ID. Returns: { emitterAddress: string, emitterChain: number, emitterNativeAddress: string, ... }.

Parameters (JSON Schema)
seq (required): sequence of the VAA
emitter (required): address of the emitter
chain_id (required): id of the blockchain
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. Partially documents return value structure (emitterAddress, emitterChain, etc.) but truncates with '...'. Does not mention error conditions, rate limits, or whether this is a cached lookup vs real-time query.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Suffers from verbatim repetition: 'Find VAA metadata by ID. — Find VAA metadata by ID.' The return value documentation is appended as a fragmented object literal rather than natural language, reducing readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool has simple 3-parameter schema and no output schema. Description provides partial return documentation but cuts off mid-object. Adequate for a lookup tool but lacks context on parameter relationships (e.g., that chain_id/emitter/seq form a composite key).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage (chain_id, emitter, seq all documented). Description adds no additional parameter semantics, meeting baseline of 3 for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the action (Find) and resource (VAA metadata), but repeats the same phrase twice with an em-dash. The tool name suggests 'transaction' while the description describes 'VAA metadata', creating ambiguity, given that siblings like find_vaa_by_id and find_global_transaction_by_id exist. No distinction is made between these related concepts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus siblings such as find_vaa_by_id, find_global_transaction_by_id, or get_last_transactions. Missing prerequisites or conditions for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
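As noted above, chain_id, emitter, and seq together form the composite key for a VAA lookup. Wormhole tooling commonly renders this key as a single chain/emitter/sequence string; the helper below illustrates that convention as a sketch, not a confirmed requirement of this tool.

```typescript
// The three required parameters form a composite VAA key.
interface VaaKey {
  chain_id: number; // id of the blockchain
  emitter: string;  // address of the emitter
  seq: string;      // sequence of the VAA
}

// Common "chainId/emitterAddress/sequence" rendering of the key.
function vaaId(key: VaaKey): string {
  return `${key.chain_id}/${key.emitter}/${key.seq}`;
}
```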

get_vaa_counts (A)

[DISCOVERY] Returns the total number of VAAs emitted for each blockchain. — Returns the total number of VAAs emitted for each blockchain. Returns: { data: { chainId: number, count: number }[], pagination: { next: string } }.

Parameters (JSON Schema)

No parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It documents the complete output structure including pagination (next cursor) and data shape (chainId/count pairs), which is critical behavioral context given the lack of output schema. It does not mention rate limits or error behaviors, but the return contract is fully specified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description repeats the core sentence ('Returns the total number...') verbatim before and after the em-dash, creating redundancy. While the return schema documentation is valuable, the repetition means not every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter read-only discovery endpoint, the description is sufficiently complete. It compensates for the missing output schema by detailing the exact JSON structure of the response, including pagination mechanics, fulfilling the completeness requirement for this complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, which per evaluation rules establishes a baseline score of 4. The description does not need to compensate for missing parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Returns the total number of VAAs emitted for each blockchain,' specifying the verb (returns), resource (VAA counts), and scope (per blockchain). This effectively distinguishes it from sibling tools like 'find_vaas_by_chain' which return individual VAA records rather than aggregated counts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The '[DISCOVERY]' tag implies a metadata/statistical use case, providing implicit context for when to use this over query tools like 'find_all_vaas'. However, it lacks explicit when-to-use guidance or contrasts with specific sibling alternatives that return individual records versus aggregated statistics.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
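Because the response documents a pagination.next cursor, a caller that wants every row must loop until the cursor is exhausted. The sketch below follows the documented shape; fetchPage stands in for the actual transport (not specified here), and an empty next is assumed to mark the last page.

```typescript
// Shape taken from the tool's documented return structure.
interface VaaCountsPage {
  data: { chainId: number; count: number }[];
  pagination: { next: string };
}

// Follow the `next` cursor until it comes back empty, collecting all rows.
// `fetchPage` is a stand-in for the real call; only the paging logic is shown.
function allVaaCounts(
  fetchPage: (cursor?: string) => VaaCountsPage
): { chainId: number; count: number }[] {
  const rows: { chainId: number; count: number }[] = [];
  let cursor: string | undefined;
  do {
    const page = fetchPage(cursor);
    rows.push(...page.data);
    cursor = page.pagination.next || undefined;
  } while (cursor !== undefined);
  return rows;
}
```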

get_version (B)

Get version/release information. — Get version/release information. Returns: { build: string, build_date: string, version: string }.

Parameters (JSON Schema)

No parameters

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries full disclosure burden. It documents the return structure (build, build_date, version) which is essential given the absence of a structured output schema, but omits other behavioral traits like caching, authentication requirements, or rate limiting.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Suffers from severe redundancy: the phrase 'Get version/release information' is repeated verbatim before and after the em-dash, wasting tokens and attention. The return value documentation is useful but should be formatted more cleanly without repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter metadata tool, the description is reasonably complete. It compensates for the missing output schema by documenting the return object structure inline, though it could benefit from mentioning whether authentication is required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters, establishing a baseline score of 4 per evaluation rules. No additional parameter semantics are required or provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb (Get) and resource (version/release information) identifying this as a metadata endpoint. Lacks explicit differentiation from diagnostic siblings like health_check or ready_check, and contains tautological repetition that slightly weakens clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to invoke this tool versus alternatives like health_check or ready_check, nor any mention of typical use cases such as client compatibility verification or deployment validation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

governor_available_notional_by_chain (C)

Get available notional by chainID Since from the wormhole-explorer point of view it is not a node, but has the information of all nodes, in order to build the endpoints it was assumed: There are N number of remainingAvailableNotional values in the GovernorConfig collection. N = number of guardians for a chainID. The smallest remainingAvailableNotional value for a chainID is used for the endpoint response. — Get available notional by chainID Since from the wormhole-explorer point of view it is not a node, but has the information of all nodes, in order to build the endpoints it was assumed: Ther

Parameters (JSON Schema)

No parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and succeeds in explaining the aggregation logic: it clarifies that the value comes from the GovernorConfig collection, that N guardians each have values, and specifically that the smallest remainingAvailableNotional is returned. This is substantial behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 1/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Critical structural failure: the entire description is duplicated, with the second instance cut off mid-word ('Ther'). This represents approximately 100% redundancy plus truncation errors, indicating a severe editing or data ingestion problem rather than intentional conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having no output schema, the description fails to characterize the return value (e.g., is it a number, an object, what units?). The duplication error and abrupt cutoff further suggest the definition is incomplete or corrupted. It explains the calculation method but not the conceptual meaning of 'available notional' in the Wormhole governance context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters (empty object), and the description correctly implies no parameters are needed. With 100% schema coverage of zero parameters, the baseline score of 4 applies appropriately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States 'Get available notional by chainID' which provides verb and resource, but the domain-specific term 'notional' lacks context for general use. The text is duplicated and truncated in the middle of the second repetition ('Ther'), creating confusion about whether this is intentional or an error.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus siblings like 'governor_notional_available_by_chain' or 'governor_max_notional_available_by_chain'. The explanation focuses entirely on implementation details (how the smallest value is selected) rather than use cases or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
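The one rule the description does spell out (N guardian entries per chain, with the smallest remainingAvailableNotional winning) can be sketched directly. The record shape below is an assumption modeled on the quoted text, not a confirmed schema:

```typescript
// Hypothetical per-guardian record, modeled on the quoted description.
interface GuardianNotional {
  nodeName: string;
  chainId: number;
  remainingAvailableNotional: number;
}

// Per chain, the endpoint reports the smallest value across all guardians.
function availableNotionalByChain(
  records: GuardianNotional[]
): Map<number, number> {
  const result = new Map<number, number>();
  for (const r of records) {
    const current = result.get(r.chainId);
    if (current === undefined || r.remainingAvailableNotional < current) {
      result.set(r.chainId, r.remainingAvailableNotional);
    }
  }
  return result;
}
```

Taking the minimum is the conservative choice: it reflects the guardian with the least remaining headroom, which is what actually constrains a transfer.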

governor_config (C)

Returns governor configuration for all guardians. — Returns governor configuration for all guardians. Returns: { data: { chains: { bigTransactionSize: number, chainId: unknown, notionalLimit: number }[], counter: number, createdAt: string, id: string, nodeName: string, tokens: { originAddress: string, originChainId: number, price: number }[], ... }, pagination: { next: string } }.

Parameters (JSON Schema)
page (optional): Page number.
pageSize (optional): Number of elements per page.
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description carries full burden. It discloses return structure via embedded JSON (compensating for missing output schema) and reveals pagination behavior through the 'next' field. However, lacks information on rate limits, auth requirements, or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Contains wasteful repetition: 'Returns governor configuration for all guardians. — Returns governor configuration for all guardians.' The em-dash concatenation appears accidental. Return structure JSON is verbose but necessary given missing output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so description compensates by embedding return structure showing data types and pagination fields. Adequate for invocation but lacks domain context about what 'governor' or 'guardians' represent in this system.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions ('Page number', 'Number of elements per page'). Tool description adds no parameter-specific context, but baseline of 3 is appropriate since schema already documents semantics adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it 'Returns governor configuration for all guardians' with specific verb and resource. Distinguishes from sibling 'governor_config_by_guardian_address' by emphasizing 'all guardians' vs specific address lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the sibling 'governor_config_by_guardian_address' or other governor-related endpoints. No prerequisites or filtering guidance mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

governor_config_by_guardian_address (C)

Returns governor configuration for a given guardian. — Returns governor configuration for a given guardian. Returns: { data: { chains: { bigTransactionSize: number, chainId: unknown, notionalLimit: number }[], counter: number, createdAt: string, id: string, nodeName: string, tokens: { originAddress: string, originChainId: number, price: number }[], ... }, pagination: { next: string } }.

Parameters (JSON Schema)

No parameters

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It provides detailed return structure (JSON schema) which compensates for the missing output_schema, but does not clarify how the guardian address is specified given the empty input schema, nor does it mention safety, authentication requirements, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains obvious redundancy: 'Returns governor configuration for a given guardian.' appears twice identically with an em-dash separator. This wastes tokens without adding value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complex nested return structure (chains, tokens, pagination) and lack of output schema, the description provides the return type definition inline, which is helpful. However, it lacks domain context explaining what a 'guardian' represents or how pagination should be utilized.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, which establishes a baseline score of 4 per the evaluation rules. The description correctly implies no user-provided parameters are needed, though it does not explain how the guardian address is derived (likely from context/path).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states what the tool returns (governor configuration for a guardian), but the sentence is duplicated wastefully. It fails to differentiate from the sibling tool `governor_config` (which likely returns global configuration), leaving ambiguity about when to use this specific variant.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus `governor_config` or `governor_status_by_guardian_address`. No mention of prerequisites for specifying the guardian address or return value handling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

governor_enqueued_vaas (B)

Returns enqueued VAAs for each blockchain. — Returns enqueued VAAs for each blockchain. Returns: { data: { chainId: number, enqueuedVaas: unknown[] }[], pagination: { next: string } }.

Parameters (JSON Schema)
page (optional): Page number.
pageSize (optional): Number of elements per page.
sortOrder (optional): Sort results in ascending or descending order.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided and no output schema exists, the description carries the full burden of behavioral disclosure. It successfully documents the return structure including the nested data array with chainId objects and the pagination cursor (next). This is vital information that would otherwise be opaque.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains significant redundancy: 'Returns enqueued VAAs for each blockchain.' is repeated verbatim before the em-dash separator. While the return value documentation is valuable, the exact duplication of the first sentence is wasteful and suggests poor editing or auto-generation errors.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately compensates by detailing the response structure including the data array shape and pagination object. With three optional parameters fully documented in the schema, the description provides sufficient context for a query endpoint, though it could explain what 'enqueued' status means for VAAs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (page, pageSize, sortOrder are all clearly documented in the schema). The description adds no additional semantic information about these parameters, meeting the baseline expectation for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns enqueued VAAs (Verified Action Approvals) grouped by blockchain. It identifies the resource (enqueued VAAs) and scope (each blockchain). However, it does not differentiate from the sibling tool 'guardians_enqueued_vaas', which is important given the distinct roles of governors and guardians in Wormhole's architecture.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus the sibling 'guardians_enqueued_vaas' or 'guardians_enqueued_vaas_by_chain'. It does not indicate that all parameters are optional or suggest default pagination behavior. No prerequisites or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

governor_max_notional_available_by_chainCInspect

Returns the maximum amount of notional value available for a given blockchain. — Returns the maximum amount of notional value available for a given blockchain. Returns: { data: { availableNotional: number, chainId: number, createdAt: string, emitters: { emitterAddress: string, enqueuedVaas: unknown[], totalEnqueuedVaas: number }[], id: string, nodeName: string, ... }, pagination: { next: string } }.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description compensates partially by documenting the return value structure with field types (availableNotional, emitters, pagination), but omits explanation of what 'notional value' represents, cache behavior, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Contains verbatim repetition of the opening sentence ('Returns the maximum amount...') separated by an em-dash, followed by a dense JSON blob, indicating poor editing and inefficient structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Includes detailed return schema structure which compensates for the lack of structured output schema, but lacks domain context about the governor system or explanation of the business meaning behind fields like enqueuedVaas.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters; baseline score applies as no parameter documentation is required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States it returns 'maximum amount of notional value available for a given blockchain' but contains verbatim repetition and fails to clarify the distinction between 'max notional' and 'notional available' in similarly named sibling tools like governor_notional_available_by_chain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus siblings (e.g., governor_notional_available_by_chain, governor_available_notional_by_chain) or what 'maximum' signifies in this context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

governor_notional_availableCInspect

Returns the amount of notional value available for each blockchain. — Returns the amount of notional value available for each blockchain. Returns: { data: { availableNotional: number, chainId: number }[], pagination: { next: string } }.

ParametersJSON Schema
NameRequiredDescriptionDefault
pageNoPage number.
pageSizeNoNumber of elements per page.
sortOrderNoSort results in ascending or descending order.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It documents the return structure inline (including pagination fields), which is helpful since no output schema exists. However, it omits other behavioral traits like read-only safety, rate limits, or whether the data is real-time versus cached.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description suffers from poor structure: it repeats the same sentence verbatim, creating redundancy. Additionally, embedding raw JSON return types in the description text is messy and hard to parse; this information belongs in a structured output schema rather than inline text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
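The fix the review suggests can be sketched concretely: moving the inline `Returns: {...}` text into a structured output schema. A minimal, hedged example follows; the field names come from the tool's own description, while the schema object and sample payload are illustrative assumptions, not the server's actual definition.

```python
# Hypothetical JSON Schema for governor_notional_available's output,
# reconstructed from the inline "Returns: {...}" text under review.
output_schema = {
    "type": "object",
    "properties": {
        "data": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "availableNotional": {"type": "number"},
                    "chainId": {"type": "number"},
                },
                "required": ["availableNotional", "chainId"],
            },
        },
        "pagination": {
            "type": "object",
            "properties": {"next": {"type": "string"}},
        },
    },
    "required": ["data", "pagination"],
}

# A sample payload matching the documented shape (values invented).
sample = {
    "data": [{"availableNotional": 1_000_000, "chainId": 2}],
    "pagination": {"next": ""},
}

# Lightweight structural check, no external validator needed.
assert set(sample) == set(output_schema["required"])
assert all(
    key in sample["data"][0]
    for key in output_schema["properties"]["data"]["items"]["required"]
)
```

With a schema like this attached to the tool, the description could shrink to a single sentence, addressing both the conciseness and the parsing concerns at once.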

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with three optional pagination parameters and no output schema, the description adequately documents the return value structure. However, given the complex domain (governance/notional limits) and numerous similar sibling tools, it lacks domain context explaining what 'notional value' represents in this system.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all three parameters (page, pageSize, sortOrder). The description adds no additional semantic context about these parameters, but given the high schema coverage, the baseline score of 3 is appropriate as no compensation is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns the notional value available for each blockchain, providing a specific verb and resource. However, it repeats its opening sentence verbatim ('Returns... — Returns...') and fails to distinguish itself from the sibling tool `governor_notional_available_by_chain`, which likely returns data for a specific chain rather than all chains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the similar sibling tools like `governor_notional_available_by_chain` or `governor_available_notional_by_chain`. There are no stated prerequisites, filtering recommendations, or alternatives mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

governor_notional_available_by_chainCInspect

Returns the amount of notional value available for a given blockchain. — Returns the amount of notional value available for a given blockchain. Returns: { data: { availableNotional: number, chainId: number, createdAt: string, id: string, nodeName: string, updatedAt: string }[], pagination: { next: string } }.

ParametersJSON Schema
NameRequiredDescriptionDefault
pageNoPage number.
pageSizeNoNumber of elements per page.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries the full burden. It partially compensates by embedding the complete output structure (data array with specific fields and pagination object), which helps the agent understand the return format. However, it lacks disclosure of rate limits, authentication requirements, error behaviors, or clarification on the 'by_chain' aggregation logic.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description repeats the same sentence verbatim ('Returns the amount... blockchain. — Returns the amount... blockchain'), which is wasteful and confusing rather than concise. While embedding the output structure is useful, the duplication significantly degrades the description's overall quality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of a formal output schema, the inline documentation of the return structure provides minimal viable completeness. However, the description leaves critical gaps: the tool name implies chain filtering ('by_chain'), yet the inputs don't support it, and no usage context is given for the pagination parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for its two parameters ('Page number' and 'Number of elements per page'), establishing a baseline of 3. The description adds no additional semantic context about these pagination parameters or their relationship to the returned data set.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Returns') and resource ('notional value available'), but contains a significant inaccuracy: it claims the tool returns data 'for a given blockchain', implying single-chain filtering, yet the input schema lacks a chainId parameter and the documented response is an array of records each containing a chainId, suggesting it returns a list for all chains. It also fails to differentiate itself from near-identical siblings like 'governor_available_notional_by_chain'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the numerous similar siblings (e.g., 'governor_notional_available', 'governor_max_notional_available_by_chain'). There are no prerequisites mentioned, no explanation of when pagination is required, and no exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

governor_notional_limitBInspect

Returns the governor limit for all blockchains. — Returns the governor limit for all blockchains. Returns: { data: { availableNotional: number, chainId: number, maxTransactionSize: number, notionalLimit: number }[], pagination: { next: string } }.

ParametersJSON Schema
NameRequiredDescriptionDefault
pageNoPage number.
pageSizeNoNumber of elements per page.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It compensates effectively by disclosing the return structure inline: an array containing availableNotional, chainId, maxTransactionSize, and notionalLimit, plus pagination with a 'next' field. This reveals the data shape and pagination behavior that would typically appear in an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
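To make the disclosed shape concrete, here is one plausible way an agent might consume a governor_notional_limit record. The field semantics (that availableNotional is the unconsumed portion of notionalLimit) are an inference from the names, not something the reviewed description confirms; the record values are invented.

```python
# Hypothetical consumption of the documented record shape:
# { availableNotional, chainId, maxTransactionSize, notionalLimit }.
# Field semantics are assumed, not stated by the tool description.
def utilization(record):
    """Fraction of the notional limit already consumed for a chain."""
    limit = record["notionalLimit"]
    available = record["availableNotional"]
    return 0.0 if limit == 0 else (limit - available) / limit

record = {
    "chainId": 2,
    "availableNotional": 250_000,
    "maxTransactionSize": 50_000,
    "notionalLimit": 1_000_000,
}

print(utilization(record))  # 0.75
```

A description that spelled out these relationships (as the lower-scoring dimensions below note) would spare the agent exactly this kind of guessing.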

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description repeats the same sentence verbatim: 'Returns the governor limit for all blockchains. — Returns the governor limit for all blockchains.' This is wasteful. While the return type documentation is valuable, the repetition significantly harms conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that no output schema exists, the description adequately compensates by documenting the return fields (availableNotional, chainId, maxTransactionSize, notionalLimit) and pagination structure. For a list endpoint with optional pagination parameters, this provides sufficient context for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for both parameters ('Page number' and 'Number of elements per page'). The description adds no additional parameter semantics, meeting the baseline expectation of 3 when the schema is fully self-documenting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool 'Returns the governor limit for all blockchains,' providing a specific verb (Returns), resource (governor limit), and scope (all blockchains). This scope distinguishes it from sibling 'by_chain' variants, though it doesn't clarify how it differs from 'governor_notional_limit_detail' or 'governor_notional_available'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the numerous sibling governor endpoints (e.g., governor_notional_limit_detail, governor_notional_available, governor_max_notional_available_by_chain). There are no prerequisites, filtering recommendations, or exclusions stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

governor_notional_limit_detailCInspect

Returns the detailed notional limit for all blockchains. — Returns the detailed notional limit for all blockchains. Returns: { data: { chainId: number, createdAt: string, id: string, maxTransactionSize: number, nodeName: string, notionalLimit: number, ... }[], pagination: { next: string } }.

ParametersJSON Schema
NameRequiredDescriptionDefault
pageNoPage number.
pageSizeNoNumber of elements per page.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries the full burden. It compensates by including the complete JSON return structure (data array with fields like maxTransactionSize, pagination object), which clarifies the response shape. However, it omits safety characteristics (idempotency, rate limits) and auth requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains tautological repetition: 'Returns the detailed notional limit for all blockchains. — Returns the detailed notional limit for all blockchains.' This wastes space without adding value. While the inline return schema is useful, the structure is inefficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the inline documentation of the return structure provides necessary completeness. However, the description lacks domain context (what constitutes a 'notional limit' in the governor system) and operational details (how to traverse pagination using the returned 'next' cursor).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
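The missing pagination guidance noted above matters because the agent must infer the traversal loop itself. A hedged sketch, assuming the `next` field is an opaque cursor that is empty on the last page (the fetcher below is a stand-in, not the real tool transport):

```python
# Cursor-following loop over the documented
# { data: [...], pagination: { next: string } } shape.
def fetch_all(fetch_page):
    """Follow the 'next' cursor until empty, collecting all data rows."""
    results, cursor = [], None
    while True:
        page = fetch_page(cursor)
        results.extend(page["data"])
        cursor = page.get("pagination", {}).get("next")
        if not cursor:  # empty string or missing => last page (assumed)
            return results

# Mock fetcher returning two pages, to exercise the loop.
def mock_fetch(cursor):
    if cursor is None:
        return {"data": [{"chainId": 1}], "pagination": {"next": "p2"}}
    return {"data": [{"chainId": 2}], "pagination": {"next": ""}}

print(len(fetch_all(mock_fetch)))  # -> 2
```

Whether an empty `next` actually signals the final page is exactly the kind of detail the description should state rather than leave to convention.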

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for both 'page' and 'pageSize', the schema already fully documents the parameters. The description adds no additional semantic context (e.g., default page sizes, max limits, or that these are optional), warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it returns 'detailed notional limit for all blockchains', which identifies the resource and scope, but fails to distinguish from the sibling tool 'governor_notional_limit' (without '_detail'). The word 'detailed' is used without explaining what additional fields or granularity it provides compared to the base version.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus 'governor_notional_limit_detail_by_chain' or the base 'governor_notional_limit'. There are no instructions on pagination strategy or how to handle the 'next' token returned in the response.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

governor_notional_limit_detail_by_chainCInspect

Returns the detailed notional limit available for a given blockchain. — Returns the detailed notional limit available for a given blockchain. Returns: { data: { chainId: number, createdAt: string, id: string, maxTransactionSize: number, nodeName: string, notionalLimit: number, ... }[], pagination: { next: string } }.

ParametersJSON Schema
NameRequiredDescriptionDefault
pageNoPage number.
pageSizeNoNumber of elements per page.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries the full burden. It compensates partially by including the JSON return structure (showing data array with chainId, pagination, etc.), which substitutes for the missing output schema. However, it fails to explain the chain filtering mechanism, rate limits, or why the 'by_chain' tool lacks a chain parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description repeats itself verbatim ('Returns... — Returns...'), which is wasteful. While including the return JSON structure is valuable given the lack of output schema, the repetition and cramming of return value documentation into the description field creates poor readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

A critical gap exists between the tool name ('_by_chain') and the actual input parameters (pagination only). The description fails to explain how chain filtering works or whether this endpoint returns all chains. Without annotations or an output schema, the description should have clarified this architectural mismatch.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (page and pageSize both documented), the description meets the baseline expectation. It does not add additional semantic context about parameter usage (e.g., default page sizes, max values) beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it returns 'detailed notional limit' resources, but creates confusion by claiming it works 'for a given blockchain' while the input schema provides no chain identifier parameter (only pagination). The schema thus contradicts both the description and the '_by_chain' suffix in the tool name, leaving it unclear whether this returns one chain or many.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus siblings like 'governor_notional_limit_detail' (without _by_chain), 'governor_notional_limit', or 'governor_notional_available_by_chain'. No prerequisites or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

governor_statusAInspect

Returns the governor status for all guardians. — Returns the governor status for all guardians. Returns: { data: { chains: unknown[], createdAt: string, id: string, nodeName: string, updatedAt: string }[], pagination: { next: string } }.

ParametersJSON Schema
NameRequiredDescriptionDefault
pageNoPage number.
pageSizeNoNumber of elements per page.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It compensates effectively by documenting the complete return structure including the data array fields (chains, nodeName, timestamps) and pagination object with 'next' cursor, which is crucial for the agent to handle the response correctly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains verbatim repetition ('Returns the governor status for all guardians. — Returns the governor status for all guardians'), which violates the 'every sentence earns its place' principle. However, the subsequent return type documentation is valuable and efficiently structured, preventing a lower score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of structured output schema, the description provides essential contextual completeness by detailing the return object shape, including nested data properties and pagination. For a read-only listing tool with two well-documented parameters, this is sufficient for correct invocation and response handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for both parameters (page and pageSize). The description text adds no additional parameter semantics, but per rubric guidelines, the baseline score is 3 when schema coverage is high, as the schema already fully documents the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (returns) and resource (governor status) with explicit scope (for all guardians). This effectively distinguishes it from the sibling tool 'governor_status_by_guardian_address', indicating this tool is for bulk retrieval rather than single-guardian queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'all guardians', suggesting when to use this versus the single-guardian variant. However, it lacks explicit guidance on when not to use it, prerequisites, or direct comparison to sibling alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

governor_status_by_guardian_addressCInspect

Returns the governor status for a given guardian. — Returns the governor status for a given guardian. Returns: { data: { chains: { chainId: unknown, emitters: unknown[], remainingAvailableNotional: number }[], createdAt: string, id: string, nodeName: string, updatedAt: string }, pagination: { next: string } }.

ParametersJSON Schema
NameRequiredDescriptionDefault
pageNoPage number.
pageSizeNoNumber of elements per page.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It partially compensates by disclosing the detailed JSON return structure inline (including chains, emitters, and pagination), which adds transparency about what the tool produces. However, it omits other behavioral traits like caching, authentication requirements, or whether the data is real-time.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description suffers from immediate repetition ('Returns the governor status for a given guardian. — Returns the governor status for a given guardian.'). The inclusion of a raw JSON blob for return values makes the text verbose and poorly structured, rather than using a formal output schema field.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of governor status data and the lack of annotations or formal output schema, the description is incomplete. It documents the return structure but fails to explain how to provide the guardian address (implied by the tool name), what 'remainingAvailableNotional' represents, or how pagination behaves when traversing results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% description coverage for the two pagination parameters (page, pageSize), the description fails to address the critical discrepancy between the tool name (implying a 'guardian_address' parameter) and the actual input schema which lacks this parameter. It mentions 'for a given guardian' without explaining how that guardian is specified.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool returns 'governor status for a given guardian,' identifying the resource and action, but repeats this phrase verbatim. It fails to clearly distinguish this tool from the sibling `governor_status` tool, leaving ambiguity about when to use the '_by_guardian_address' variant.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like `governor_status` or `governor_config_by_guardian_address`. There are no prerequisites mentioned, such as how to obtain a guardian address, and no exclusions or error conditions described.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

governor_vaasCInspect

Returns all vaas in Governor. — Returns all vaas in Governor. Returns: { data: { amount: number, chainId: number, emitterAddress: string, releaseTime: string, sequence: string, status: string, ... }[], pagination: { next: string } }.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the response structure inline (data array with specific fields and pagination object), which compensates somewhat for the lack of output schema. However, it omits behavioral details like rate limits, auth requirements, and semantic meaning of fields like 'status' or 'releaseTime'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Suffers from blatant repetition: 'Returns all vaas in Governor. — Returns all vaas in Governor.' This duplication wastes tokens without adding value. While the return type information is necessary given the lack of an output schema, the redundant opening sentence indicates poor editing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a complex blockchain governance API domain. Lacks explanation of what 'Governor' refers to in this context, how it relates to the broader VAA lifecycle, or why a user would choose this over the 40+ sibling tools. The inline return schema helps, but domain context is missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters, establishing a baseline of 4. The description correctly does not invent parameters, maintaining alignment with the empty schema. No additional parameter context is needed or provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the basic action (Returns) and resource (vaas in Governor), but fails to distinguish from the sibling tool 'find_all_vaas', which creates ambiguity about why two 'get all' endpoints exist. The scope difference (Governor-specific vs general) is implied but not explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like 'find_all_vaas', 'governor_enqueued_vaas', or 'find_vaas_by_chain'. No mention of prerequisites, filtering capabilities, or pagination usage patterns.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

guardians_enqueued_vaasBInspect

Get enqueued VAAs — Get enqueued VAAs Returns: { entries: { emitterAddress: string, emitterChain: number, notionalValue: string, releaseTime: number, sequence: number, txHash: string }[] }.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It compensates by documenting the return structure (entries array with specific fields), but omits critical behavioral context like pagination limits, data freshness, or what 'enqueued' implies about the VAA state.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief but structurally wasteful due to immediate repetition of the action phrase ('Get enqueued VAAs — Get enqueued VAAs'). While front-loaded, the redundant opening reduces clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

In lieu of an output schema, the description documents the return object structure, which is minimally sufficient. However, it lacks domain context (what constitutes an 'enqueued' VAA) and differentiation from related tools in the extensive sibling list.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters. Per evaluation rules, zero-parameter tools receive a baseline score of 4. The description does not need to compensate for missing parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool retrieves 'enqueued VAAs' but opens with a tautology ('Get enqueued VAAs — Get enqueued VAAs'). It fails to distinguish from siblings like `guardians_enqueued_vaas_by_chain` or `governor_enqueued_vaas`, leaving scope ambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the chain-specific variant (`guardians_enqueued_vaas_by_chain`) or the governor equivalent (`governor_enqueued_vaas`). No prerequisites or filtering advice is included.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

guardians_enqueued_vaas_by_chainCInspect

Returns all enqueued VAAs for a given blockchain. — Returns all enqueued VAAs for a given blockchain. Returns: { data: { chainId: number, emitterAddress: string, notionalValue: number, releaseTime: number, sequence: string, txHash: string }[], pagination: { next: string } }.

ParametersJSON Schema
NameRequiredDescriptionDefault
pageNoPage number.
pageSizeNoNumber of elements per page.
sortOrderNoSort results in ascending or descending order.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It partially redeems itself by documenting the complete return structure (data array and pagination object) since no output schema exists. However, it fails to explain what 'enqueued' means, what VAAs are, or operational constraints like rate limiting.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains obvious repetition ('Returns all enqueued VAAs for a given blockchain. — Returns all...') and awkward punctuation (em dash). The inline JSON return structure is useful but poorly formatted. Not every sentence earns its place; one is entirely redundant.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complex domain (Wormhole guardians, VAAs, enqueued messages) and lack of annotations, the description inadequately contextualizes the operation. It documents return values (good) but omits critical usage context: how to specify the chain, how pagination tokens work, and what distinguishes this from sibling query tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema description coverage is 100% (baseline 3), the description undermines the schema by implying a chain parameter exists ('for a given blockchain') when the schema only exposes pagination controls (page, pageSize, sortOrder). This creates parameter confusion without adding semantic clarity beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the core purpose (returning enqueued VAAs) and resource (blockchain-specific), but suffers from tautological repetition ('Returns... — Returns...'). Crucially, the description implies filtering by blockchain, yet the input schema contains no chain parameter, creating ambiguity about how the chain is specified.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus close siblings like 'guardians_enqueued_vaas' or 'governor_enqueued_vaas'. The description does not clarify prerequisites, required context, or selection criteria despite the crowded namespace of similar VAA-related tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

guardian_setCInspect

Get current guardian set. — Get current guardian set. Returns: { guardianSet: { addresses: string[], index: number } }.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It documents the exact return structure ('Returns: { guardianSet: { addresses: string[], index: number } }'), which is valuable since no output_schema exists. However, it omits other behavioral details like rate limiting, caching, or whether this reflects real-time vs. finalized state.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description repeats 'Get current guardian set.' twice consecutively with only an em-dash separator, indicating a template error or sloppy editing. This redundancy consumes tokens without adding value. The return value documentation is useful but awkwardly appended.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a zero-parameter read operation. The inline return structure documentation partially compensates for the missing output_schema. However, given the domain complexity (evident from numerous sibling tools), it should explain what constitutes a 'guardian set' and why an agent would need the index/addresses.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters. Per evaluation rules, 0 params warrants a baseline score of 4. The description does not need to compensate for missing parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the specific action ('Get') and resource ('current guardian set'), but repeats the sentence verbatim with an em-dash, wasting space. It implicitly distinguishes from sibling 'guardians_*' tools (plural) by focusing on the set configuration, but lacks domain context about what a guardian set represents in the Wormhole ecosystem.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus related guardian tools like 'guardians_hearbeats' or 'governor_config_by_guardian_address'. No prerequisites, filtering options, or alternative tools are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

guardians_find_signed_batch_vaaCInspect

get a batch of VAA []byte from a chainID, emitter address and sequence. — get a batch of VAA []byte from a chainID, emitter address and sequence. Returns: { vaaBytes: number[] }.

ParametersJSON Schema
NameRequiredDescriptionDefault
seqYessequence of the VAA
emitterYesaddress of the emitter
chain_idYesid of the blockchain
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description documents the return structure ('Returns: { vaaBytes: number[] }'), which adds necessary context since no formal output schema exists. However, it omits other behavioral details such as error conditions, pagination behavior for batches, or whether this is a read-only operation (though implied by 'find').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains significant redundancy—the exact same phrase appears twice separated by an em-dash. This appears to be a copy-paste error rather than intentional structure. The return value documentation is appended as a separate clause, creating disjointed flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description partially compensates by documenting the return type. However, for a specialized blockchain tool (dealing with VAAs, guardians, and batch retrieval), it lacks domain context explaining what a VAA is, what 'signed' implies, or how batching works.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for chain_id, emitter, and seq. The description mentions these parameters in natural language ('chainID, emitter address and sequence'), confirming the mapping, but does not add format specifications, example values, or semantic constraints beyond what the schema already provides. Baseline 3 is appropriate given the complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the basic purpose (getting a batch of VAA bytes) and identifies the three key inputs, but suffers from verbatim repetition (the sentence appears twice). It fails to differentiate 'batch' retrieval from the sibling tool 'guardians_find_signed_vaa' (singular), leaving ambiguity about when to use this specific endpoint.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'guardians_find_signed_vaa' or 'find_vaa_by_id'. There are no prerequisites mentioned, no explanation of what constitutes a 'batch' versus a single VAA, and no conditions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

guardians_find_signed_vaaCInspect

get a VAA []byte from a chainID, emitter address and sequence. — get a VAA []byte from a chainID, emitter address and sequence. Returns: { vaaBytes: number[] }.

ParametersJSON Schema
NameRequiredDescriptionDefault
seqYessequence of the VAA
emitterYesaddress of the emitter
chain_idYesid of the blockchain
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses the return format ({ vaaBytes: number[] }), which is necessary since no output schema exists. However, it omits error behavior (e.g., VAA not found), authentication requirements, and rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
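The documented `{ vaaBytes: number[] }` shape means a caller receives the signed VAA as an array of byte values rather than an encoded string. A minimal sketch of how a client would presumably repack that array (the base64 step is an assumption; the tool's description does not say how consumers should encode the payload):

```python
import base64

def vaa_bytes_to_b64(vaa_bytes):
    """Pack the documented number[] payload into raw bytes, then base64.

    Assumes each array entry is an integer in 0-255, which is what the
    '[]byte' phrasing in the description suggests but does not state.
    """
    raw = bytes(vaa_bytes)
    return base64.b64encode(raw).decode("ascii")

print(vaa_bytes_to_b64([1, 2, 3]))  # "AQID"
```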

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Suffers from severe redundancy: the exact same sentence is repeated twice with only an em-dash between them. The return value documentation is useful but awkwardly appended. Not every sentence earns its place, given the repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers the essential input parameters and return type for a retrieval operation, which satisfies minimum requirements given the simple schema. However, it lacks domain context (what is a VAA?) and differentiation from the numerous sibling VAA lookup tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (all three parameters have descriptions), establishing baseline 3. The description lists the parameters but adds no additional semantic context (formats, examples, constraints) beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the specific action (get VAA []byte) and required identifiers (chainID, emitter, sequence), but fails to distinguish from siblings like `find_vaa_by_id` or `guardians_find_signed_batch_vaa`. Contains redundant repetition of the same sentence.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives such as `find_vaas_by_emitter` or `guardians_find_signed_batch_vaa`. No mention of prerequisites or error conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

guardians_hearbeatsCInspect

Get heartbeats for guardians — Get heartbeats for guardians Returns: { entries: { p2pNodeAddr: string, rawHeartbeat: { bootTimestamp: unknown, counter: unknown, features: unknown, guardianAddr: unknown, networks: unknown, nodeName: unknown, ... }, verifiedGuardianAddr: string }[] }.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so description carries full burden. It compensates partially by embedding the return structure (entries array with p2pNodeAddr, rawHeartbeat object, verifiedGuardianAddr) since no output schema exists. However, it omits auth requirements, rate limits, caching behavior, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Suffers from immediate redundancy ('Get heartbeats for guardians — Get heartbeats for guardians'). The JSON return structure dump, while informative, is awkwardly inlined rather than being in a structured output schema field, hurting readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool, the description partially compensates for the missing output schema by documenting the return structure. However, it lacks domain context explaining what guardian heartbeats signify (e.g., liveness signals, boot timestamps, network participation) that would help an agent understand when to query this endpoint.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters with 100% coverage trivially satisfied. Baseline score applies as there are no parameters requiring semantic clarification in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the verb 'Get' and resource 'heartbeats for guardians' but commits a tautology by repeating this phrase twice. It fails to distinguish from sibling guardian tools (e.g., guardian_set, guardians_enqueued_vaas) or explain what 'heartbeats' represent in this domain (node health/status).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like guardian_set or find_observations_by_guardian. No prerequisites, filtering options, or conditions are mentioned despite the large number of related guardian query tools available.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

guardians_is_vaa_enqueuedCInspect

Check if vaa is enqueued — Check if vaa is enqueued Returns: { entries: { emitterAddress: string, emitterChain: number, notionalValue: string, releaseTime: number, sequence: number, txHash: string }[] }.

ParametersJSON Schema
NameRequiredDescriptionDefault
seqYessequence of the vaa
emitterYesaddress of the emitter
chain_idYesid of the blockchain
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and explicitly documents the return structure: an array of entries with emitterAddress, emitterChain, notionalValue, releaseTime, sequence, and txHash fields. This compensates for the lack of output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
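Because the tool answers a yes/no question with an entries array rather than a boolean, the caller has to infer "enqueued" from the payload. A hypothetical sketch of that interpretation (the matching rule below is an assumption; the description never states which fields identify the queried VAA):

```python
def is_enqueued(entries, chain_id, emitter, seq):
    """Assumed interpretation: the VAA is enqueued iff some returned entry
    matches the queried (chain, emitter, sequence) triple."""
    return any(
        e["emitterChain"] == chain_id
        and e["emitterAddress"] == emitter
        and e["sequence"] == seq
        for e in entries
    )

# Illustrative entry using the documented field names.
sample = {
    "emitterAddress": "0xabc",
    "emitterChain": 2,
    "notionalValue": "100",
    "releaseTime": 0,
    "sequence": 7,
    "txHash": "0x1",
}
print(is_enqueued([sample], 2, "0xabc", 7))  # True
```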

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Contains wasteful duplication: 'Check if vaa is enqueued — Check if vaa is enqueued'. This repetition suggests poor editing rather than intentional emphasis.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Provides return type structure which helps, but given the complex domain (Wormhole guardian network), it lacks explanation of what 'enqueued' means or what a VAA (Verified Action Approval) represents.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (chain_id, emitter, seq all documented), establishing baseline. The description adds no parameter context, focusing entirely on the return value instead.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the specific action ('Check if vaa is enqueued') and resource, but repeats the same phrase twice (wasteful). It fails to differentiate from siblings like `guardians_enqueued_vaas` (which lists all) versus this tool which checks a specific VAA by ID.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus sibling tools `guardians_enqueued_vaas` or `governor_enqueued_vaas`, or prerequisites like needing specific chain_id/emitter/seq values.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

guardians_token_listBInspect

[DISCOVERY] Get token list Since from the wormhole-explorer point of view it is not a node, but has the information of all nodes, in order to build the endpoints it was assumed: For tokens with the same originChainId and originAddress and different price values for each node, the price that has most occurrences in all the nodes for an originChainId and originAddress is returned. — Get token list Since from the wormhole-explorer point of view it is not a node, but has the information of all nodes, in order to build the endpoints it was assumed: For tokens with the same originChainId and originAddress and d

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses critical behavioral logic: when nodes report conflicting prices, the value with most occurrences across nodes is returned. This is specific behavioral context not inferable from the name. Lacks auth/rate limit details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
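The price-resolution rule the description discloses (per originChainId/originAddress pair, return the price reported by the most nodes) is a simple mode selection. A sketch under that reading, noting that the description specifies no tie-break, so the arbitrary ordering below is an assumption:

```python
from collections import Counter

def resolve_price(node_prices):
    """Return the price reported by the most nodes for one
    (originChainId, originAddress) pair.

    Ties resolve arbitrarily here; the tool description does not
    specify a tie-breaking rule.
    """
    return Counter(node_prices).most_common(1)[0][0]

print(resolve_price([1.0, 1.0, 2.0]))  # 1.0
```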

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Suffers from significant repetition: the text appears duplicated and truncated ('...and d' at the end). The single complex sentence explaining the price aggregation algorithm, while informative, is a run-on and could be structured more clearly. The duplication indicates a clear error that wastes tokens.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Explains the price aggregation algorithm well but lacks description of the return structure (fields, format) despite having no output schema to reference. For a discovery endpoint returning token lists, should describe what constitutes a token entry beyond just the price resolution logic.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters, which per evaluation rules establishes a baseline score of 4. The description does not need to compensate for missing parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States 'Get token list' which identifies the core resource and action, and mentions the wormhole-explorer aggregation context which distinguishes it from single-node queries. However, the description is garbled by apparent duplication and buries the purpose in verbose implementation details, reducing clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The '[DISCOVERY]' prefix hints at categorization but provides no explicit guidance on when to use this versus sibling token tools like 'get_token_by_chain_and_address' or 'api_v1_native_token_transfer_token_list'. No prerequisites or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

health_checkCInspect

Health check — Health check Returns: { status: string }.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but only minimally discloses the return structure ('{ status: string }'). It fails to specify possible status values, what subsystems are checked (DB, external APIs), or whether the operation is safe/idempotent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the text wastes space with redundant repetition ('Health check — Health check'). The em-dash separation is awkward and the structure does not follow standard conventions (no clear verb-object construction).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool without output schema, the description minimally suffices by mentioning the return type. However, it lacks critical context: possible status values, distinction from 'ready_check', and what the health check actually validates (connectivity, service status, etc.).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, establishing a baseline score of 4. The description correctly implies no inputs are needed by focusing solely on the return value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Health check — Health check' is tautological, merely restating the tool name twice with an em-dash. While it mentions the return type afterwards, the core purpose statement adds no value beyond the name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus siblings like 'ready_check' or 'get_version', or what constitutes a healthy vs unhealthy state. No prerequisites or conditions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_transactions (grade C)

[DISCOVERY] Returns transactions. Output is paginated. — Returns transactions. Output is paginated. Returns: { transactions: { emitterAddress: string, emitterChain: number, emitterNativeAddress: string, globalTx: { destinationTx: unknown, id: unknown, originTx: unknown }, id: string, payload: object, ... }[] }.

Parameters (JSON Schema)
| Name | Required | Description | Default |
| page | No | Page number. Starts at 0. | |
| address | No | Filter transactions by Address. | |
| pageSize | No | Number of elements per page. | |
| sortOrder | No | Sort results in ascending or descending order. | |
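
The zero-based paging contract described by these parameters can be sketched locally. In the following sketch, `fetch_page` is a hypothetical stand-in for an actual list_transactions call; it slices a local list instead of hitting the API.

```python
# Sketch of the zero-based paging contract implied by page/pageSize.
# fetch_page is a hypothetical stand-in for a real list_transactions
# call; it slices a local list instead of querying the server.
def fetch_page(data, page, page_size):
    """Return one page of results; page numbering starts at 0."""
    start = page * page_size
    return data[start:start + page_size]

def list_all(data, page_size=50):
    """Drain pages until an empty page signals the end of the data."""
    page, results = 0, []
    while True:
        batch = fetch_page(data, page, page_size)
        if not batch:
            return results
        results.extend(batch)
        page += 1
```

An agent that assumes pages start at 1 silently skips the first pageSize results, which is exactly the kind of pitfall the schema's "Starts at 0" note guards against.
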
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses pagination behavior and embeds the return type structure (detailed JSON shape), which compensates partially for the lack of output schema. However, it omits safety information (read-only status), rate limits, error conditions, and authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains obvious repetition: 'Returns transactions. Output is paginated. — Returns transactions. Output is paginated.' It also awkwardly embeds a raw JSON return structure as text. The '[DISCOVERY]' prefix appears to be metadata pollution. Poor information density due to duplication.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complex blockchain domain (evidenced by siblings involving 'vaa', 'emitter', 'guardian'), the description is insufficient. It fails to explain transaction semantics, emitter address formats, or how pagination behaves (cursor vs offset). The embedded return schema helps but lacks field descriptions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage (page, address, pageSize, sortOrder are all documented). Since the schema is self-explanatory, the baseline score applies. The description adds no additional parameter semantics (e.g., address format, sortOrder values), but none are required given the complete schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Returns transactions' is tautological given the tool name 'list_transactions'. It fails to specify what kind of transactions (Wormhole VAAs/bridge transactions based on sibling tools), lacks a specific verb beyond 'returns', and does not differentiate from siblings like 'get_last_transactions', 'get_transaction_by_id', or 'find_global_transaction_by_id'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the numerous sibling transaction-related tools (e.g., 'get_last_transactions' vs 'find_global_transaction_by_id'). No mention of prerequisites, filtering behavior with the 'address' parameter, or recommended pagination patterns.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

live_tracking_subscribe (grade C)

Subscribe to live tracking events for a VAA or transaction — Subscribe to live tracking events for a VAA or transaction Returns: { data: string, error: string, event: "SOURCE_TX" | "GOVERNOR" | "SIGNED_VAA" | "VAA_REDEEMED" | "END_OF_WORKFLOW" | "ERROR" | "SEARCHING", ... }.

Parameters (JSON Schema)
| Name | Required | Description | Default |
| vaaID | No | VAA ID to track | |
| txHash | No | Transaction hash to track | |
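
Both parameters are optional, yet the tool presumably needs at least one identifier. A client-side guard for that assumed constraint, together with the event vocabulary copied from the quoted Returns clause, could be sketched as:

```python
# Event names copied from the tool's Returns clause.
EVENTS = {"SOURCE_TX", "GOVERNOR", "SIGNED_VAA", "VAA_REDEEMED",
          "END_OF_WORKFLOW", "ERROR", "SEARCHING"}

def build_subscribe_args(vaa_id=None, tx_hash=None):
    """Assemble arguments for live_tracking_subscribe.

    The requirement that at least one identifier be present is an
    assumption drawn from the parameter list, not a documented rule.
    """
    if vaa_id is None and tx_hash is None:
        raise ValueError("provide vaaID or txHash")
    args = {}
    if vaa_id is not None:
        args["vaaID"] = vaa_id
    if tx_hash is not None:
        args["txHash"] = tx_hash
    return args
```
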
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It usefully enumerates the possible event types ('SOURCE_TX', 'GOVERNOR', etc.) in the Returns clause, but omits critical behavioral context such as subscription duration, connection lifecycle, and whether the call blocks or streams.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains redundant repetition ('Subscribe... — Subscribe...') which appears to be a formatting error. While the Returns information is valuable, the redundancy and lack of clear separation between purpose and return value hurt structural efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a subscription mechanism and lack of annotations, the description is minimally adequate. It compensates for missing output schema by listing return fields and event types, but fails to explain the subscription model or parameter selection logic required for successful invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% ('VAA ID to track', 'Transaction hash to track'), establishing a baseline of 3. The description adds no further semantics about parameter relationships (mutual exclusivity) or valid formats, but doesn't need to compensate for schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Subscribe') and resources ('VAA or transaction'), distinguishing it from sibling 'find' and 'get' tools which perform one-time queries. However, it doesn't specify what 'live tracking' entails (e.g., WebSocket, SSE, polling).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this subscription tool versus the many available one-time fetch alternatives (e.g., 'find_vaa_by_id', 'get_transaction_by_id'). It also fails to explain that at least one of the two optional parameters (vaaID or txHash) must be provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

parse_vaa (grade C)

Parse a VAA. — Parse a VAA. Returns: { parsedPayload: string, standardizedProperties: { amount: string, appIds: string[], fee: string, feeAddress: string, feeChain: number, fromAddress: string, ... } }.

Parameters (JSON Schema)

No parameters
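
The Returns clause is the only record of this tool's output shape. Modeling it as typed containers makes the documented fields explicit; the class names are illustrative, and only the fields actually listed in the clause are included.

```python
from dataclasses import dataclass

# Illustrative model of parse_vaa's documented return shape. Field
# names come from the quoted Returns clause; the truncated "..."
# portion of standardizedProperties is deliberately left unmodeled.
@dataclass
class StandardizedProperties:
    amount: str
    appIds: list[str]
    fee: str
    feeAddress: str
    feeChain: int
    fromAddress: str

@dataclass
class ParsedVaa:
    parsedPayload: str
    standardizedProperties: StandardizedProperties
```
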

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It compensates for the lack of structured output schema by detailing the return object structure (nested standardizedProperties). However, it omits behavioral traits like whether this is a pure function, idempotent, or requires specific permissions to execute.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description opens with redundant repetition ('Parse a VAA. — Parse a VAA.') wasting front-loaded space. While the return value documentation is dense and useful, the redundant opening and lack of input explanation result in poor information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having no output schema and no annotations, the description only partially compensates by describing return values. It critically omits how to provide the VAA input (especially puzzling given the empty input schema) and provides no domain context about what a VAA is or when parsing is necessary versus using raw VAAs from find operations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, which per guidelines establishes a baseline score of 4. The description correctly avoids inventing parameters, though it also fails to explain how the VAA input is provided given the empty schema (e.g., via a request body), which would have been helpful context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool parses a VAA and documents the return structure fields (parsedPayload, standardizedProperties with amount, fee, etc.), which clarifies what 'parse' means. However, it tautologically repeats 'Parse a VAA' at the start and fails to distinguish from sibling retrieval tools like find_vaa_by_id or explain what VAA stands for.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the numerous sibling VAA retrieval tools (find_vaa_by_id, find_vaas_by_chain, etc.). It does not indicate whether the VAA should be passed as a request body, retrieved first, or if this operates on already-fetched data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ready_check (grade C)

Ready check — Ready check Returns: { ready: string }.

Parameters (JSON Schema)

No parameters

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but only mentions the return structure syntactically ('{ ready: string }') without explaining possible values, semantics, or whether this is a read-only status check.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the description is wastefully repetitive (restating 'Ready check') and awkwardly appends return value information with poor formatting ('— Ready check Returns:').

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite being a simple endpoint, the description fails to clarify the distinction from health_check or explain what constitutes 'ready' vs not-ready states in this complex blockchain API ecosystem.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters, warranting the baseline score of 4 as no parameter documentation is required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description tautologically restates the tool name ('Ready check — Ready check') without explaining what 'ready' means in this blockchain/Wormhole infrastructure context or what system component is being checked.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus the sibling 'health_check' tool, or what conditions would trigger invoking this endpoint.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_operations (grade C)

Find operations by filters. Currently only txHash is supported. — Find operations by filters. Currently only txHash is supported.

Parameters (JSON Schema)
| Name | Required | Description | Default |
| body | Yes | list of txHash of source tx | |
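
Since the schema describes `body` as a list of source-transaction hashes, a minimal request-body builder might look like the following. The non-empty and string-type guards are assumptions; the description does not say how empty or malformed input is handled.

```python
# Hypothetical builder for search_operations' request body. The
# schema calls body a "list of txHash of source tx"; the non-empty
# and string-type checks are assumptions, not documented behavior.
def build_search_body(tx_hashes):
    if not tx_hashes:
        raise ValueError("at least one txHash is required")
    if not all(isinstance(h, str) for h in tx_hashes):
        raise TypeError("each txHash must be a string")
    return list(tx_hashes)
```
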
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention critical traits: whether the operation is read-only (implied but not stated), pagination behavior, what happens if hashes are invalid/not found, or response format. It only repeats the filtering constraint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Severely flawed structure: the exact same sentence appears twice consecutively with only an em-dash separator. This violates the principle that every sentence must earn its place, wasting context-window tokens and suggesting an auto-generation or concatenation error. The core content would fit efficiently in one sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter search tool with 100% schema coverage. The description covers the essential limitation (txHash-only), though without an output schema, it could have briefly described what an 'operation' represents. The repetition bug suggests incomplete editorial review.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema description coverage ('list of txHash of source tx'), the description adds valuable semantic context by clarifying that txHash is the ONLY currently supported filter. This constraint prevents incorrect assumptions about other filter capabilities and compensates for the schema's neutral description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the core purpose ('Find operations by filters') and identifies the resource, but is marred by a distracting repetition of the entire sentence. The mention of 'txHash' helps distinguish it from sibling tools like get_operation_by_id, though the duplication suggests an error that reduces clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit usage guidance through the constraint 'Currently only txHash is supported,' indicating when to use this tool (when filtering by transaction hash). However, it lacks explicit comparison to alternatives like get_operations or find_observations, and does not state prerequisites or when NOT to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

supply_info (grade B)

Get W token supply data (circulation and total supply). — Get W token supply data (circulation and total supply). Returns: { circulating_supply: string, total_supply: string }.

Parameters (JSON Schema)

No parameters
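
Because the Returns clause delivers both figures as strings, a consumer must convert them before comparing. A minimal sketch, with sample figures invented purely for illustration:

```python
def circulating_ratio(supply):
    """Compute circulating/total from supply_info's documented return
    shape, where both values arrive as numeric strings."""
    circulating = float(supply["circulating_supply"])
    total = float(supply["total_supply"])
    return circulating / total

# Sample figures are invented for illustration only.
sample = {"circulating_supply": "2500000000", "total_supply": "10000000000"}
```
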

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It adds valuable return type documentation ('Returns: { circulating_supply: string, total_supply: string }') since no output schema exists, but fails to disclose other behavioral traits like idempotency, safety, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains significant waste: the exact sentence 'Get W token supply data (circulation and total supply).' is repeated twice with only an em-dash separator. Repetition is the opposite of front-loading; every sentence should earn its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately documents the return structure in the text. With zero parameters and simple read-only semantics (implied by 'Get'), the description provides sufficient context for invocation, though it could explicitly confirm the read-only nature.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters (empty input schema), establishing a baseline of 4. No parameter documentation is required or provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('W token supply data'), specifying both 'circulation and total supply' which distinguishes it from sibling tools like 'circulating_supply' and 'total_supply'. However, the redundant repetition of the first sentence prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus siblings (e.g., 'circulating_supply' or 'total_supply') or alternatives. The description implies you should use this when you need both values, but does not explicitly state when to prefer the individual sibling endpoints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

swagger (grade C)

Returns the swagger specification for this API. — Returns the swagger specification for this API.

Parameters (JSON Schema)

No parameters

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden but only states the return payload type. It omits critical behavioral context: response format (JSON vs YAML), caching behavior, typical response size, or whether this requires authentication.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Severely flawed structure: the exact sentence 'Returns the swagger specification for this API.' is repeated verbatim and joined with an em-dash, creating 100% redundancy without adding value. Every sentence should earn its place; this wastes tokens.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With zero input parameters and no output schema, the description adequately identifies the returned resource. However, given the complexity of the server (60+ sibling endpoints), it should explain the utility—such as using this for schema discovery or to understand available endpoints—rather than merely naming the output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters. Per scoring rules, 0 parameters warrants a baseline score of 4 as there are no semantics to explain.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool returns the Swagger API specification using a specific verb and resource. However, it does not explicitly differentiate from the many data-query siblings (e.g., 'find_address_by_id', 'get_operations') by explaining this is for discovery/metadata rather than data retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to invoke this tool versus the data retrieval siblings. The agent is not told whether to call this before other operations, for debugging, or only when the user explicitly asks for API documentation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

top_symbols_by_volume (grade C)

Returns a list of symbols by origin chain and tokens. The volume is calculated using the notional price of the symbol at the day the VAA was emitted. — Returns a list of symbols by origin chain and tokens. The volume is calculated using the notional price of the symbol at the day the VAA was emitted. Returns: { symbols: { symbol: string, tokens: unknown[], txs: number, volume: number }[] }.

Parameters (JSON Schema)
| Name | Required | Description | Default |
| timeSpan | No | Time span, supported values: 7d, 15d and 30d (default is 7d). | |
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the volume calculation methodology (notional price at VAA date), which is valuable business logic. However, it omits safety declarations (read-only status), pagination behavior, and rate limiting context that would help an agent understand operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Suffers from severe redundancy: the entire descriptive sentence is duplicated verbatim. The inline JSON return specification, while informative, is awkwardly appended and could be better structured. Because of the duplication, not every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read operation, the description covers the output structure and calculation logic adequately. However, given the lack of structured output schema and the presence of domain-specific terms (VAA, notional price), it should clarify what 'VAA' refers to or acknowledge this is Wormhole-specific terminology.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage (timeSpan with enumerated values), establishing a baseline of 3. The description adds no semantic context for the parameter, nor does it explain what 'timeSpan' means in relation to the volume calculation window (e.g., trailing vs historical). It remains neutral by not contradicting the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States it returns symbols ranked by volume and mentions the calculation method (notional price at VAA emission). However, it fails to distinguish from sibling tool 'get_top_assets_by_volume' where the distinction between 'symbols' and 'assets' is unclear, and the phrase 'by origin chain' is ambiguous regarding input vs output grouping.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus the similar 'get_top_assets_by_volume' or other ranking tools. No mention of prerequisites, default behaviors when timeSpan is omitted, or data freshness expectations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

total_supply (grade C)

Get W token total supply. — Get W token total supply.

Parameters (JSON Schema)

No parameters

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions nothing about response format, caching behavior, potential errors, or whether this reflects real-time or cached data. The description only states the action without behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description literally repeats the same sentence ('Get W token total supply') twice with an em-dash separator. This is not conciseness but needless duplication; the second clause adds zero information, so not every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of similar sibling tools (circulating_supply, supply_info) and lack of output schema, the description should explain what differentiates total supply from circulating supply and what format to expect. It provides none of this context, leaving agents to guess the distinctions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. Per evaluation rules, 0 parameters establishes a baseline score of 4. The description neither adds nor subtracts value regarding parameters since none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the specific action (Get) and resource (W token total supply), but suffers from tautological repetition ('— Get W token total supply'), which restates the first clause without adding value. It also fails to distinguish from sibling tools like 'circulating_supply' or 'supply_info'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus similar supply-related siblings (circulating_supply, supply_info). No mention of prerequisites, rate limits, or specific use cases where total supply is preferred over circulating supply.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

x_chain_activity (Grade B)

Returns a list of chain pairs by origin chain and destination chain. The list could be rendered by notional or transaction count. The volume is calculated using the notional price of the symbol at the day the VAA was emitted. — Returns a list of chain pairs by origin chain and destination chain. The list could be rendered by notional or transaction count. The volume is calculated using the notional price of the symbol at the day the VAA was emitted. Returns: { txs: { chain: number, destinations: unknown[], percentage: number, volume: number }[] }.

Parameters (JSON Schema)

Name      Required  Default  Description
by        No        -        Renders the results using notional or tx count (default is notional).
apps      No        -        List of apps separated by comma (default is all apps).
timeSpan  No        -        Time span, supported values: 7d, 30d, 90d, 1y and all-time (default is 7d).
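As an illustration of the documented constraints above, the helper below validates `by`, `apps`, and `timeSpan` client-side and assembles a query string. This is a hedged sketch, not the server's actual client code: the base URL is an assumption, and the accepted `by` values are inferred from the schema's phrase "notional or tx count" (the exact enum strings are not stated on the page).

```python
from urllib.parse import urlencode

# Assumed base URL; the actual host backing this MCP server is not
# stated on the page.
BASE_URL = "https://api.wormholescan.io/api/v1/x-chain-activity"

# Inferred from "notional or tx count" in the schema; exact enum
# strings are an assumption.
VALID_BY = {"notional", "tx"}
VALID_TIMESPANS = {"7d", "30d", "90d", "1y", "all-time"}


def build_x_chain_activity_query(by=None, apps=None, time_span=None):
    """Validate inputs against the documented constraints and build a URL.

    All three parameters are optional, matching the schema; omitted
    values fall back to the server-side defaults (notional, all apps, 7d).
    """
    params = {}
    if by is not None:
        if by not in VALID_BY:
            raise ValueError(f"'by' must be one of {sorted(VALID_BY)}")
        params["by"] = by
    if apps:
        # Schema: "List of apps separated by comma".
        params["apps"] = ",".join(apps)
    if time_span is not None:
        if time_span not in VALID_TIMESPANS:
            raise ValueError(f"'timeSpan' must be one of {sorted(VALID_TIMESPANS)}")
        params["timeSpan"] = time_span
    query = urlencode(params)
    return f"{BASE_URL}?{query}" if query else BASE_URL
```

Calling `build_x_chain_activity_query(by="tx", time_span="30d")` yields a URL with only the explicitly supplied parameters, leaving the rest to the documented defaults.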
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description successfully carries the burden of behavioral disclosure by explaining the volume calculation methodology: 'notional price of the symbol at the day the VAA was emitted.' It also includes the complete return type structure inline (`{ txs: { chain: number... } }`), providing necessary context for interpreting the response shape.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains exact duplication of the primary explanatory sentence ('Returns a list... VAA was emitted.'), separated by an em-dash, which significantly wastes tokens and indicates poor editing. While the return type annotation at the end is valuable, the redundant repetition violates the principle that every sentence must earn its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the lack of structured output schema, the description compensates by including the JSON return structure and explaining the domain-specific volume calculation logic (VAA emission timing). For a data retrieval tool with three optional parameters, this provides sufficient context for successful invocation and result interpretation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all three parameters (`by`, `apps`, `timeSpan`) fully documented in the schema (e.g., 'Renders the results using notional or tx count'). The description references 'notional or transaction count' which aligns with the `by` parameter but adds no additional semantic context or format details beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it returns 'a list of chain pairs by origin chain and destination chain' and mentions the notional/transaction count rendering options. However, it fails to distinguish from siblings like `x_chain_activity_tops` or `get_top_chain_pairs_by_num_transfers`, leaving ambiguity about whether this returns all pairs, aggregated data, or filtered results.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to select this tool versus the numerous related chain-pair and activity endpoints (e.g., `get_top_chain_pairs_by_num_transfers`). The description only explains functional capabilities without stating selection criteria, prerequisites, or when to prefer notional versus transaction count rendering.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

x_chain_activity_tops (Grade C)

Search for a specific period of time the number of transactions and the volume. — Search for a specific period of time the number of transactions and the volume.

Parameters (JSON Schema)

Name         Required  Default  Description
to           Yes       -        To date, supported format 2006-01-02T15:04:05Z07:00
from         Yes       -        From date, supported format 2006-01-02T15:04:05Z07:00
appId        No        -        Search by appId
timespan     Yes       -        Time span, supported values: 1d, 1mo and 1y
sourceChain  No        -        Search by sourceChain
targetChain  No        -        Search by targetChain
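One detail worth unpacking: the date format shown in the schema, `2006-01-02T15:04:05Z07:00`, is Go's reference-time layout, i.e. an RFC 3339 timestamp. The sketch below builds a parameter dict for this tool from Python datetimes under that reading. The interpretation of `timespan` as a bucketing granularity (rather than a query window, which `from`/`to` already define) is an inference, not stated on the page, and the parameter names are taken verbatim from the schema.

```python
from datetime import datetime, timezone

VALID_TOPS_TIMESPANS = {"1d", "1mo", "1y"}


def rfc3339(dt):
    # The schema's "2006-01-02T15:04:05Z07:00" is Go's reference-time
    # layout for RFC 3339; we normalize to UTC with a trailing "Z".
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")


def build_tops_params(frm, to, timespan,
                      app_id=None, source_chain=None, target_chain=None):
    """Build the query parameters for x_chain_activity_tops.

    'from', 'to', and 'timespan' are required per the schema; timespan
    likely selects the aggregation bucket (inference; the page does not
    say). The three filters are optional.
    """
    if timespan not in VALID_TOPS_TIMESPANS:
        raise ValueError(f"'timespan' must be one of {sorted(VALID_TOPS_TIMESPANS)}")
    params = {"from": rfc3339(frm), "to": rfc3339(to), "timespan": timespan}
    if app_id is not None:
        params["appId"] = app_id
    if source_chain is not None:
        params["sourceChain"] = source_chain
    if target_chain is not None:
        params["targetChain"] = target_chain
    return params
```

For example, a one-month window bucketed daily and filtered to a single source chain would pass `timespan="1d"` with `sourceChain` set, omitting the other filters.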
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but only partially discloses behavior. While it mentions returning transaction counts and volume, it omits whether results are aggregated, what 'tops' rankings refer to, pagination behavior, rate limits, or the response format expected from this cross-chain activity query.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description suffers from a severe structural failure: the exact same sentence is repeated with only an em-dash separator. This is not conciseness but redundancy, wasting tokens and creating parsing confusion without adding value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and no annotations, the description remains minimal and fails to explain critical contextual elements like what constitutes a 'top' result, how results are ranked, or what the return payload structure looks like. Inadequate for a 6-parameter aggregation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema itself. The description mentions 'specific period of time' which loosely references the from/to/timespan parameters but adds no semantic detail about the time granularity (1d/1mo/1y) or the purpose of the optional filtering parameters beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the tool searches for transaction counts and volume over time, but is grammatically awkward ('Search for a specific period of time the number...'). Critically, it fails to explain what 'tops' means in the context of the tool name or how it differs from the sibling 'x_chain_activity' tool, leaving the scope ambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus siblings like 'x_chain_activity' or 'get_top_chain_pairs_by_num_transfers'. No mention of prerequisites, required context, or filtering strategies for the optional parameters (appId, sourceChain, targetChain).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
