
Server Details

Search and retrieve Avalanche blockchain documentation for building on AVAX.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Airpote/avalanche-mcp-vscode
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

36 tools
blockchain_get_contract_info (Grade: B)

Check if an address is a contract and get its ERC20 name/symbol if applicable

Parameters (JSON Schema)

Name    | Required | Description | Default
address | Yes | EVM address (0x...) |
chainId | No | Chain ID — "43114" for C-Chain mainnet, "43113" for Fuji testnet | 43114

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully hints that ERC20 data is conditionally returned ('if applicable'), but fails to specify what the tool returns when an address is not a contract, whether this performs an RPC call, or if there are rate limit concerns. It adequately covers the primary function but leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
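To make the behavioral gap concrete: a contract-info check of this kind typically amounts to an eth_getCode call (contract detection) plus two eth_call reads (ERC20 metadata). The sketch below only builds those JSON-RPC payloads locally; it is an assumption about how such a tool works, not the server's published implementation. The selectors are the standard ERC20 name()/symbol() ones.

```python
import json

# Standard ERC20 function selectors (first 4 bytes of keccak256 of the signature).
NAME_SELECTOR = "0x06fdde03"    # name()
SYMBOL_SELECTOR = "0x95d89b41"  # symbol()

def build_contract_info_requests(address: str) -> list[dict]:
    """Build the JSON-RPC requests such a check would send to a C-Chain
    endpoint (e.g. https://api.avax.network/ext/bc/C/rpc for chain ID 43114)."""
    return [
        # eth_getCode returns "0x" for an externally owned account,
        # i.e. "not a contract".
        {"jsonrpc": "2.0", "id": 1, "method": "eth_getCode",
         "params": [address, "latest"]},
        {"jsonrpc": "2.0", "id": 2, "method": "eth_call",
         "params": [{"to": address, "data": NAME_SELECTOR}, "latest"]},
        {"jsonrpc": "2.0", "id": 3, "method": "eth_call",
         "params": [{"to": address, "data": SYMBOL_SELECTOR}, "latest"]},
    ]

batch = build_contract_info_requests("0x0000000000000000000000000000000000000000")
print(json.dumps(batch[0], indent=2))
```

So even a "simple" inspect tool implies at least one RPC round trip per call, which is exactly the kind of behavior the description leaves undisclosed.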

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no redundant words. It front-loads the action ('Check') and immediately follows with the value proposition. It loses a point for being slightly too terse for the tool's complexity—additional detail on returns or contract detection logic would improve utility without sacrificing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of both annotations and an output schema, the description should ideally specify the return structure (e.g., boolean for is_contract, nullable fields for ERC20 data). It mentions the ERC20 fields but omits coverage of the contract detection boolean and fails to describe error cases or null returns, leaving the agent to infer behavior from the tool name alone.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
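A return shape of the kind the review asks for might look like the following. This is purely hypothetical, since the tool publishes no output schema; the field names are illustrative.

```python
from typing import Optional, TypedDict

class ContractInfo(TypedDict):
    """Hypothetical result shape: a contract-detection flag plus
    nullable ERC20 metadata fields."""
    is_contract: bool
    name: Optional[str]    # None when the address is not an ERC20 token
    symbol: Optional[str]

# An externally owned account would then come back as:
eoa: ContractInfo = {"is_contract": False, "name": None, "symbol": None}
```

Documenting even this much in the description would let an agent handle the non-contract case without guessing.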

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description contextualizes the 'address' parameter as a potential contract address to be checked, which adds minor semantic value beyond the schema's generic 'EVM address' label. However, it does not elaborate on valid formats, checksum requirements, or the significance of the chainId default value beyond what the schema already states.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Check', 'get') and clearly identifies the target resource (contract address, ERC20 name/symbol). It distinguishes this from sibling tools like `blockchain_get_native_balance` by specifying contract detection and ERC20 metadata retrieval. However, 'if applicable' leaves ambiguity about the return structure when the address is not a contract or not an ERC20 token.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives (e.g., when to call this instead of `blockchain_lookup_address`), nor does it mention prerequisites or conditions that would trigger the ERC20 path. The phrase 'if applicable' hints at conditional behavior but does not guide selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

blockchain_get_native_balance (Grade: A)

Get the native AVAX balance of an address on C-Chain

Parameters (JSON Schema)

Name    | Required | Description | Default
address | Yes | EVM address (0x...) |
chainId | No | Chain ID — "43114" for C-Chain mainnet, "43113" for Fuji testnet | 43114

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses the read operation but omits behavioral details like return value format, units (wei vs AVAX), error handling for invalid addresses, or mutation safety.
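The wei-versus-AVAX ambiguity matters in practice: a native-balance read like this typically goes through eth_getBalance, which returns a hex-encoded wei value, and AVAX on the C-Chain uses 18 decimals like ETH. A minimal conversion sketch under that assumption:

```python
def wei_to_avax(hex_balance: str) -> float:
    """Convert a hex wei string (as returned by eth_getBalance) to AVAX.
    Assumes the C-Chain's 18-decimal convention."""
    return int(hex_balance, 16) / 10**18

print(wei_to_avax("0xde0b6b3a7640000"))  # 1.0 (exactly 10**18 wei)
```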

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Nine words, single sentence. Every term earns its place: verb ('Get'), asset ('native AVAX balance'), target ('address'), and network constraint ('C-Chain'). No redundancy or wasted phrases.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple 2-parameter balance query with full schema coverage. Specifies the blockchain context (C-Chain) necessary to distinguish from platform operations. Lacks output description, but none is required given no output schema exists.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both address and chainId fully documented). Description aligns with schema by mentioning C-Chain (matching the default chainId value) but adds minimal semantic meaning beyond what the schema already provides.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb 'Get', resource 'native AVAX balance', and scope 'on C-Chain'. The C-Chain specification crucially distinguishes it from sibling tool platform_get_balance (likely P-Chain), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Specifies 'C-Chain' which implicitly scopes usage to the EVM-compatible chain, helping differentiate from platform_get_balance. However, lacks explicit when-to-use guidance or mention of the alternative for other chains.

blockchain_lookup_address (Grade: A)

Look up an address — balance, contract info. Use when users paste an 0x address.

Parameters (JSON Schema)

Name    | Required | Description | Default
address | Yes | The address to look up (0x format) |
chainId | No | Chain ID — "43114" for C-Chain mainnet, "43113" for Fuji testnet | 43114

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Mentions output data types (balance, contract info) but omits safety profile (read-only vs write), error conditions, rate limits, or output format details. 'Look up' implies read-only but this is not explicit.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence establishes purpose and return value; second provides usage trigger. Appropriately front-loaded and sized.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple 2-parameter lookup tool. Roughly describes return value (balance, contract info) compensating for lack of output schema. Minor gap in explaining relationship to more specific sibling tools.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with clear parameter descriptions. Description reinforces the 0x format requirement and implies the address comes from user input, but adds minimal semantic value beyond what the schema already provides.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (look up) and target (address) and mentions returned data (balance, contract info). However, fails to differentiate from siblings `blockchain_get_contract_info` and `blockchain_get_native_balance` which appear to offer more specific subsets of this functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides a clear trigger condition ('Use when users paste an 0x address') but lacks guidance on when to prefer this over specialized siblings or when not to use it (e.g., when only balance or only contract info is needed).

blockchain_lookup_chain (Grade: B)

Look up a blockchain by its ID — name, VM type, and subnet.

Parameters (JSON Schema)

Name    | Required | Description | Default
chainId | Yes | The blockchain ID |
network | No  | | mainnet

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description carries full burden. It adds value by specifying return fields (name, VM type, subnet) not evident in the input schema. However, it omits network default behavior (mainnet), error handling, and whether lookup is case-sensitive for chain IDs.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, 9 words. Front-loaded with action and resource. Em-dash efficiently signals return payload structure without verbosity. Zero waste.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple lookup tool: mentions return fields since no output schema exists. However, incomplete due to missing network parameter documentation and lack of error behavior description given zero annotation coverage.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 50% (chainId described, network not). Description mentions 'by its ID' covering chainId, but completely omits the network parameter despite its behavioral significance (mainnet vs fuji). Should compensate for schema gaps but fails to document the enum parameter.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Look up' with specific resource 'blockchain'. The em-dash listing return fields (name, VM type, subnet) helps distinguish from sibling lookup tools like blockchain_lookup_address or blockchain_lookup_transaction. Could better clarify distinction from platform_get_blockchains (list vs. single item lookup).

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this versus platform_get_blockchains (which lists all blockchains) or when lookup by ID is preferred. No mention of error conditions (e.g., invalid chain ID) or prerequisite steps.

blockchain_lookup_subnet (Grade: A)

Look up a Subnet / L1 by its ID — validators, chains, and configuration.

Parameters (JSON Schema)

Name     | Required | Description | Default
network  | No  | | mainnet
subnetId | Yes | The Subnet ID |

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description carries full disclosure burden. It compensates well by previewing the returned data scope (validators, chains, configuration), helping agents know what to expect without an output schema. Could improve with mutation/safety notes (though 'look up' implies read-only).

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficient single-sentence structure with em-dash pattern: action clause + return value hint. Front-loaded with verb, no waste. Appropriate density for the complexity level.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a 2-parameter lookup tool. Since no output schema exists, previewing return contents (validators, chains, configuration) is essential context that the description provides. Only gap is network parameter context.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 50% (subnetId described, network not). Description mentions 'by its ID' reinforcing the required parameter, but does not compensate for the undocumented 'network' parameter (mainnet/fuji enum) despite having space to do so.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Look up') and resource ('Subnet / L1'). The em-dash clarification showing it returns 'validators, chains, and configuration' distinguishes it from simple listing tools like platform_get_subnets. Minor gap: could explicitly contrast with the sibling list tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage via 'by its ID' (suggesting specific lookup vs enumeration), but lacks explicit when-to-use guidance relative to platform_get_subnets or blockchain_lookup_chain. No prerequisites or error conditions mentioned.

blockchain_lookup_transaction (Grade: A)

Look up a transaction by hash on Avalanche (C-Chain, P-Chain, or X-Chain). Supports 0x format (C-Chain) and CB58 format (P/X-Chain).

Parameters (JSON Schema)

Name    | Required | Description | Default
txHash  | Yes | Transaction hash (0x... for C-Chain, CB58 for P/X-Chain) |
network | No  | Network to search | mainnet

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With zero annotations, description carries full burden. Adds chain-specific format validation behavior (0x for C-Chain, CB58 for P/X-Chain). However, fails to confirm read-only nature (implied by 'look up' but not explicit), missing return structure, error behavior, or rate limits given no output schema.
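The format distinction the description relies on can be sketched as a simple classifier. This is a heuristic under stated assumptions, not the server's actual validation logic, and it skips verifying the 4-byte checksum CB58 appends:

```python
import re

# C-Chain (EVM) tx hashes: 0x followed by exactly 64 hex characters.
_EVM_HASH = re.compile(r"^0x[0-9a-fA-F]{64}$")
# CB58 uses the Bitcoin base58 alphabet (no 0, O, I, or l).
_CB58_CHARS = re.compile(r"^[1-9A-HJ-NP-Za-km-z]+$")

def classify_tx_hash(tx_hash: str) -> str:
    """Heuristic chain routing for a transaction identifier."""
    if _EVM_HASH.match(tx_hash):
        return "c-chain"
    if _CB58_CHARS.match(tx_hash):
        return "p-or-x-chain"
    return "unknown"

print(classify_tx_hash("0x" + "ab" * 32))  # c-chain
```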

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence establishes core operation and scope; second sentence delivers critical chain-specific formatting requirements. Front-loaded with essential distinguishing information.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 2-parameter lookup tool with simple schema. Lacks return value description which would help given no output schema exists, but covers essential input requirements and chain variations.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with txHash and network fully documented. Description reinforces format expectations (0x/CB58) but largely repeats schema content rather than adding syntax details or validation rules beyond parameter descriptions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Look up' with clear resource 'transaction by hash'. Explicitly scopes to Avalanche and distinguishes sub-chains (C/P/X), differentiating from siblings like blockchain_lookup_address, blockchain_lookup_validator, and platform_get_tx.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage via hash format guidance (0x vs CB58), but lacks explicit when-to-use versus alternatives like platform_get_tx or when searching by address instead of hash. No prerequisites or exclusions stated.

blockchain_lookup_validator (Grade: A)

Look up a validator by node ID — stake, uptime, delegation info.

Parameters (JSON Schema)

Name     | Required | Description | Default
nodeId   | Yes | Node ID (e.g. NodeID-...) |
network  | No  | | mainnet
subnetId | No  | Subnet ID (default: Primary Network) | 11111111111111111111111111111111LpoYY

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adds useful context about what data is returned (stake, uptime, delegation info), but omits safety profile (read-only implied but not stated), error handling for non-existent nodeIDs, or rate limiting concerns.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence front-loaded with action and resource. Em-dash append efficiently communicates return payload without verbosity. Zero waste.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 3 simple parameters (1 required) and no output schema, description adequately covers primary functionality and return data categories. Only minor gap is lack of error-handling mention (e.g., 'returns null if not found').

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 67% (nodeId and subnetId have descriptions; network lacks description). Description adds no explicit parameter guidance, so baseline 3 applies for adequate schema coverage without additional semantic context.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Look up') + resource ('validator') with specific scope ('by node ID'). Lists specific data returned (stake, uptime, delegation). Distinguishes from sibling transaction/address lookups by specifying 'validator', though doesn't explicitly differentiate from platform_get_current_validators list operations.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies single-record lookup pattern via 'by node ID', contrasting with bulk validators endpoints. However, lacks explicit when-to-use guidance versus platform_get_current_validators or handling for missing validators.

docs_fetch (Grade: B)

Fetch a specific documentation page as markdown

Parameters (JSON Schema)

Name | Required | Description | Default
url  | Yes | The page URL path (e.g., /docs/primary-network/overview) |

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so description carries full burden. It successfully discloses the return format (markdown) which is critical given no output schema. However, it omits other behavioral traits: no mention of network failure handling, caching, authentication requirements, or validation of the URL parameter.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundant words. Front-loaded with active verb ('Fetch'). Every word earns its place: specifies resource type ('documentation page'), specificity ('specific'), and output format ('markdown').

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (1 parameter, 100% schema coverage, no nested objects), the description is adequate. It compensates for the missing output schema by stating the markdown format. Could be improved by mentioning error cases (e.g., 404 for invalid URLs).

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage with a clear example path, establishing baseline 3. The description adds implicit context that the URL should point to documentation pages, but provides no additional semantic details about the parameter format, validation rules, or constraints beyond what the schema already provides.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a clear verb+resource ('Fetch a specific documentation page') and output format ('as markdown'). Implicitly distinguishes from sibling tools like docs_search and docs_list_sections by emphasizing 'specific' page, but does not explicitly name siblings or contrast use cases.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus docs_search (for queries) or docs_list_sections (for browsing). No mention of prerequisites or error conditions.

docs_list_sections (Grade: B)

List available documentation sections and their page counts

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, placing full disclosure burden on the description. It mentions 'page counts' indicating what data is returned, but fails to disclose read-only safety, return format structure, pagination behavior, or rate limits. The phrase 'List' implies read-only behavior, but explicit confirmation is absent.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single sentence that is front-loaded with the action verb. It efficiently conveys the tool's function without redundancy or unnecessary elaboration.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the tool has zero parameters (low complexity), there is no output schema provided. The description mentions 'page counts' but does not describe the return structure (e.g., array of objects with section names and counts) or reference related tools like docs_fetch for retrieving actual content. Given the sibling tool richness, this feels incomplete.
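The return structure the review has in mind would presumably be something like the following. The field names and values are hypothetical, since no output schema is published:

```python
from typing import TypedDict

class DocsSection(TypedDict):
    """Hypothetical entry shape: a section name plus its page count."""
    section: str
    page_count: int

# Illustrative values only — not real data from the server.
sections: list[DocsSection] = [
    {"section": "primary-network", "page_count": 12},
    {"section": "subnets", "page_count": 8},
]
```

Stating even this rough shape in the description would tell an agent what to pass to docs_fetch next.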

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. Per the rules, 0 parameters establishes a baseline score of 4. The description appropriately does not invent parameter semantics where none exist.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('List') and identifies the resource ('documentation sections') along with a specific attribute ('page counts'). While it clearly distinguishes itself as an indexing/overview tool compared to the sibling 'docs_fetch' and 'docs_search' tools, it does not explicitly name those siblings or clarify when to choose this over them.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus its documentation-related siblings (docs_fetch, docs_search). It does not mention prerequisites, typical use cases (e.g., browsing before fetching), or conditions where this would be preferred over querying specific content.

github_get_file (Grade: B)

Retrieve the contents of a specific file from an Avalanche GitHub repository.

Parameters (JSON Schema)

Name | Required | Description | Default
ref  | No  | The git ref (branch, tag, or commit SHA) to fetch from. Defaults to "HEAD". |
path | Yes | The path to the file within the repository. |
repo | Yes | The repository to fetch the file from (owner is always ava-labs). |

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It states the core action (retrieve contents) but omits key behavioral details: whether content is returned raw or decoded, how binary files are handled, 404 error behavior, and rate limit implications. 'Avalanche GitHub repository' provides domain context but not operational characteristics.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence delivers specific verb (retrieve), object (file contents), and constraint (Avalanche repos). No wasted words, front-loaded with the essential action. Appropriate length for a simple 3-parameter retrieval tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and excellent schema coverage, the description is minimally sufficient. However, lacking both annotations and output schema, it should ideally indicate return format (string vs object) or error behavior. Sibling differentiation from github_search_code is implied ('specific file' vs search) but not explicit.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for all three parameters (ref, path, repo). The description adds domain framing ('Avalanche GitHub repository') which contextualizes the repo enum values (avalanchego, icm-services, builders-hub), but the schema already constrains these explicitly. Baseline 3 appropriate when schema does heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb+resource: 'Retrieve the contents of a specific file'. Scope constraint 'Avalanche GitHub repository' distinguishes from generic GitHub tools and blockchain lookup siblings. Distinguishes from github_search_code via 'specific file' (direct retrieval vs search), though explicit contrast would strengthen further.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to prefer this over github_search_code (use when path known vs discovery) or docs_fetch (raw GitHub content vs processed docs). No prerequisites or alternative recommendations mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
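The behavioral gap about raw vs decoded content is concrete: GitHub's REST contents endpoint, which a tool like this would most plausibly wrap, returns file bodies base64-encoded. A sketch of the request URL and the decoding step, assuming the standard api.github.com endpoint (how this server actually fetches files is not shown in the listing):

```python
import base64

# Contents endpoint for a file in an ava-labs repository (GitHub REST API).
def contents_url(repo: str, path: str, ref: str = "HEAD") -> str:
    return f"https://api.github.com/repos/ava-labs/{repo}/contents/{path}?ref={ref}"

url = contents_url("avalanchego", "README.md", ref="master")

# The JSON response carries the file body as base64; callers must decode it.
def decode_content(response_json: dict) -> str:
    return base64.b64decode(response_json["content"]).decode("utf-8")

# Simulated response fragment, standing in for a live API call.
sample = {"content": base64.b64encode(b"# AvalancheGo\n").decode(), "encoding": "base64"}
text = decode_content(sample)
```

Whether the MCP server decodes before returning, or passes the base64 payload through, is exactly the kind of detail the description leaves unstated.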

github_search_code (Grade: A)

Search for code across Avalanche GitHub repositories (avalanchego, icm-services, builders-hub).

Parameters (JSON Schema)
  repo (optional): The repository to search in. Defaults to "all".
  query (required): The search query string to find relevant code.
  language (optional): Filter results by programming language. Defaults to "any".

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry full behavioral disclosure. It specifies which repositories are searched but fails to disclose result format (code snippets vs. file paths), pagination behavior, rate limits, or whether the search supports GitHub-specific query syntax. This leaves significant behavioral gaps.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. Front-loaded with the action verb, specific scope in parentheses efficiently lists the repositories. Every word earns its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich input schema (100% coverage), the description appropriately delegates parameter details to the schema. However, with no output schema provided, the description should ideally disclose what the search returns (file references, snippets, etc.). As-is, it minimally meets requirements but leaves output behavior unexplained.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description redundantly lists the repository enum values (avalanchego, icm-services, builders-hub), which confirms scope but doesn't add syntax details, query format guidance, or parameter interdependencies beyond the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Search'), clear resource ('code'), and explicit scope ('across Avalanche GitHub repositories' naming the three specific repos). It clearly distinguishes from sibling github_get_file (which retrieves specific files) and from the many blockchain query tools.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description defines the scope by listing target repositories, implying the tool's domain. However, it lacks explicit guidance on when to use this versus github_get_file or docs_search (e.g., 'use this to find code patterns, use github_get_file to read specific files').
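If the tool wraps GitHub's code search API (an assumption; the listing does not say), the repo and language parameters would most naturally map to search qualifiers appended to the query string. A sketch of that mapping:

```python
from urllib.parse import urlencode

def search_code_url(query: str, repo: str = "all", language: str = "any") -> str:
    """Build a GitHub code-search URL with repo/language qualifiers."""
    q = query
    if repo != "all":
        q += f" repo:ava-labs/{repo}"   # GitHub search qualifier syntax
    if language != "any":
        q += f" language:{language}"
    return "https://api.github.com/search/code?" + urlencode({"q": q})

url = search_code_url("ProposerVM", repo="avalanchego", language="go")
```

This is one reason the Behavior score flags GitHub-specific query syntax: whether qualifiers in the user's query string are honored or escaped changes what results come back.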

info_acps (Grade: B)

Get information about Avalanche Community Proposals (ACPs), including their status and vote counts.

Parameters (JSON Schema)
  network (optional): The Avalanche network to query (default: mainnet)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses what data is returned (status, vote counts) but lacks critical behavioral context: no mention of caching, real-time vs archived data, pagination for large result sets, or authentication requirements.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. Front-loaded with clear verb and resource identification. Every clause earns its place by identifying both the domain (ACPs) and specific data fields returned.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter tool, reasonably complete. Mentions key return values (status, vote counts) compensating for lack of output schema. Would benefit from noting whether this retrieves current open proposals or historical archived ones.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (network parameter fully documented). Description does not add parameter-specific semantics, but with complete schema coverage, baseline 3 is appropriate. Description focuses on output rather than input semantics.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear resource identification (Avalanche Community Proposals) and specific data points returned (status, vote counts). Verb phrase 'Get information' is slightly generic but sufficient. Distinguishes from blockchain/validator siblings by specifying governance proposal domain.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus other info tools or platform tools. No mention of prerequisites like needing to know proposal IDs, or whether this lists all ACPs vs requiring specific queries.

info_get_blockchain_id (Grade: B)

Get the CB58-encoded blockchain ID for a given blockchain alias (e.g., "X", "P", "C").

Parameters (JSON Schema)
  alias (required): The blockchain alias (e.g., "X", "P", "C", or a full blockchain name)
  network (optional): The Avalanche network to query (default: mainnet)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries the full disclosure burden. It explains the output encoding (CB58), which is useful behavioral context not in the schema. However, it omits error handling (what happens if the alias is invalid?), rate limits, or side effects.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single dense sentence with zero waste. Front-loaded action verb, parenthetical examples placed appropriately, no filler words. Every token earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a simple 2-parameter lookup tool. Explains what is returned (CB58-encoded ID). Given no output schema, it could mention the error response format, but this is acceptable for a simple read-only utility function.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage documenting both 'alias' and 'network' parameters. Description reinforces the 'alias' concept but adds no semantic depth beyond schema (e.g., doesn't explain that network defaults to mainnet, though schema does). Baseline 3 appropriate.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb 'Get' and resource 'blockchain ID' with encoding format 'CB58'. Examples ('X', 'P', 'C') clarify expected input format. However, lacks explicit differentiation from sibling tools like 'blockchain_lookup_chain' or 'platform_get_blockchains'.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides input examples but contains no explicit when-to-use guidance, prerequisites, or alternatives. Does not clarify why one should use this Info API tool versus Platform API or Blockchain API siblings for chain-related queries.
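For context on what a call like this looks like at the node level: AvalancheGo exposes info.getBlockchainID on the /ext/info endpoint as a JSON-RPC 2.0 method taking an alias. A sketch of the request body a wrapper like this would presumably send (the MCP server's own implementation is not visible in the listing):

```python
import json

INFO_URL = "https://api.avax.network/ext/info"  # public mainnet Info API endpoint

def blockchain_id_request(alias: str) -> str:
    """JSON-RPC body for AvalancheGo's info.getBlockchainID."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "info.getBlockchainID",
        "params": {"alias": alias},
    })

body = blockchain_id_request("X")
# A successful response carries the CB58-encoded ID in result.blockchainID.
```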

info_get_network_id (Grade: A)

Get the numeric ID of the Avalanche network this node is participating in.

Parameters (JSON Schema)
  network (optional): The Avalanche network to query (default: mainnet)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses return type (numeric ID) and implies read-only operation, but lacks annotations to confirm safety. Does not clarify relationship between 'this node' phrasing and the network parameter, nor what numeric ID values look like.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence front-loaded with the action. No redundant words or unnecessary explanation given the straightforward nature of the tool.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple getter with one optional parameter and full schema coverage. No output schema exists but description hints at return type (numeric ID), which is sufficient for this complexity level.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with clear enum values and default, so baseline 3 is appropriate. Description adds no parameter-specific guidance but doesn't need to compensate given complete schema documentation.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' + resource 'numeric ID of the Avalanche network', and distinguishes from sibling info_get_network_name by specifying the numeric variant. Clear scope despite 'this node' phrasing.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus info_get_network_name or other info_* tools. No mention of prerequisites or when the optional parameter should be provided.
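The 'numeric ID' the description leaves unspecified is well defined at the node level: AvalancheGo's info.getNetworkID returns the network's numeric identifier, 1 for mainnet and 5 for the Fuji testnet. A sketch of the request body and of interpreting the result:

```python
import json

# JSON-RPC body for info.getNetworkID (the method takes no arguments).
request_body = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "info.getNetworkID",
    "params": {},
})

# Well-known Avalanche network IDs.
NETWORK_NAMES = {1: "mainnet", 5: "fuji"}

def network_name(network_id: int) -> str:
    """Map a numeric network ID to its conventional name."""
    return NETWORK_NAMES.get(network_id, "custom")

name = network_name(1)
```

A single sentence in the tool description stating these two well-known values would resolve the "what numeric ID values look like" gap noted above.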

info_get_network_name (Grade: A)

Get the human-readable name of the Avalanche network this node is participating in.

Parameters (JSON Schema)
  network (optional): The Avalanche network to query (default: mainnet)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It clarifies the return value is a 'human-readable name' (string), distinguishing from numeric identifiers. However, lacks explicit read-only declaration, error conditions, or confirmation this queries the local node despite 'this node' implying locality.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single 12-word sentence with active verb front-loaded. Zero redundancy or filler. Every word earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple getter with one optional parameter. Clarifies return value nature (human-readable). Minor gap: could specify return type explicitly or error behavior, but sufficient for tool selection given schema completeness.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with the 'network' parameter fully documented (enum values, default). Description mentions no parameters, but with complete schema coverage and only one optional parameter, the baseline score of 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with clear resource 'human-readable name' and scope 'Avalanche network this node is participating in.' Critically, it distinguishes from sibling tool info_get_network_id by emphasizing 'human-readable name' (implying the sibling returns a numeric ID).

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus info_get_network_id (the numeric alternative) or other network info tools. No mention of prerequisites or typical use cases.

info_get_node_version (Grade: A)

Get the version of the node, including the database version, git commit, and API compatibility info.

Parameters (JSON Schema)
  network (optional): The Avalanche network to query (default: mainnet)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It successfully discloses what data is returned (database version, git commit, API compatibility info), compensating for the missing output schema. However, it omits safety indicators (read-only vs destructive), authentication requirements, or performance characteristics that annotations would typically cover.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single efficient sentence with zero waste. It is appropriately front-loaded with the main action ('Get the version of the node') followed by specific details of what's included. It avoids repetition of schema elements while remaining informative.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (1 optional parameter) and high schema coverage, the description appropriately focuses on compensating for the missing output schema by detailing the return payload components. It provides sufficient context for an agent to understand what information will be retrieved without being verbose.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'network' parameter, the baseline is appropriately set at 3. The description adds no specific parameter guidance beyond what's in the schema, but it doesn't need to given the comprehensive schema documentation including the enum values and default.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Get' and resource 'version of the node', and details what the version includes (database version, git commit, API compatibility). While it implicitly distinguishes from network-level siblings like info_get_network_name by specifying 'node', it lacks explicit differentiation from similar info_* tools.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied by the stated purpose: use this when you need node version information including build details. However, there are no explicit when/when-not guidelines, no mention of prerequisites, and no comparison to related tools like info_get_network_id that might be confused with this one.

info_get_tx_fee (Grade: B)

Get the current transaction fees for the network, returned in both nAVAX and AVAX.

Parameters (JSON Schema)
  network (optional): The Avalanche network to query (default: mainnet)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description carries full burden. It discloses the return format (dual units: nAVAX and AVAX) which is valuable behavioral context. However, omits other behavioral traits like whether fees are dynamic/static, caching behavior, or rate limit implications of querying current fees.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with action verb. Every element earns its place: 'current' implies time-sensitive data, 'nAVAX and AVAX' clarifies return units without output schema. No redundancy or waste.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a simple getter with one optional parameter. Effectively substitutes for missing output schema by specifying return units. Would benefit from noting this retrieves base transaction fee rates for the Avalanche network, but sufficient given the schema completeness.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (network parameter fully documented in schema with enum and description). Description mentions 'the network' implying the parameter but adds no semantic detail beyond what the schema provides. Baseline 3 is appropriate when schema documentation is complete.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb (Get) and resource (transaction fees) with scope (current). Mentions specific return formats (nAVAX/AVAX) which implicitly identifies this as Avalanche-specific. Distinguishes from siblings like info_get_network_id or platform_get_tx by domain, though lacks explicit comparative guidance.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus other fee-related tools or network info tools. No mention of prerequisites (e.g., needing to know which network to query) or typical workflow (e.g., checking fees before submitting transactions).
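The dual-unit return the description promises rests on a fixed conversion: 1 AVAX = 10^9 nAVAX. A sketch of how a client might normalize a fee quoted by the node in nAVAX (the field names on a real info.getTxFee response vary by node version, so none are assumed here):

```python
from decimal import Decimal

NAVAX_PER_AVAX = Decimal(10) ** 9  # 1 AVAX = 1,000,000,000 nAVAX

def navax_to_avax(navax: int) -> Decimal:
    """Convert a fee quoted in nAVAX to AVAX without float rounding."""
    return Decimal(navax) / NAVAX_PER_AVAX

fee_avax = navax_to_avax(1_000_000)  # a 0.001 AVAX fee
```

Decimal is used deliberately: binary floats cannot represent most nAVAX amounts exactly, which matters when fees are compared or summed.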

info_is_bootstrapped (Grade: B)

Check whether a given chain has finished bootstrapping on the node.

Parameters (JSON Schema)
  chain (required): The chain ID or alias to check (e.g., "X", "P", "C")
  network (optional): The Avalanche network to query (default: mainnet)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It fails to disclose what 'bootstrapping' entails (initial sync process), whether this is a lightweight check or expensive operation, what return type to expect (boolean), or error conditions if the chain doesn't exist.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundancy, front-loaded with the action verb and object. Every word earns its place; the structure is appropriate for a simple boolean check tool.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With only two simple parameters and no output schema, the description adequately covers the core concept but lacks completeness regarding the significance of bootstrap status in the Avalanche context or hints about the return value structure.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with clear descriptions for both 'chain' (including examples) and 'network' (with enum values). The description does not add parameter-specific semantics, but with complete schema coverage, this meets the baseline expectation.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Check') with a clear resource ('whether a given chain has finished bootstrapping'), and distinguishes itself from siblings by focusing specifically on bootstrap completion status rather than general chain info or node metadata.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states what the tool does but provides no guidance on when to use it versus alternatives like info_get_network_id or platform_get_blockchains, nor does it mention prerequisites such as requiring an active node connection.
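The boolean return type the description omits is fixed by the underlying API: AvalancheGo's info.isBootstrapped takes a chain alias or ID and answers with a result.isBootstrapped boolean. A sketch of the request body a wrapper like this would presumably send:

```python
import json

def is_bootstrapped_request(chain: str) -> str:
    """JSON-RPC body for AvalancheGo's info.isBootstrapped."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "info.isBootstrapped",
        "params": {"chain": chain},
    })

body = is_bootstrapped_request("P")
# result.isBootstrapped is true once the chain has finished its initial sync.
```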

info_peers (Grade: A)

Get a list of peers this node is connected to. Optionally filter by specific NodeIDs.

Parameters (JSON Schema)
  network (optional): The Avalanche network to query (default: mainnet)
  nodeIDs (optional): Optional list of NodeIDs to filter peers by (e.g., ["NodeID-AbC..."]). If omitted, all peers are returned.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It explains the optional filtering behavior but omits confirmation that this is read-only (though implied by 'Get'), details about what peer data is returned, or whether this queries the local node or requires network hops.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero waste. First sentence establishes core purpose, second clarifies optional capability. Every word earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given only 2 optional parameters with 100% schema coverage and no output schema, the description adequately covers the tool's purpose. Could improve by mentioning the read-only nature or data freshness, but sufficient for a simple list retrieval operation.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with detailed field descriptions. The description adds minimal semantic value beyond the schema, only reinforcing that nodeIDs filtering is optional, which aligns with the baseline score for high-coverage schemas.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with clear resource 'peers this node is connected to'. The term 'peers' clearly distinguishes this from sibling tools like info_get_network_id or info_get_tx_fee which retrieve different network metadata.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Mentions 'Optionally filter' indicating the filter parameter is not required, but lacks explicit guidance on when to use this versus other network inspection tools or performance considerations for querying all peers.
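The optional filtering described here matches the shape of AvalancheGo's info.peers method, which accepts a nodeIDs array and returns every connected peer when the array is empty. A sketch of the two request variants (the NodeID value below is a hypothetical placeholder):

```python
import json

def peers_request(node_ids=None) -> str:
    """JSON-RPC body for info.peers; an empty nodeIDs list returns all peers."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "info.peers",
        "params": {"nodeIDs": node_ids or []},
    })

all_peers = peers_request()                    # no filter: every connected peer
filtered = peers_request(["NodeID-AbC"])       # hypothetical NodeID, for shape only
```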

platform_get_balance (grade: B)

Get the AVAX balance of one or more P-Chain addresses

Parameters (JSON Schema)
- network (optional): Avalanche network to query (default: mainnet)
- addresses (required): List of P-Chain addresses to query (e.g. P-avax1...)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. While 'Get' implies a read operation, the description fails to confirm read-only safety, idempotency, rate limits, or what the return value represents (e.g., atomic units vs AVAX), leaving significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with zero waste. It immediately identifies the operation, asset, and target without preamble or redundant phrases, achieving optimal information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple two-parameter read operation with complete schema coverage, the description is minimally adequate. However, given the absence of annotations and output schema, it should disclose behavioral characteristics (read-only safety, return format) to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds minimal semantic value beyond the schema—mentioning 'AVAX' and reinforcing the array nature with 'one or more'—but does not explain parameter interactions or provide examples beyond what the schema already includes.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('AVAX balance') and specifies the scope ('one or more P-Chain addresses'). The 'P-Chain' qualifier effectively distinguishes this from sibling tools like blockchain_get_native_balance, though it doesn't explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the 'P-Chain' qualifier, signaling this is for Platform Chain addresses specifically. However, it lacks explicit when-to-use guidance or prerequisites (e.g., address format requirements) that would help an agent choose between this and other balance lookup tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
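The units gap flagged under Behavior is concrete: on the public Avalanche API, platform.getBalance returns amounts in nAVAX, not AVAX. A hedged sketch of the request and the conversion — the endpoint hosts and method name are assumed from the public P-Chain API, not from this server's code:

```python
# Assumption: the tool wraps the P-Chain API at /ext/bc/P on these hosts
P_CHAIN_URLS = {
    "mainnet": "https://api.avax.network/ext/bc/P",
    "fuji": "https://api.avax-test.network/ext/bc/P",
}

def balance_request(addresses):
    """JSON-RPC body for platform.getBalance over one or more P-Chain addresses."""
    return {"jsonrpc": "2.0", "id": 1,
            "method": "platform.getBalance",
            "params": {"addresses": addresses}}

def navax_to_avax(navax):
    """Balance fields come back as nAVAX strings; 1 AVAX = 1_000_000_000 nAVAX."""
    return int(navax) / 1_000_000_000
```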

platform_get_block (grade: C)

Get a P-Chain block by its block ID

Parameters (JSON Schema)
- blockID (required): The CB58-encoded block ID
- network (optional): Avalanche network to query (default: mainnet)
- encoding (optional): Encoding format for the block (default: json)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full behavioral burden. It discloses only the basic read operation but fails to mention error conditions (e.g., invalid block ID), return value structure, rate limits, or authentication requirements. The 'Get' implies read-only but doesn't confirm safety or idempotency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient eight-word sentence with the verb front-loaded. No words are wasted, though the brevity comes at the cost of omitting valuable behavioral and contextual information. It is appropriately sized for what it contains.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple parameter structure (3 primitives, no nesting), 100% schema coverage, and lack of output schema, the description provides the minimum viable context to identify the tool's function. However, for a blockchain query tool with specific encoding options and network variants, it lacks explanation of return formats and domain-specific context that would aid an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents all three parameters (blockID, network, encoding) including their formats and enums. The description mentions 'block ID' which maps to the parameter, but adds no additional semantic value beyond what the schema provides. Baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('P-Chain block'), and includes 'by its block ID' which distinguishes it from the sibling tool platform_get_block_by_height. The 'P-Chain' specificity also differentiates it from general blockchain tools. However, it could explicitly name the alternative lookup method.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives like platform_get_block_by_height. While 'by block ID' implies usage when an ID is known, there are no explicit when/when-not statements or prerequisites mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
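The ID-based lookup the review contrasts with height-based lookup can be sketched as a request builder. Method name and parameter keys are assumed from the public platform.getBlock API:

```python
def block_request(block_id, encoding="json"):
    """JSON-RPC body for fetching one P-Chain block by its CB58-encoded ID."""
    return {"jsonrpc": "2.0", "id": 1,
            "method": "platform.getBlock",
            "params": {"blockID": block_id, "encoding": encoding}}
```

The `encoding` default mirrors the tool schema; switching to "hex" would return the raw serialized block instead of a JSON object.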

platform_get_block_by_height (grade: A)

Get a P-Chain block by its height

Parameters (JSON Schema)
- height (required): The block height as a string
- network (optional): Avalanche network to query (default: mainnet)
- encoding (optional): Encoding format for the block (default: json)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully identifies the domain ('P-Chain') but omits operational details like error handling (e.g., invalid height), return value structure, or rate limiting considerations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the action verb, zero redundancy. Every word earns its place for a simple lookup operation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a read-only lookup tool with three simple parameters and complete schema documentation. Specifies the blockchain domain (P-Chain) and lookup method; absence of output schema description is acceptable given the intuitive 'get' naming convention.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description references 'its height,' aligning with the required 'height' parameter, but adds no additional syntax or usage context beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Get'), specific resource ('P-Chain block'), and explicit scoping ('by its height') that distinguishes it from sibling 'platform_get_block' (which presumably retrieves by hash/ID).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implicitly signals when to use by specifying 'by its height,' indicating this is appropriate when the caller knows the block height versus other identifiers. However, it doesn't explicitly name the alternative (platform_get_block) or state when NOT to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
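One wrinkle worth noting: the tool schema takes height as a string, while the public platform.getBlockByHeight method documents a numeric height. A defensive sketch — the coercion is an assumption about how the server might reconcile the two:

```python
def block_by_height_request(height, encoding="json"):
    """JSON-RPC body for platform.getBlockByHeight; coerces string heights to int."""
    return {"jsonrpc": "2.0", "id": 1,
            "method": "platform.getBlockByHeight",
            "params": {"height": int(height), "encoding": encoding}}
```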

platform_get_blockchains (grade: C)

Get all blockchains that exist on the P-Chain

Parameters (JSON Schema)
- network (optional): Avalanche network to query (default: mainnet)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, but it fails to mention read-only safety, return format, potential data volume, pagination behavior, or what specific blockchain metadata is retrieved.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficient and front-loaded with the action verb. No words are wasted, though given the lack of annotations, an additional sentence describing behavioral traits or usage context would have been appropriate.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with one optional parameter and no output schema, the description minimally suffices by identifying the resource and location (P-Chain). However, it lacks essential behavioral context that would help an agent understand the response structure or data payload.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds no parameter-specific context beyond what the schema already documents for the 'network' field.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Get' with the resource 'all blockchains' and locates them 'on the P-Chain', providing clear scope. However, it does not explicitly differentiate from sibling tools like `blockchain_lookup_chain` or `platform_get_subnets`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as `blockchain_lookup_chain` (which implies lookup by identifier) or when querying specific subnets. There are no stated prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

platform_get_current_supply (grade: B)

Get the current total supply of AVAX on a subnet

Parameters (JSON Schema)
- network (optional): Avalanche network to query (default: mainnet)
- subnetID (optional): The subnet ID to query current supply for (default: Primary Network)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. 'Get' implies read-only but doesn't confirm safety, caching behavior, rate limits, or whether supply is returned in base units or AVAX denomination. No mention of error conditions for invalid subnetIDs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with action verb. Zero redundancy. Every word earns its place—specifies operation, asset (AVAX), scope (subnet), and metric (total supply).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple 2-parameter read operation with 100% schema coverage. Description explains the core purpose sufficiently given the lack of output schema, though could note the return format (e.g., numeric string).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with complete descriptions for both network (enum) and subnetID parameters. Description mentions 'subnet' in the text which aligns with the subnetID parameter, but adds no semantic detail beyond what the schema already provides. Baseline 3 appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' with clear resource 'current total supply of AVAX on a subnet'. Specifies AVAX asset and subnet scope, distinguishing from siblings like platform_get_total_stake or platform_get_balance. Loses point for not explicitly differentiating 'supply' vs 'stake' concepts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives (e.g., platform_get_total_stake or blockchain_get_native_balance), and no mention of prerequisites such as the subnetID format or the implications of querying mainnet versus Fuji.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
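The base-units ambiguity flagged under Behavior has a concrete answer on the public API: platform.getCurrentSupply returns an nAVAX string. A hedged sketch (the `supply` result field name is assumed from public docs):

```python
def current_supply_request(subnet_id=None):
    """JSON-RPC body for platform.getCurrentSupply.
    Omitting subnetID targets the Primary Network, matching the tool default."""
    params = {"subnetID": subnet_id} if subnet_id else {}
    return {"jsonrpc": "2.0", "id": 1,
            "method": "platform.getCurrentSupply", "params": params}

def supply_in_avax(result):
    """Convert the assumed 'supply' field from an nAVAX string to AVAX."""
    return int(result["supply"]) / 1_000_000_000
```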

platform_get_current_validators (grade: C)

Get the current validators of a subnet

Parameters (JSON Schema)
- network (optional): Avalanche network to query (default: mainnet)
- nodeIDs (optional): Optional list of node IDs to filter by
- subnetID (optional): The subnet ID to query validators for (default: Primary Network)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden but reveals nothing about return values, pagination behavior, rate limits, or whether this is read-only (implied but not explicit). It does not explain what constitutes 'current' in the context of blockchain finality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the action and object. Every word earns its place; however, given the tool's complexity (3 parameters, no output schema, similar siblings), the description may be too terse to be truly 'appropriately sized' for the agent's needs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Incomplete for a query tool with no output schema or annotations. Fails to indicate return structure, filtering capabilities (nodeIDs), or the significance of the 'current' temporal state. Does not leverage the full information available in the parameter schema to guide the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions 'subnet' which aligns with the subnetID parameter, but adds no semantic clarifications beyond the schema's existing descriptions for network, nodeIDs, or subnetID defaults.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a specific action (Get) and resource (current validators of a subnet), clarifying scope. However, it fails to explicitly differentiate from temporal siblings like 'platform_get_pending_validators' or 'platform_get_validators_at', though the word 'current' implies active status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like 'platform_get_pending_validators' or 'platform_get_validators_at', nor does it mention prerequisites such as knowing the subnet ID or network selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
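The filtering capability the Completeness note says the description fails to surface is simple to sketch. Method name and parameter keys are assumed from the public platform.getCurrentValidators API:

```python
def current_validators_request(subnet_id=None, node_ids=None):
    """JSON-RPC body for platform.getCurrentValidators; both filters are optional.
    Omitted keys fall back to the Primary Network / all validators."""
    params = {}
    if subnet_id:
        params["subnetID"] = subnet_id
    if node_ids:
        params["nodeIDs"] = node_ids
    return {"jsonrpc": "2.0", "id": 1,
            "method": "platform.getCurrentValidators", "params": params}
```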

platform_get_height (grade: B)

Get the current P-Chain block height

Parameters (JSON Schema)
- network (optional): Avalanche network to query (default: mainnet)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. The verb 'Get' implies a read-only, safe operation, but the description omits details about the return format, rate limiting, or whether the height value is cached versus real-time.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is front-loaded with the action and contains zero redundant words. Every term ('current', 'P-Chain', 'block height') serves a distinct semantic purpose, though extreme brevity sacrifices behavioral context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read operation with complete schema documentation, the description adequately identifies the operation's target. However, lacking an output schema, it should ideally characterize the return value (integer vs object) to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the 'network' parameter, the baseline score applies. The tool description adds no parameter guidance, but the schema sufficiently documents the optional enum values and defaults without requiring additional elaboration.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a clear verb ('Get') and specific resource ('P-Chain block height'). The inclusion of 'current' effectively distinguishes this from the sibling tool platform_get_block_by_height, though it could more explicitly contrast their use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While mentioning 'P-Chain' provides domain context, the description offers no explicit guidance on when to use this tool versus platform_get_block_by_height or other platform getters, nor does it mention prerequisites or typical query patterns.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
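Since `network` is this tool's only parameter, the interesting behavior is endpoint selection. A sketch of how the enum plausibly maps to the two public API hosts (the mapping is an assumption; the server could equally use its own RPC nodes):

```python
# Assumption: 'network' selects between the two public Avalanche API hosts
ENDPOINTS = {
    "mainnet": "https://api.avax.network/ext/bc/P",
    "fuji": "https://api.avax-test.network/ext/bc/P",
}

def height_request(network="mainnet"):
    """Return (url, body) for a platform.getHeight call on the chosen network."""
    body = {"jsonrpc": "2.0", "id": 1, "method": "platform.getHeight", "params": {}}
    return ENDPOINTS[network], body
```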

platform_get_min_stake (grade: B)

Get the minimum staking amounts for validators and delegators on a subnet

Parameters (JSON Schema)
- network (optional): Avalanche network to query (default: mainnet)
- subnetID (optional): The subnet ID to query minimum stake for (default: Primary Network)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It implies read-only behavior via 'Get' and signals that multiple values are returned ('amounts' plural for both validators and delegators). However, it lacks disclosure of return format, units (e.g., nAVAX), or whether the values are static constants or dynamic parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, well-formed sentence with the action verb front-loaded. No redundant words or tautology. However, the sentence is minimal and could benefit from a second sentence clarifying the return value without sacrificing concision.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter query tool with simple string types, the description covers the query intent adequately. However, lacking an output schema, the omission of any description of the return structure (e.g., 'returns an object with validator and delegator minimums') leaves a notable gap in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. The description mentions 'on a subnet' which implicitly reinforces the subnetID parameter, but adds no further semantic context about the network parameter or the relationship between the two parameters that isn't already clear from the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the specific resource (minimum staking amounts) and scope (validators and delegators on a subnet) using the specific verb 'Get'. However, it does not explicitly differentiate from similar sibling tools like platform_get_total_stake or platform_get_staking_asset_id.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus other stake-querying tools, no mention of prerequisites (e.g., knowing the subnetID), and no warnings about when it might be inappropriate to call.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
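The return structure the Completeness note wishes for ("an object with validator and delegator minimums") can be sketched. The `minValidatorStake` / `minDelegatorStake` field names and nAVAX denomination are assumptions from the public platform.getMinStake docs:

```python
def min_stake_request(subnet_id=None):
    """JSON-RPC body for platform.getMinStake; defaults to the Primary Network."""
    params = {"subnetID": subnet_id} if subnet_id else {}
    return {"jsonrpc": "2.0", "id": 1,
            "method": "platform.getMinStake", "params": params}

def parse_min_stake(result):
    """Convert the assumed nAVAX string fields to AVAX."""
    return {
        "validator_avax": int(result["minValidatorStake"]) / 1_000_000_000,
        "delegator_avax": int(result["minDelegatorStake"]) / 1_000_000_000,
    }
```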

platform_get_pending_validators (grade: A)

Get the pending validators of a subnet (validators not yet validating)

Parameters (JSON Schema)
- network (optional): Avalanche network to query (default: mainnet)
- nodeIDs (optional): Optional list of node IDs to filter by
- subnetID (optional): The subnet ID to query pending validators for (default: Primary Network)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the key behavioral trait (pending = not yet validating) but omits other important aspects: read-only nature, error conditions if subnet doesn't exist, return value structure, or rate limits. Captures minimum viable behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of 11 words with front-loaded verb. Zero redundancy. The parenthetical efficiently clarifies domain terminology without bloating the text. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a read-only query with well-documented parameters, but gaps remain: no output schema exists yet the description doesn't summarize return values, and the critical distinction from 'platform_get_current_validators' is left implicit rather than explicit. Minimum viable completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (all 3 parameters documented). The description mentions 'subnet' which loosely references the subnetID parameter, but adds no semantic information beyond what the schema already provides. Baseline 3 is appropriate when schema carries the full documentation load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') + resource ('pending validators') + scope ('of a subnet'). The parenthetical '(validators not yet validating)' precisely distinguishes this from the sibling tool 'platform_get_current_validators' by defining the domain-specific 'pending' state.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by defining what 'pending' means (not yet validating), which hints at when to use this vs current validators. However, it lacks explicit guidance such as 'Use this to check upcoming validators; for active validators, use platform_get_current_validators' or prerequisites like needing a valid subnetID.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
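The pending/current split the review keeps returning to comes down to a single method name on the underlying API. A sketch making that explicit (method names are from the public platform API; the shared-builder shape is a hypothetical illustration, not this server's design):

```python
METHODS = {
    "current": "platform.getCurrentValidators",   # actively validating
    "pending": "platform.getPendingValidators",   # not yet validating
}

def validators_request(state="pending", subnet_id=None, node_ids=None):
    """One builder for both validator queries; only the method name differs."""
    params = {}
    if subnet_id:
        params["subnetID"] = subnet_id
    if node_ids:
        params["nodeIDs"] = node_ids
    return {"jsonrpc": "2.0", "id": 1, "method": METHODS[state], "params": params}
```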

platform_get_staking_asset_id (grade: B)

Get the asset ID of the token used for staking on a subnet

Parameters (JSON Schema)
- network (optional): Avalanche network to query (default: mainnet)
- subnetID (optional): The subnet ID to query the staking asset for (default: Primary Network)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get' implies a read-only operation, it fails to confirm this explicitly, describe the return format (e.g., CB58-encoded string), mention potential errors (e.g., invalid subnet), or indicate if the call is idempotent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the action. There is no redundant information or unnecessary verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While adequate for a simple lookup tool with well-documented parameters, the description lacks behavioral context and return value details that would be necessary given the absence of annotations and output schema. It meets minimum viability but has clear gaps for a blockchain query tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description mentions 'subnet' which aligns with the subnetID parameter, but adds no semantic context beyond the schema (e.g., explaining that 'Primary Network' is the default subnet for the main chain, or defining valid network values).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a clear verb ('Get') and specific resource ('asset ID of the token used for staking'), along with scope ('on a subnet'). It implicitly distinguishes from sibling staking tools (e.g., platform_get_total_stake) by specifying 'asset ID' rather than amounts, though it does not explicitly compare against alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., platform_get_total_stake), nor does it mention prerequisites like needing a valid subnet ID beforehand, or that it queries the P-Chain.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
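For orientation, the P-Chain method this tool most likely wraps is `platform.getStakingAssetID`. Below is a minimal sketch of the JSON-RPC body; the helper name, endpoint URLs, and the omit-for-default behavior are assumptions drawn from Avalanche's public API layout, not verified against this server:

```python
import json

def staking_asset_id_payload(subnet_id=None):
    """Build the JSON-RPC 2.0 body for platform.getStakingAssetID.

    Omitting subnetID queries the Primary Network, matching the
    tool's documented default (an assumption, not verified here).
    """
    params = {} if subnet_id is None else {"subnetID": subnet_id}
    return {"jsonrpc": "2.0", "id": 1,
            "method": "platform.getStakingAssetID", "params": params}

# POST the body to https://api.avax.network/ext/bc/P (mainnet)
# or https://api.avax-test.network/ext/bc/P (Fuji).
print(json.dumps(staking_asset_id_payload()))
```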

platform_get_subnets (Grade C)

Get information about subnets on the P-Chain

Parameters (JSON Schema)
Name | Required | Description | Default
ids | No | Optional list of subnet IDs to filter by |
network | No | Avalanche network to query (default: mainnet) |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. While 'Get' implies read-only and 'P-Chain' specifies the target network, the description fails to disclose what specific subnet information is returned, pagination behavior, or whether the call is idempotent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no wasted words. Front-loaded with verb and resource. However, extreme brevity comes at the cost of omitting behavioral details and return value descriptions that would aid agent decision-making.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Simple 2-parameter tool with complete schema coverage, but no output schema exists. The description fails to compensate by describing what subnet information is returned (e.g., validators, staking requirements, ownership), leaving agents unaware of the tool's utility without invoking it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both 'ids' and 'network' are well-documented in the schema), establishing baseline 3. The description mentions 'subnets' generally but adds no specific syntax guidance, validation rules, or format details beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb (Get) and resource (subnets) with specific scope (P-Chain). However, it does not differentiate from sibling tool 'blockchain_lookup_subnet' or explain why one would use this specific platform tool versus the blockchain lookup alternative.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like 'blockchain_lookup_subnet' or 'platform_get_blockchains'. Does not indicate whether 'ids' filter is required or what happens when omitted.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
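The likely backing call here is `platform.getSubnets`. A sketch of the request body, assuming an empty `ids` list returns all subnets (consistent with the tool's optional filter, but unverified):

```python
import json

def get_subnets_payload(ids=None):
    """JSON-RPC body for platform.getSubnets.

    Passing no ids asks for all subnets; a non-empty list filters
    the response (behavior assumed from the optional filter param).
    """
    return {"jsonrpc": "2.0", "id": 1,
            "method": "platform.getSubnets",
            "params": {"ids": ids if ids is not None else []}}

# Filter to one subnet ID (placeholder value for illustration).
print(json.dumps(get_subnets_payload(
    ["WPYdyPaAv67sDQ1ihsmaUys9ncfe5MPvKXDDSMSrJsqN7GvYU"])))
```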

platform_get_total_stake (Grade C)

Get the total amount staked on a subnet

Parameters (JSON Schema)
Name | Required | Description | Default
network | No | Avalanche network to query (default: mainnet) |
subnetID | No | The subnet ID to query total stake for (default: Primary Network) |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. It fails to indicate this is a read-only query (though implied by 'Get'), does not describe the return value format (nAVAX or AVAX? string or number?), and omits error conditions for invalid subnet IDs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single-sentence description is front-loaded and contains no filler words. However, its extreme brevity (8 words) sacrifices necessary context about return types and usage patterns, making it efficient but under-informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a two-parameter query tool with simple scalars, the description covers the basic intent but leaves gaps regarding the output format (critical given no output schema exists) and the specific aggregation methodology (active validators only? pending?). It meets minimum viability but lacks richness expected for blockchain state queries.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both parameters (network and subnetID). The description adds no additional parameter semantics (e.g., explaining that subnetID defaults to Primary Network), meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') with a clear resource ('total amount staked') and scope ('on a subnet'). It distinguishes from siblings like platform_get_balance (account-level) and platform_get_min_stake (requirement thresholds), though it could clarify whether this includes validator stake, delegations, or both.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., platform_get_current_validators for validator details) or prerequisites (e.g., needing a valid subnetID from platform_get_subnets). It lacks explicit selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
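The underlying call is presumably `platform.getTotalStake`. A sketch of the request body plus a unit conversion, assuming the node reports stake as a string of nAVAX (1 AVAX = 10^9 nAVAX) -- both the denomination and the helper names are assumptions worth checking against a live response:

```python
import json

NANO_AVAX_PER_AVAX = 10**9  # P-Chain amounts assumed to be in nAVAX

def total_stake_payload(subnet_id=None):
    """JSON-RPC body for platform.getTotalStake (read-only query)."""
    params = {} if subnet_id is None else {"subnetID": subnet_id}
    return {"jsonrpc": "2.0", "id": 1,
            "method": "platform.getTotalStake", "params": params}

def navax_to_avax(amount_str):
    """Convert a stringified nAVAX amount to AVAX."""
    return int(amount_str) / NANO_AVAX_PER_AVAX

print(json.dumps(total_stake_payload()))
print(navax_to_avax("250000000000000000"))  # 250 million AVAX
```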

platform_get_tx (Grade C)

Get a P-Chain transaction by its transaction ID

Parameters (JSON Schema)
Name | Required | Description | Default
txID | Yes | The CB58-encoded transaction ID |
network | No | Avalanche network to query (default: mainnet) |
encoding | No | Encoding format for the transaction (default: json) |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Only states it 'gets' the transaction with no disclosure of return format, error behavior (e.g., 'not found' cases), rate limits, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no wasted words and front-loaded information. However, extreme brevity limits utility—appropriate length for the content provided, but content itself is minimal.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Retrieval tool with no output schema and no annotations requires behavioral description to be complete. Description lacks return value expectations, error handling, or payload format details needed for agent to handle response correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description mentions 'transaction ID' corresponding to txID parameter, but adds no semantic context beyond what schema already provides for network/encoding enums or CB58 format details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific resource (P-Chain transaction) and lookup method (by transaction ID), with 'P-Chain' helping distinguish from C-Chain tools. However, fails to differentiate from similar siblings like `platform_get_tx_status` or `blockchain_lookup_transaction`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this versus similar transaction lookup tools (e.g., `platform_get_tx_status` for status checks vs full transaction data). No prerequisites or error conditions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
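The tool presumably wraps `platform.getTx`. A sketch of the request body; the 'json'/'hex' encoding values are an assumption based on the Avalanche API reference, and the txID below is a placeholder, not a real transaction:

```python
import json

def get_tx_payload(tx_id, encoding="json"):
    """JSON-RPC body for platform.getTx.

    encoding is assumed to accept 'json' or 'hex'; 'json' matches
    the tool's documented default.
    """
    return {"jsonrpc": "2.0", "id": 1,
            "method": "platform.getTx",
            "params": {"txID": tx_id, "encoding": encoding}}

# Placeholder CB58 ID used purely for illustration.
print(json.dumps(get_tx_payload("11111111111111111111111111111111LpoYY")))
```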

platform_get_tx_status (Grade C)

Get the status of a P-Chain transaction

Parameters (JSON Schema)
Name | Required | Description | Default
txID | Yes | The CB58-encoded transaction ID |
network | No | Avalanche network to query (default: mainnet) |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to specify what status values are returned (e.g., 'Committed', 'Pending', 'Unknown'), error handling behavior for invalid txIDs, or whether results are cached.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise single sentence with no redundancy or filler. However, front-loading the critical distinction between status retrieval and full transaction retrieval would improve utility without significantly increasing length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple two-parameter query tool, but lacks essential output specification given no output_schema exists. Should enumerate possible status return values or response structure to be complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with clear documentation for 'txID' (CB58-encoded) and 'network' (mainnet/fuji). The tool description adds no additional parameter semantics, but baseline 3 is appropriate given the schema comprehensively documents both fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a clear verb ('Get') and specific resource ('status of a P-Chain transaction'). However, it fails to distinguish from sibling tool 'platform_get_tx' which likely retrieves the full transaction details rather than just the status, leaving ambiguity about which to use.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like 'platform_get_tx' or 'blockchain_lookup_transaction'. No prerequisites or conditions are mentioned, forcing the agent to guess based on naming conventions alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
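The backing call is presumably `platform.getTxStatus`. A sketch of the request body plus a terminal-state check; the status set below is an assumption based on the Avalanche API reference and should be verified against a live node:

```python
import json

# Status values the P-Chain is expected to return (assumed set).
KNOWN_STATUSES = {"Committed", "Pending", "Dropped", "Unknown"}

def get_tx_status_payload(tx_id):
    """JSON-RPC body for platform.getTxStatus (read-only)."""
    return {"jsonrpc": "2.0", "id": 1,
            "method": "platform.getTxStatus",
            "params": {"txID": tx_id}}

def is_final(status):
    # Under this assumption, only Committed and Dropped are terminal;
    # Pending and Unknown warrant re-polling.
    return status in {"Committed", "Dropped"}

print(json.dumps(get_tx_status_payload("placeholderTxID")))
```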

platform_get_utxos (Grade C)

Get UTXOs that reference a given set of P-Chain addresses

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Maximum number of UTXOs to return |
network | No | Avalanche network to query (default: mainnet) |
addresses | Yes | List of P-Chain addresses to get UTXOs for |
sourceChain | No | If fetching atomic UTXOs, the chain they were exported from |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, yet description fails to disclose read-only nature, return format/structure, or error conditions (e.g., invalid P-Chain addresses). The 'reference' relationship is mentioned but behavioral traits like pagination defaults are absent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single 10-word sentence is efficiently front-loaded with the action verb. No redundancy, though brevity crosses into underspecification for a 4-parameter blockchain query tool with behavioral complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks output schema and fails to describe return value structure (UTXO format), pagination behavior, or P-Chain specific context. With 4 parameters and no annotations, the description should explain UTXO concepts and atomic transaction workflows.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with complete parameter descriptions. The description adds minimal semantic value beyond the schema—only the 'reference' relationship is added context, while parameter specifics (enum values, atomic UTXO conditions) remain in schema only.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Uses specific verb 'Get' with resource 'UTXOs' and specifies 'P-Chain' context, distinguishing from C-Chain operations in sibling tools like blockchain_get_native_balance. However, it doesn't explain what UTXOs represent or why an agent needs them versus simply getting balances.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus platform_get_balance or when the sourceChain parameter is required. No mention of atomic UTXO use cases or pagination strategies with the limit parameter.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
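The likely backing call is `platform.getUTXOs`. A sketch of the request body showing how the optional parameters compose; the claim that sourceChain is only needed for atomic UTXOs is taken from the parameter docs above, and the address below is a placeholder:

```python
import json

def get_utxos_payload(addresses, limit=None, source_chain=None):
    """JSON-RPC body for platform.getUTXOs.

    sourceChain is only included when fetching atomic UTXOs exported
    from another chain (e.g. 'X' or 'C'); limit caps the page size.
    """
    params = {"addresses": addresses}
    if limit is not None:
        params["limit"] = limit
    if source_chain is not None:
        params["sourceChain"] = source_chain
    return {"jsonrpc": "2.0", "id": 1,
            "method": "platform.getUTXOs", "params": params}

print(json.dumps(get_utxos_payload(["P-avax1placeholder"], limit=100)))
```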

platform_get_validators_at (Grade A)

Get the validators and their weights of a subnet at a given P-Chain height

Parameters (JSON Schema)
Name | Required | Description | Default
height | Yes | The P-Chain height to query validators at, or "proposed" |
network | No | Avalanche network to query (default: mainnet) |
subnetID | No | The subnet ID to query validators for (default: Primary Network) |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description carries full burden. Adds valuable return context by mentioning 'weights' (not obvious from name alone), but omits safety profile (read-only status), error behaviors, or pagination details for potentially large validator lists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. Every term ('validators', 'weights', 'subnet', 'P-Chain height') conveys specific semantic value appropriate to the tool's scope. Front-loaded and appropriately sized for a 3-parameter query operation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a query tool with complete input schema. Mentioning 'weights' partially compensates for missing output schema, but lacks safety disclosures (read-only confirmation) that would be expected given zero annotation coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with complete parameter descriptions. Description reinforces semantics by mentioning 'subnet' and 'P-Chain height' but doesn't add additional constraints, format details, or usage examples beyond what the schema already provides. Baseline 3 appropriate for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Get' with specific resource 'validators and their weights' and context 'at a given P-Chain height'. Implicitly distinguishes from 'current' and 'pending' sibling validators tools via the height-specific phrasing, though explicit differentiation is absent.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context via 'at a given P-Chain height', suggesting historical or specific-block queries. However, lacks explicit when-to-use guidance regarding siblings like platform_get_current_validators or platform_get_pending_validators, and doesn't clarify the 'proposed' height option's purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
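The backing call is presumably `platform.getValidatorsAt`. A sketch of the request body showing the two height forms the schema documents (a block height or the literal string "proposed"); the helper is illustrative:

```python
import json

def validators_at_payload(height, subnet_id=None):
    """JSON-RPC body for platform.getValidatorsAt.

    height is a P-Chain block height (int) or the literal string
    'proposed'; omitting subnetID targets the Primary Network.
    """
    params = {"height": height}
    if subnet_id is not None:
        params["subnetID"] = subnet_id
    return {"jsonrpc": "2.0", "id": 1,
            "method": "platform.getValidatorsAt", "params": params}

print(json.dumps(validators_at_payload(1000000)))
print(json.dumps(validators_at_payload("proposed")))
```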
