GoTRON MCP Server
Server Details
TRON blockchain MCP server powered by the GoTRON SDK — query accounts, balances, TRC20 tokens, blocks, transactions, and smart contracts; build and sign transactions for transfers, staking, voting, and contract calls. Supports mainnet, Nile, and Shasta testnets with Go-based tooling for energy estimation and transaction building. Available via stdio or hosted Streamable HTTP.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
43 tools

analyze_account (Grade B, Destructive)
Comprehensive account overview: balance, resources, staking, voting, permissions, and delegations in a single call
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | TRON base58 address (starts with T) | |
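A minimal sketch of the request an MCP client would send for this tool, assuming the standard JSON-RPC `tools/call` shape used by Streamable HTTP transports; the address is a placeholder, not a real account.

```python
import json

# MCP tools/call request for analyze_account. The address below is a
# placeholder that only mimics the documented shape (starts with T, 34
# base58 characters).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_account",
        "arguments": {"address": "T" + "9" * 33},
    },
}
print(json.dumps(request, indent=2))
```

The server would respond with the combined balance, resource, staking, voting, permission, and delegation data described above.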
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description presents the tool as a read-only 'overview' operation, which directly contradicts the annotations declaring readOnlyHint=false and destructiveHint=true. It fails to explain why an account analysis would be destructive or state-changing, creating dangerous ambiguity about the tool's safety profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the value proposition ('Comprehensive account overview') and immediately details the specific data points without filler. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description compensates by enumerating the specific data categories returned. However, it is incomplete regarding the contradictory behavioral annotations (destructive vs. overview) and omits any mention of TRON-specific context that might explain the destructive hint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With schema description coverage at 100%, the parameter 'address' is fully documented in the schema itself ('TRON base58 address'). The description adds no parameter-specific guidance, but the baseline score of 3 is appropriate since the schema carries the full semantic burden adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides a 'Comprehensive account overview' and explicitly lists the data categories returned (balance, resources, staking, voting, permissions, delegations). It effectively distinguishes itself from siblings like get_account or get_account_resources by emphasizing the 'single call' comprehensiveness versus specialized getters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'in a single call' implies efficiency benefits over making multiple sibling calls, providing implied usage context. However, it lacks explicit guidance on when NOT to use this (e.g., if only one data point is needed) or direct comparisons to the lighter-weight alternatives in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
analyze_transfer_cost (Grade C, Destructive)
Estimate the cost of a TRX or TRC20 transfer: energy, bandwidth, and TRX burn options
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Recipient TRON address (base58) | |
| from | Yes | Sender TRON address (base58) | |
| amount | Yes | Amount to transfer (e.g., '100') | |
| contract_address | No | TRC20 contract address (omit for TRX transfer) | |
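The `contract_address` parameter toggles the tool between the two transfer kinds: omit it for native TRX, supply it for a TRC20 token. A sketch of both argument sets, using placeholder addresses:

```python
# Arguments for a native TRX cost estimate: no contract_address.
trx_args = {
    "from": "T" + "A" * 33,   # placeholder sender
    "to": "T" + "B" * 33,     # placeholder recipient
    "amount": "100",
}

# Arguments for a TRC20 cost estimate: same fields plus the token contract.
trc20_args = {**trx_args, "contract_address": "T" + "C" * 33}
```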
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description contradicts the annotations by implying a safe, read-only estimation ('Estimate the cost') while annotations declare destructiveHint: true and readOnlyHint: false. The description fails to explain what gets destroyed or modified during estimation, why the tool is non-idempotent, or what the openWorldHint implies for the analysis scope.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the action ('Estimate') and immediately specifies the resource and scope. Every word serves a purpose with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the destructive annotation (which creates a safety ambiguity), the lack of output schema, and the complexity of blockchain cost estimation, the description is incomplete. It should clarify the return structure (e.g., whether it returns specific numeric values or options) and address the discrepancy with the destructive safety profile indicated in annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is met. The description mentions 'TRX or TRC20 transfer' which aligns with the optional contract_address parameter semantics, but this merely echoes the schema description without adding syntax details, validation rules, or format examples beyond what the structured schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool estimates costs for TRX/TRC20 transfers and identifies the specific cost components (energy, bandwidth, TRX burn). It implicitly distinguishes from sibling transfer tools (transfer_trx, transfer_trc20) by using 'Estimate' rather than an execution verb, though it could explicitly clarify this distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks explicit guidance on when to use this tool versus sibling estimation tools like estimate_energy or estimate_trc20_energy. It does not state prerequisites (e.g., valid addresses) or clarify that this tool analyzes/plans rather than executes transfers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
decode_abi_output (Grade A, Destructive)
Decode ABI-encoded output hex from a contract call. Handles return values, revert reasons (Error(string)), and panic codes (Panic(uint256)).
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | Hex-encoded output bytes to decode (0x prefix optional) | |
| method | Yes | Method signature (e.g., 'balanceOf(address)') | |
| contract_address | Yes | Contract address (base58, starts with T; needed to fetch ABI for decoding) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes capability to handle revert reasons and panic codes (useful behavioral detail), but critically fails to explain why a decoder has destructiveHint=true, readOnlyHint=false, and idempotentHint=false. These annotations imply side effects or state changes that are highly unexpected for a decoding utility and require explanation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no redundancy. First sentence establishes core purpose; second sentence adds specific value (error handling). Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Mentions output varieties (returns, errors, panics) which partially compensates for missing output schema, but incomplete regarding destructive/openWorld implications. Given the counter-intuitive annotations, the description should clarify the operational impact.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with clear descriptions (e.g., method signature example, contract address format). The description mentions 'hex' and 'contract call' which loosely map to parameters but do not augment the well-documented schema with additional semantic constraints or format details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action 'Decode' on specific resource 'ABI-encoded output hex'. Differentiates from sibling get_contract_abi (which retrieves the ABI) by specifying this decodes actual call results including error cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies post-call usage for decoding hex outputs, but lacks explicit guidance on when to use vs. alternatives (e.g., 'Use after trigger_contract to decode returned hex') or prerequisites. No mention of whether this is preferential over client-side decoding.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delegate_resource (Grade A, Destructive)
Delegate energy or bandwidth to another address (Stake 2.0). Returns unsigned transaction hex for signing.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Recipient address (base58, starts with T) | |
| from | Yes | Delegator address (base58, starts with T) | |
| memo | No | Optional memo to attach to the transaction | |
| amount | Yes | Amount in TRX to delegate (e.g., '100.5') | |
| resource | Yes | Resource type | |
| lock_period | No | Lock period in blocks (~3 seconds each). If set, delegation cannot be undone until the lock expires. | |
| permission_id | No | Permission ID for multi-sig transactions | |
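Since `lock_period` is counted in blocks of roughly 3 seconds each (per the schema), a caller wanting a wall-clock lock duration has to convert. A small sketch of that arithmetic:

```python
BLOCK_SECONDS = 3  # approximate TRON block interval, per the schema note

def lock_period_blocks(days: float) -> int:
    """Convert a desired lock duration in days to a lock_period block count."""
    return int(days * 24 * 3600 / BLOCK_SECONDS)

print(lock_period_blocks(3))  # 3 days -> 86400 blocks
```

Remember that once `lock_period` is set, the delegation cannot be undone until the lock expires, and the tool only returns unsigned transaction hex that still needs signing and broadcasting.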
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructive/write behavior which aligns with 'Delegate'. Description adds crucial behavioral context not in annotations: it returns 'unsigned transaction hex for signing' rather than executing immediately, and specifies 'Stake 2.0' protocol version. Could improve by mentioning that delegation locks resources or requires existing frozen balance.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. Front-loaded with action and resources. First sentence establishes operation and protocol context; second sentence clarifies critical return behavior. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by documenting the return type (unsigned hex). Mentions Stake 2.0 for protocol context. Given the complex blockchain mutation (destructive, lock periods, multi-sig support), would benefit from mentioning prerequisites (frozen TRX required) or the separate signing/broadcast flow for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing detailed descriptions for all 7 parameters including lock period behavior and address formats. Description mentions 'energy or bandwidth' which maps to the resource enum, but adds minimal semantic detail beyond the comprehensive schema. Baseline 3 appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Delegate' with clear resources 'energy or bandwidth' and target 'another address'. Distinguishes from siblings by specifying '(Stake 2.0)' protocol and unique return type 'unsigned transaction hex', clearly differentiating it from direct execution tools like transfer_trx or query tools like get_delegated_resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context via 'Stake 2.0' and clarifies the return type requires subsequent signing. However, lacks explicit guidance on prerequisites (requires frozen TRX balance first), does not reference inverse operation 'undelegate_resource', and omits when to prefer this over 'freeze_balance' (Stake 1.0).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
estimate_energy (Grade A, Destructive)
Estimate energy cost for a smart contract call. Requires a full node with vm.supportConstant=true and vm.estimateEnergy=true; falls back to secondary node if primary does not support it.
| Name | Required | Description | Default |
|---|---|---|---|
| from | Yes | Caller address (base58, starts with T) | |
| method | Yes | Contract method signature (e.g., 'transfer(address,uint256)') | |
| params | Yes | Method parameters as JSON array. Plain values: ["TJD...", "1000"] (types inferred from method signature). Typed objects also accepted: [{"address": "TJD..."}, {"uint256": "1000"}] | |
| contract_address | Yes | Smart contract address (base58, starts with T) | |
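The `params` field accepts two encodings, per the schema: plain values (types inferred from the method signature) or typed objects. A sketch showing both side by side, with placeholder addresses:

```python
# Plain form: types are inferred from the method signature.
plain = ["T" + "D" * 33, "1000"]

# Typed form: each element names its ABI type explicitly.
typed = [{"address": "T" + "D" * 33}, {"uint256": "1000"}]

arguments = {
    "from": "T" + "E" * 33,              # placeholder caller
    "contract_address": "T" + "F" * 33,  # placeholder contract
    "method": "transfer(address,uint256)",
    "params": plain,                     # either form is accepted
}
```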
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable operational context beyond annotations: specifies node requirements ('vm.supportConstant=true and vm.estimateEnergy=true') and failover logic ('falls back to secondary node'). Does not contradict annotations (destructiveHint=true is not denied by the description, though the description could clarify why an 'estimate' is destructive).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two dense sentences with zero waste. Front-loaded with purpose ('Estimate energy cost...'), followed by operational constraints and fallback behavior. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 4-parameter tool with complete schema coverage, but lacks description of return values (no output schema exists to document this). The node configuration requirements partially compensate by covering critical operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all 4 parameters including formats (base58, method signatures, JSON arrays). The description adds no parameter-specific guidance, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Estimate') and resource ('energy cost') for 'smart contract call' target. However, it does not explicitly differentiate from sibling tool 'estimate_trc20_energy', leaving ambiguity when to use the general vs. token-specific estimator.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisites ('Requires a full node with vm.supportConstant=true...'), indicating necessary node configuration. Lacks explicit when-to-use guidance versus siblings like 'trigger_constant_contract' or 'estimate_trc20_energy', and no warnings about when NOT to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
estimate_trc20_energy (Grade B, Destructive)
Estimate energy cost for a TRC20 transfer without creating a transaction. Dry-runs the transfer to check energy requirements.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Recipient address (base58, starts with T) | |
| from | Yes | Sender address (base58, starts with T) | |
| amount | Yes | Amount in human-readable units (e.g., '100.5' for 100.5 USDT) | |
| contract_address | Yes | TRC20 contract address (base58, starts with T) | |
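The `amount` parameter is human-readable, while TRC20 balances live on-chain as integers scaled by the token's `decimals` (6 for USDT). The schema implies the server performs this conversion itself; the sketch below just shows the arithmetic involved.

```python
from decimal import Decimal

def to_token_units(amount: str, decimals: int) -> int:
    """Scale a human-readable amount to on-chain integer units."""
    return int(Decimal(amount) * (10 ** decimals))

print(to_token_units("100.5", 6))  # '100.5' USDT -> 100500000 on-chain units
```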
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description claims this is a 'dry-run' that operates 'without creating a transaction', implying a safe, read-only simulation. This directly contradicts the annotations which specify 'destructiveHint: true' and 'readOnlyHint: false', suggesting state-changing or destructive behavior. This inconsistency makes the tool's actual behavior opaque and potentially dangerous to invoke.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two-sentence structure is appropriately sized and front-loaded with the core purpose. The second sentence ('Dry-runs the transfer...') is slightly redundant with the first but reinforces the non-transactional nature. No unnecessary fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% input schema coverage, the description adequately covers inputs. However, lacking an output schema, it omits what the tool returns (e.g., energy units format, numeric type). The annotation contradiction also creates a completeness gap regarding behavioral safety.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameters are fully documented in the schema (address formats, amount units). The description adds no additional semantic context (e.g., validation rules, format examples, or constraints), meeting the baseline expectation when the schema carries the descriptive burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (estimate/dry-run energy cost), the target resource (TRC20 transfer), and scope (without creating a transaction). It effectively distinguishes from siblings like 'transfer_trc20' (which creates transactions) and 'estimate_energy' (likely for TRX native transfers) via explicit TRC20 scoping.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'without creating a transaction' provides implicit guidance on when to use this versus 'transfer_trc20'. However, it does not explicitly contrast with 'estimate_energy' (the generic energy estimator) or clarify prerequisites like address validation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
freeze_balance (Grade A, Destructive)
Stake TRX for energy or bandwidth (Stake 2.0). Returns unsigned transaction hex for signing.
| Name | Required | Description | Default |
|---|---|---|---|
| from | Yes | Account address (base58, starts with T) | |
| memo | No | Optional memo to attach to the transaction | |
| amount | Yes | Amount in TRX to stake (e.g., '100.5') | |
| resource | Yes | Resource type | |
| permission_id | No | Permission ID for multi-sig transactions | |
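A sketch of the arguments for a staking call. The schema only says `resource` is a "Resource type"; `"ENERGY"` and `"BANDWIDTH"` are assumed enum values based on the tool description, and the address is a placeholder.

```python
# freeze_balance arguments (Stake 2.0). Returns unsigned transaction hex
# that must still be signed and broadcast separately.
arguments = {
    "from": "T" + "G" * 33,  # placeholder staker address
    "amount": "100.5",       # TRX, human-readable
    "resource": "ENERGY",    # assumed enum value; "BANDWIDTH" presumably also valid
}
```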
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true (state-changing), but the description adds crucial behavioral context: it returns an 'unsigned transaction hex for signing' rather than broadcasting directly, and clarifies this implements the 'Stake 2.0' protocol. It could improve by mentioning funds become locked/illiquid.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero redundancy. First sentence establishes the operation and resource type; second sentence clarifies the return format. No filler text and appropriately front-loaded with the action verb.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Good coverage given no output schema exists: the description explicitly states the return format (unsigned hex). Destructive nature is flagged in annotations. Minor gap: could mention the relationship to unfreeze_balance for unstaking workflow, but sufficient for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description maps the general concept ('Stake TRX for energy or bandwidth') to the specific enum values in the schema, but does not add syntax details, validation rules, or parameter relationships beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the specific action 'Stake TRX', the target resources 'energy or bandwidth', and explicitly identifies this as 'Stake 2.0' to distinguish from legacy staking mechanisms. It effectively clarifies that the tool name 'freeze_balance' refers to the modern staking protocol.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it explains what the tool does (staking), it provides no guidance on when to use this versus sibling tools like delegate_resource (staking for others) or unfreeze_balance (unstaking), nor does it mention that staked funds become locked until withdrawn via the unfreeze process.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account (Grade C, Destructive)
Get TRON account balance and details including TRX balance, bandwidth, energy, frozen resources, and account type
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | TRON base58 address (starts with T) | |
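Before calling any of these address-taking tools, a client can cheaply reject malformed input. TRON base58 addresses start with `T` and are 34 characters from the base58 alphabet (which excludes `0`, `O`, `I`, and `l`). The sketch below checks structure only; it does not verify the base58check checksum.

```python
BASE58 = set("123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz")

def looks_like_tron_address(addr: str) -> bool:
    """Structural sanity check: prefix, length, and character set only."""
    return addr.startswith("T") and len(addr) == 34 and set(addr) <= BASE58

print(looks_like_tron_address("T" + "9" * 33))  # True
print(looks_like_tron_address("0x1234"))        # False (Ethereum-style hex)
```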
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description contradicts annotations: uses 'Get' implying a read-only operation, but annotations specify readOnlyHint=false and destructiveHint=true. If this operation actually modifies state (e.g., activating an inactive account), the description fails to disclose this critical behavioral trait. No disclosure of side effects, authentication requirements, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently structures the information with the action first ('Get'), followed by the resource, then specific details included parenthetically. No wasted words, front-loaded with the essential verb.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description partially compensates by listing returned fields (TRX balance, bandwidth, energy, etc.), which is helpful. However, it omits critical behavioral context regarding the destructive nature flagged in annotations, and lacks pagination or error handling details despite openWorldHint suggesting external address lookup.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'address' parameter, the schema adequately documents inputs. The description adds no parameter-specific semantics, but baseline 3 is appropriate given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' and resource 'TRON account', and lists specific return details (TRX balance, bandwidth, energy, frozen resources, account type). However, it fails to differentiate from siblings like 'get_account_resources' or 'analyze_account', leaving ambiguity about which tool to use for detailed resource analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance provided on when to use this tool versus the numerous siblings (get_account_resources, analyze_account, etc.). No mention of prerequisites or that the address must be activated/valid, despite openWorldHint=true suggesting it may encounter unknown addresses.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account_permissions (Grade C, Destructive)
Get the permission structure of a TRON account including owner, active, and witness permissions with multi-sig keys and thresholds
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | TRON base58 address (starts with T) | |
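In TRON's permission model, each permission carries a threshold and a set of keys with weights; a multi-sig transaction is authorized once the signers' combined weight meets the threshold. A sketch of how an agent might interpret the returned structure — the field names (`threshold`, `keys`, `address`, `weight`) are assumptions about this tool's exact response shape:

```python
def is_satisfied(permission: dict, signers: set) -> bool:
    """True if the signing keys' combined weight meets the permission threshold."""
    total = sum(k["weight"] for k in permission["keys"] if k["address"] in signers)
    return total >= permission["threshold"]

# Hypothetical 2-of-2 active permission with placeholder addresses.
active = {"threshold": 2, "keys": [
    {"address": "T" + "A" * 33, "weight": 1},
    {"address": "T" + "B" * 33, "weight": 1},
]}
print(is_satisfied(active, {"T" + "A" * 33}))                  # False: weight 1 < threshold 2
print(is_satisfied(active, {"T" + "A" * 33, "T" + "B" * 33}))  # True
```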
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Critical contradiction: the description uses 'Get' (implying read-only retrieval), but annotations specify readOnlyHint=false and destructiveHint=true. The description fails to explain what gets destroyed, why a permission query is destructive, or what side effects occur.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence efficiently enumerates the specific permission components returned (owner, active, witness, multi-sig keys, thresholds) without filler words. Information is front-loaded and every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description adequately outlines the returned permission structure (compensating for the missing output schema), it completely omits behavioral context required for a destructive tool—no explanation of state changes, irreversible effects, or why 'getting' permissions modifies state.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the 'address' parameter fully documented in the schema. The description doesn't add syntax details or examples beyond the schema's 'TRON base58 address' specification, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves the permission structure of a TRON account, specifying distinct permission types (owner, active, witness, multi-sig keys, thresholds) that differentiate it from the generic 'get_account' sibling. However, the 'Get' verb creates friction with the destructive annotations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus 'get_account' or other account-inspection tools. Missing prerequisites, exclusion criteria, or decision logic for selecting this specific permission-focused endpoint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account_resources (Grade C, Destructive)
Get energy and bandwidth usage and limits for a TRON account
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | TRON base58 address (starts with T) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description uses 'Get' implying a safe read operation, which directly contradicts annotations declaring 'destructiveHint: true' and 'readOnlyHint: false'. It fails to disclose what gets destroyed, why it is non-idempotent, or the implications of 'openWorldHint'. This is a serious inconsistency that could lead to unsafe invocations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently front-loaded with resource types. No filler words. However, given the behavioral complexity indicated by annotations, the extreme brevity becomes a liability rather than a virtue, warranting a slight deduction.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fails to address the destructive nature indicated by annotations, provide output format details, or explain what 'usage and limits' entails. Simple parameter schema does not compensate for the behavioral opacity caused by the annotation/description mismatch.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with the address parameter fully documented ('TRON base58 address'). The description mentions no parameters, but with complete schema coverage, the baseline score of 3 is appropriate as the description adds no semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific about retrieving 'energy and bandwidth usage and limits' for a TRON account, with clear resource targets. However, it does not explicitly differentiate from sibling 'get_account' (general info) or 'get_delegated_resources' (delegated specifically), and the phrasing implies a read operation that conflicts with the destructive annotations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus 'get_account' (which likely returns general account data) or other resource-related tools. No prerequisites or conditions mentioned despite the destructive annotations suggesting side effects.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_bandwidth_prices (Grade C, Destructive)
Get current and historical bandwidth prices on the TRON network
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description completely fails to disclose behavioral traits indicated by annotations: destructiveHint=true (nothing mentioned about what gets destroyed), readOnlyHint=false (no explanation of what state changes occur), and openWorldHint=true (no mention of external effects). The 'Get' framing contradicts the non-read-only, destructive nature of the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence format is appropriately concise and front-loaded. Minor deduction as 'on the TRON network' may be redundant given the server context, but overall efficient structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the heavy annotation burden (destructive, openWorld, non-idempotent) and lack of output schema, the description is dangerously incomplete. It offers no explanation for why getting prices would be destructive, what gets modified, or what return values to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, which establishes a baseline score of 4 per the evaluation rules. The description appropriately omits parameter discussion since none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves 'current and historical bandwidth prices' on the TRON network, identifying the resource and scope. However, the verb 'Get' strongly implies a read-only operation, which contradicts the provided annotations (readOnlyHint=false, destructiveHint=true), creating ambiguity about the tool's actual purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus siblings like 'get_energy_prices' or 'get_account_resources'. No mention of prerequisites, costs, or conditions despite annotations suggesting destructive, open-world behavior that would normally require warnings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_block (Grade A, Destructive)
Get a TRON block by number or latest. Use include_transactions to get transaction details with decoded contract data (sender, receiver, amounts) and pagination.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max transactions to return when include_transactions is true (default: 50) | |
| offset | No | Skip first N transactions (default: 0, for pagination) | |
| block_number | No | Block number (omit for latest) | |
| transaction_type | No | Filter transactions by type (e.g., 'TransferContract', 'TriggerSmartContract') | |
| include_transactions | No | Include decoded transaction details with contract data — sender, receiver, amounts (default: false) |
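A sketch of the arguments an agent might send for a paginated, filtered block query (values are illustrative; parameter names come from the table above):

```python
# Illustrative get_block arguments combining the pagination and filter
# parameters from the table above. Values are examples only.
arguments = {
    "block_number": 60000000,             # omit to fetch the latest block
    "include_transactions": True,         # decode sender/receiver/amounts
    "transaction_type": "TransferContract",
    "limit": 20,                          # page size (default 50)
    "offset": 0,                          # advance by `limit` per page
}

# Next page of the same block's transactions:
next_page = {**arguments, "offset": arguments["offset"] + arguments["limit"]}
```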
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description frames the operation as a simple 'Get' retrieval, implying a safe read operation, but this directly contradicts the annotations which specify readOnlyHint=false and destructiveHint=true without explaining what gets destroyed or why it's not read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with no waste. The first states purpose, the second explains key optional functionality. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers the primary use case for a 5-parameter tool, but fails to explain the destructive/non-read-only nature indicated by annotations or provide return value details (though none are required without output schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds valuable semantic context by specifying what decoded contract data includes (sender, receiver, amounts) and connecting pagination to the include_transactions parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool retrieves TRON blocks and specifies identification methods (by number or latest), distinguishing it from sibling transaction and account tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage guidance by explaining the include_transactions flag and pagination, but lacks explicit comparison to alternatives like get_transaction or when to filter by transaction_type.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_chain_parameters (Grade C, Destructive)
Get TRON network node info and current energy/bandwidth prices
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description contradicts the annotations. The verb 'Get' implies a read-only operation, but annotations specify readOnlyHint: false and destructiveHint: true, indicating this operation is destructive. The description fails to disclose what gets destroyed, why it is destructive, or what side effects occur.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It is appropriately sized and front-loaded with the action verb.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the destructiveHint annotation and lack of output schema, the description should explain what resources are consumed or destroyed, why this 'get' operation is destructive, and what the caller receives. It provides none of this critical behavioral context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. According to scoring rules, 0 parameters establishes a baseline score of 4. The description appropriately requires no parameter explanation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Get' and clearly identifies the resources as 'TRON network node info and current energy/bandwidth prices'. However, it does not distinguish this tool from siblings like get_network, get_energy_prices, or get_bandwidth_prices, which appear to offer overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as get_energy_prices or get_network. There are no stated prerequisites, conditions, or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_contract_abi (Grade A, Destructive)
Get the ABI of a smart contract on TRON. Automatically resolves proxy contracts (ERC-1967). Note: many TRON contracts are deployed without on-chain ABI — if empty, check TronScan for verified source code.
| Name | Required | Description | Default |
|---|---|---|---|
| contract_address | Yes | Smart contract address (base58, starts with T) |
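For orientation, the ERC-1967 proxy resolution mentioned in the description works by reading the implementation address from a fixed storage slot. The constant below is the standard ERC-1967 value, shown here as background, not taken from this server's source:

```python
# ERC-1967 stores a proxy's implementation address at a fixed storage
# slot, defined as keccak256("eip1967.proxy.implementation") - 1.
# A resolver reads this slot and, if it is non-zero, fetches the ABI of
# the implementation contract instead of the proxy's own (often empty) ABI.
ERC1967_IMPL_SLOT = (
    "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc"
)
```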
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description states 'Get the ABI' implying a safe read operation, but annotations declare readOnlyHint=false and destructiveHint=true, indicating destructive mutation. This serious contradiction misrepresents the tool's safety profile and risks incorrect agent invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structured: purpose declaration, behavioral feature (proxy resolution), and practical limitation note. Every sentence adds distinct value with no redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers proxy contract resolution behavior and empty result scenarios well. As no output schema exists, could benefit from describing the ABI return format, but adequately covers behavioral complexities for this single-parameter tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for the single parameter ('Smart contract address (base58, starts with T)'). Description does not need to add redundant parameter semantics, meets baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Get' with resource 'ABI of a smart contract on TRON'. Distinguishes from siblings like list_contract_methods or trigger_contract by specifying full ABI retrieval and unique proxy resolution capability (ERC-1967).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides practical guidance on handling empty results ('if empty, check TronScan') which indicates a limitation and fallback workflow. Lacks explicit comparison to siblings (e.g., when to use vs list_contract_methods).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_contract_events (Grade A, Destructive)
Get decoded events emitted by a smart contract via TronGrid REST API. Returns a compact summary per event. Use a small limit and paginate with fingerprint to control response size
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (1-200, default 10). Start small to avoid large responses; use fingerprint to paginate for more | |
| event_name | No | Filter by specific event name (e.g. Transfer, Approval) | |
| fingerprint | No | Pagination cursor from previous response meta.fingerprint — pass to fetch the next page | |
| max_timestamp | No | Maximum block timestamp in milliseconds | |
| min_timestamp | No | Minimum block timestamp in milliseconds | |
| only_confirmed | No | Only return confirmed transactions | |
| contract_address | Yes | TRON smart contract base58 address (starts with T) |
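The limit/fingerprint pagination the description recommends can be sketched as a loop. Here `call_tool` is a hypothetical MCP client helper, and the `data`/`meta.fingerprint` response shape follows the TronGrid convention referenced in the parameter docs:

```python
# Cursor pagination for get_contract_events: feed each response's
# meta.fingerprint back as the `fingerprint` argument until it is absent.
# `call_tool` is a hypothetical MCP client helper, not part of the server.
def fetch_events(call_tool, contract_address, event_name=None, limit=10):
    events, fingerprint = [], None
    while True:
        args = {"contract_address": contract_address, "limit": limit}
        if event_name:
            args["event_name"] = event_name
        if fingerprint:
            args["fingerprint"] = fingerprint
        resp = call_tool("get_contract_events", args)
        events.extend(resp.get("data", []))
        fingerprint = resp.get("meta", {}).get("fingerprint")
        if not fingerprint:
            break
    return events
```

Keeping `limit` small, as the description advises, bounds each response; the loop then stitches pages together.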
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description presents this as a read operation ('Get decoded events'), but the annotations declare destructiveHint=true and readOnlyHint=false, suggesting state mutation. This is a serious contradiction—the description implies safe data retrieval while annotations warn of destructive behavior. The description fails to explain why a 'get' operation would be destructive or non-idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences with zero redundancy: purpose declaration, return value description, and operational guidance. Information is front-loaded with the core verb and resource, making it immediately scannable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a query tool with 7 parameters and no output schema, the description compensates well by mentioning the external API source (TronGrid), hinting at output format ('compact summary'), and explaining pagination strategy. It appropriately omits parameter enumeration since the schema has full coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), the description adds valuable usage patterns by connecting the limit and fingerprint parameters ('Use a small limit and paginate with fingerprint'), teaching the agent how to use these together for pagination. It also adds context that the response is a 'compact summary'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get decoded events'), the target resource ('emitted by a smart contract'), and the mechanism ('via TronGrid REST API'). It effectively distinguishes from siblings like get_contract_abi (interface inspection) and trigger_contract (execution) by focusing on historical event logs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides useful implementation guidance for pagination ('Use a small limit and paginate with fingerprint'), implying when to use these parameters. However, lacks explicit comparison to sibling tools—no guidance on when to query events versus using get_transaction_history or trigger_constant_contract for state checks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_delegatable_amount (Grade A, Destructive)
Get the maximum amount of energy or bandwidth an account can still delegate
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | TRON base58 address (starts with T) | |
| resource | Yes | Resource type |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, suggesting state changes or resource consumption, yet the description uses the safe-sounding verb 'Get' without explaining why retrieval is destructive or what side effects occur. This is a significant safety gap, given that the 'get_' naming convention implies read-only safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste, front-loaded with the core action and resource type. Every word earns its place in defining scope and purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Core purpose is covered and schema is complete, but lacks explanation of the destructive behavior flagged in annotations and provides no indication of return value format (amount in what unit? raw or formatted?) despite having no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('TRON base58 address', 'Resource type'), the description appropriately reinforces the parameter mapping ('account'→address, 'energy or bandwidth'→resource) without needing to duplicate detailed syntax already provided in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' with clear resource scope ('maximum amount of energy or bandwidth') and context ('an account can still delegate'), distinguishing it from sibling 'delegate_resource' which performs the action rather than querying limits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied (checking delegation limits), but lacks explicit guidance on when to use versus alternatives like 'get_account_resources' or prerequisite steps before calling 'delegate_resource'. No 'when-not-to-use' guidance provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_delegated_resources (Grade C, Destructive)
Get resources delegated to and from an account (Stake 2.0 delegation info)
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | TRON base58 address (starts with T) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description claims this is a 'Get' operation (implying data retrieval), but the annotations indicate destructiveHint: true and readOnlyHint: false, suggesting this performs a destructive write operation. This is a serious contradiction. The description fails to explain what gets destroyed, why a 'get' is destructive, or the implications of openWorldHint/idempotentHint being false.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence with parenthetical context, avoiding verbosity. However, given the complexity of Stake 2.0 delegation and the contradictory annotations, this brevity is insufficient—the description needs additional sentences to clarify behavioral traits.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and contradictory annotations indicating destructive behavior, the description is dangerously incomplete. It fails to explain return values or side effects, and does not reconcile why a 'get' operation is marked destructive. The 'Stake 2.0' mention provides some domain context but is inadequate for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the 'address' parameter ('TRON base58 address'), so the description is not required to add parameter semantics. The description adds no additional parameter context, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it retrieves 'resources delegated to and from an account' and specifies 'Stake 2.0 delegation info', which distinguishes it from generic account resource queries like get_account_resources. However, the term 'Get' is ambiguous given the destructive annotations—it could mean 'retrieve data' or 'claim/withdraw resources'—creating uncertainty about the actual operation type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus delegate_resource, undelegate_resource, or get_delegatable_amount. Given the destructive annotation, it critically fails to warn users about when this should be invoked versus safer query alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_energy_prices (Grade C, Destructive)
Get current and historical energy prices on the TRON network
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description uses the verb 'Get' implying a read-only operation, but annotations declare `readOnlyHint: false` and `destructiveHint: true`, suggesting state modification or costs. This is a direct contradiction, and the description fails to explain why retrieving prices would be destructive or require write permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence efficiently conveys the essential function without redundancy. However, given the destructive annotation and the lack of an output schema, additional context is needed to explain the tool's behavioral characteristics and return data.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description covers the basic resource being accessed, it fails to address the conflicting annotation hints (destructive/read-only status) and provides no indication of what data structure is returned, leaving significant behavioral ambiguity for a tool with surprising safety properties.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters required in the input schema, the baseline applies. The description appropriately does not invent parameter semantics where none exist, correctly implying this is a parameterless query operation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource (energy prices) and scope (TRON network, current/historical) using a specific verb. However, it does not explicitly differentiate from the similar sibling tool `get_bandwidth_prices` or clarify what 'energy' means in this blockchain context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like `get_bandwidth_prices`, nor does it explain when to query current versus historical prices. There are no stated prerequisites or conditions for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_network (Grade C, Destructive)
Get current MCP server connection info: network name, node address, and latest block
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description claims a simple 'Get' operation while annotations declare destructiveHint: true and readOnlyHint: false. This is a critical contradiction—the description fails to disclose what gets destroyed or why retrieving network info requires write permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with action verb front-loaded. No wasted words, appropriately sized for the apparent simplicity (though annotations suggest hidden complexity).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given destructive and non-readonly annotations, the description completely omits critical safety context (what state changes, what gets destroyed, side effects). It only documents the apparent return value intent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters (empty object), so baseline 4 applies per rubric. Description does not need to compensate for missing parameter docs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a clear retrieval operation with specific fields (network name, node address, latest block), but the contradiction with annotations (destructiveHint: true) creates ambiguity about whether this is actually a safe read operation or something more dangerous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use versus alternatives like get_block or get_chain_parameters, nor does it explain why a getter tool has destructive annotations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pending_by_address (Grade C, Destructive)
Get pending transactions for a specific address from the mempool
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | TRON address (base58, starts with T) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Fails to explain the significant behavioral traits indicated by annotations: specifically, it presents the tool as a standard 'Get' operation while annotations declare 'destructiveHint: true' and 'readOnlyHint: false'. This creates a dangerous contradiction by implying a safe read when the tool may have destructive side effects. Adds no context about what gets destroyed or why.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is appropriately front-loaded. No redundant words. Efficiently conveys the core operation without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks explanation of return values (no output schema exists to provide this). Fails to address the confusing destructive/non-readonly annotations which are critical for safe invocation. Does not define 'mempool' for agents unfamiliar with blockchain terminology.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the schema fully documents the 'address' parameter format ('TRON address (base58, starts with T)'). The description aligns with this by mentioning 'specific address' but does not add supplementary semantic information (e.g., whether the address must be the sender, recipient, or either). Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States a specific action ('Get') and resource ('pending transactions') with scope ('for a specific address from the mempool'). Distinguishes from sibling 'get_pending_transactions' by specifying address-based filtering. However, presents the operation as a simple retrieval without hinting at the unusual destructive annotation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus the sibling 'get_pending_transactions' or other query tools like 'get_account'. No prerequisites or conditions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
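The "base58, starts with T" constraint that recurs in these schemas is checkable client-side before any tool call. A minimal sketch of TRON's base58check validation (the standard algorithm, written from scratch here rather than taken from the GoTRON SDK):

```python
import hashlib

# Base58 alphabet: excludes the ambiguous characters 0, O, I, and l.
BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"


def b58decode(s: str) -> bytes:
    num = 0
    for ch in s:
        num = num * 58 + BASE58.index(ch)  # raises ValueError on bad chars
    # Leading '1' characters encode leading zero bytes.
    pad = len(s) - len(s.lstrip("1"))
    body = num.to_bytes((num.bit_length() + 7) // 8, "big")
    return b"\x00" * pad + body


def is_valid_tron_address(addr: str) -> bool:
    """Check a TRON mainnet address: 0x41 prefix + 20-byte key hash
    + 4-byte double-SHA256 checksum, base58-encoded."""
    if not addr.startswith("T"):
        return False
    try:
        raw = b58decode(addr)
    except ValueError:
        return False
    if len(raw) != 25:
        return False
    payload, checksum = raw[:21], raw[21:]
    if payload[0] != 0x41:  # TRON mainnet address prefix
        return False
    digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
    return digest[:4] == checksum
```

Running this against the USDT contract address quoted later in the listing (TR7NHqjeKQxGTCi8q8ZY4pL8otSzgjLj6t) should pass, while any single-character typo fails the checksum.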
get_pending_transactions (Grade B, Destructive)
List pending transaction IDs and pool size from the mempool
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max transaction IDs to return (default: 10) | |
| offset | No | Skip first N transaction IDs (default: 0, for pagination) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description uses 'List' implying a read-only operation, but annotations declare `readOnlyHint: false` and `destructiveHint: true`. This is a flagrant contradiction—the description gives no indication of destructive behavior or state mutation that the annotations suggest.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, ten words. Verb-first structure immediately communicates action and resource. No redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Specifies return values (IDs and pool size) compensating for missing output schema. However, fails to address the contradictory annotations regarding safety/destructiveness, leaving operational context ambiguous.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (limit and offset fully documented). The description adds no parameter details, but with complete schema coverage, no additional description is necessary. Baseline 3 appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('List') and resource ('pending transaction IDs and pool size') with source ('mempool'). Distinguishes implicitly from sibling `get_pending_by_address` by specifying 'from the mempool' (global scope), but does not explicitly differentiate from `is_transaction_pending`.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternatives named. Usage must be inferred from the name and 'from the mempool' phrasing, suggesting this is for unfiltered global queries versus filtered alternatives like `get_pending_by_address`.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_proposal (Grade C, Destructive)
Get full details of a governance proposal by ID, including the complete list of approval addresses.
| Name | Required | Description | Default |
|---|---|---|---|
| proposal_id | Yes | Proposal ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description claims a read operation ('Get') but annotations indicate destructiveHint: true and readOnlyHint: false. This is a serious contradiction—the description fails to disclose what state is being destroyed or why a 'get' operation is non-idempotent and destructive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single concise sentence that efficiently communicates the action, resource, filter criteria, and specific included data ('complete list of approval addresses'). No redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool marked destructive and non-idempotent, the description is dangerously incomplete. It fails to explain what side effects occur, what gets destroyed, or why this 'get' operation modifies state. The lack of output schema also isn't addressed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage and only one parameter (proposal_id), the schema adequately documents the input. The description references 'by ID' aligning with the parameter but adds no additional semantic context (e.g., ID format, valid ranges, where to find IDs).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'full details of a governance proposal' and distinguishes from list_proposals via 'by ID'. However, given the destructive annotations, the stated purpose may be misleading about the actual behavior.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'by ID' implies usage when a specific proposal identifier is known, distinguishing it from list_proposals. However, it lacks explicit guidance on when to use this versus list_proposals or how to obtain proposal IDs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_token_price (Grade B, Destructive)
Get the current USD price of TRX or a TRC20 token. Uses CoinGecko with caching.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token to price: 'TRX' or a TRC20 contract address (e.g., TR7NHqjeKQxGTCi8q8ZY4pL8otSzgjLj6t for USDT) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states 'Get' implying a read-only operation, but annotations declare destructiveHint: true and readOnlyHint: false, which is a serious contradiction. While it adds context about CoinGecko and caching, the safety profile is completely misaligned with the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficiently structured sentences with zero waste. The first states purpose, the second provides implementation context (CoinGecko/caching). Information is front-loaded appropriately for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with complete schema coverage, the description is adequate, though the annotation contradiction creates a significant gap regarding behavioral safety. No output schema exists, but the return value (price) is implicit in the description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents the 'token' parameter including valid formats (TRX string or contract address) and examples. The description adds no parameter-specific semantics, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('USD price of TRX or a TRC20 token') that clearly distinguishes this from sibling blockchain operation tools like transfer_trx or get_account. It precisely defines the scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying TRX/TRC20 tokens, but provides no explicit 'when to use' guidance, prerequisites (like network connectivity for CoinGecko), or named alternatives. Usage is implied by being the only price tool among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
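The "Uses CoinGecko with caching" clause is the kind of behavioral disclosure the rubric rewards, since it tells an agent that prices may be slightly stale and that repeated calls are cheap. A generic sketch of such a cache (the TTL value and structure are assumptions, not the server's actual implementation):

```python
import time


class TTLCache:
    """Time-bounded cache illustrating the kind of price caching the
    description alludes to. The default TTL is an assumption."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._entries: dict = {}

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at >= self.ttl:
            # Expired: drop the entry and force a fresh upstream fetch.
            del self._entries[key]
            return None
        return value

    def put(self, key, value):
        self._entries[key] = (value, time.monotonic())
```

A price tool backed by such a cache stays read-only from the blockchain's perspective, which is another reason the destructive annotation here reads as a misconfiguration.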
get_transaction (Grade C, Destructive)
Get transaction details by transaction ID
| Name | Required | Description | Default |
|---|---|---|---|
| transaction_id | Yes | Transaction hash / txid (64-char hex string) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description implies a read-only operation ('Get'), but annotations declare destructiveHint=true and readOnlyHint=false, indicating a destructive or state-modifying operation. This is a direct contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at six words with zero redundancy. Every word conveys essential information about the operation and target resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the destructive annotation and lack of output schema, the description is insufficient. It fails to explain what details are returned, why the operation is marked destructive, or how it differs from the many sibling transaction tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents the transaction_id format (64-char hex string). The description adds no additional semantic details, syntax, or validation rules beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'transaction details', specifying the lookup mechanism 'by transaction ID'. However, it does not distinguish from siblings like get_transaction_history or is_transaction_pending.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., get_transaction_history for ranges, is_transaction_pending for status checks) or any prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_transaction_history (Grade B, Destructive)
Get transaction history for a TRON address via TronGrid REST API. Returns a compact summary per transaction. Use a small limit and paginate with fingerprint to control response size
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (1-200, default 10). Start small to avoid large responses; use fingerprint to paginate for more | |
| address | Yes | TRON base58 address (starts with T) | |
| only_to | No | Only show incoming transactions | |
| only_from | No | Only show outgoing transactions | |
| fingerprint | No | Pagination cursor from previous response meta.fingerprint — pass to fetch the next page | |
| max_timestamp | No | Maximum block timestamp in milliseconds | |
| min_timestamp | No | Minimum block timestamp in milliseconds | |
| only_confirmed | No | Only return confirmed transactions |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description states 'Get transaction history' implying a safe read operation, but annotations declare 'readOnlyHint: false' and 'destructiveHint: true'. This is a direct contradiction—the description suggests querying while annotations warn of destructive side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences: purpose first, output format second, operational guidance third. No redundant text; every sentence adds distinct information not found in structured fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for an 8-parameter tool with no output schema—mentions data source (TronGrid), response nature (compact summary), and pagination pattern. However, the annotation contradiction creates ambiguity about actual behavior, and without output schema, the 'compact summary' detail is insufficient to predict return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description adds value by explaining the pagination strategy ('Use a small limit... paginate with fingerprint') which clarifies the relationship between 'limit' and 'fingerprint' parameters beyond individual schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Get), resource (transaction history), and scope (TRON address via TronGrid). Mentions 'compact summary' clarifying output format. Does not explicitly differentiate from sibling 'get_transaction' (single tx lookup) or 'get_trc20_transfers' (token-specific history), though 'via TronGrid' provides some context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides operational guidance ('Use a small limit and paginate with fingerprint to control response size') indicating how to handle large result sets. However, lacks explicit when-to-use guidance versus alternatives like 'get_transaction' or 'get_trc20_transfers'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
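The limit/fingerprint interaction the review praises can be sketched as a client-side loop. Here `call_tool` is a placeholder for however your MCP client invokes tools, and the response shape (a `data` list plus `meta.fingerprint` cursor) follows TronGrid's pagination convention as described in the tool's own parameter docs:

```python
def fetch_all_history(call_tool, address: str, page_size: int = 10,
                      max_pages: int = 5) -> list:
    """Paginate get_transaction_history via the meta.fingerprint cursor.

    Keeps page_size small (as the description advises) and stops when
    the server omits the fingerprint, signalling the last page.
    """
    results, fingerprint = [], None
    for _ in range(max_pages):
        args = {"address": address, "limit": page_size}
        if fingerprint:
            args["fingerprint"] = fingerprint
        page = call_tool("get_transaction_history", args)
        results.extend(page.get("data", []))
        fingerprint = page.get("meta", {}).get("fingerprint")
        if not fingerprint:
            break
    return results
```

The `max_pages` cap is a deliberate safety valve: without it, an agent chasing a long history could burn its context window on one address.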
get_trc20_allowance (Grade B, Destructive)
Query how many tokens a spender is approved to transfer on behalf of the owner. Flags unlimited approvals as high risk.
| Name | Required | Description | Default |
|---|---|---|---|
| owner | Yes | Token owner address (base58) | |
| spender | Yes | Approved spender address (base58) | |
| contract_address | Yes | TRC20 token contract address |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states 'Query,' implying a read-only operation, but annotations specify readOnlyHint: false and destructiveHint: true, indicating state mutation. This direct contradiction creates dangerous ambiguity about whether the tool modifies blockchain state or internal logs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences efficiently deliver the core purpose and a critical behavioral feature (risk flagging). Every word earns its place with no redundancy or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% input schema coverage, the description adequately covers inputs, but fails to reconcile the destructive annotation with the query description. It mentions risk flagging without explaining the threshold for 'high risk' or the return format for such flags.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured data already fully documents the owner, spender, and contract_address parameters. The description adds minimal semantic value beyond the schema, though it contextualizes the owner-spender relationship effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Query) and resource (token allowance amount), defining the exact relationship between owner and spender. It distinguishes from siblings like get_trc20_balance (holdings) and transfer_trc20 (execution) by focusing specifically on approval limits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies a security use case through 'Flags unlimited approvals as high risk,' suggesting when to use it (risk assessment). However, it lacks explicit guidance on next steps (e.g., using revoke_approval to mitigate risks) or contrasts with similar query tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
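The review notes the description never defines its "high risk" threshold. A plausible sketch of such a classifier (the half-of-max cutoff is an assumption, not necessarily the server's rule; wallets commonly set 2**256 - 1 or similar sentinel values for "infinite" approvals):

```python
MAX_UINT256 = 2**256 - 1


def classify_allowance(raw_allowance: int, decimals: int = 6) -> dict:
    """Flag effectively-unlimited TRC20 approvals.

    Any allowance at or above half of uint256 max is treated as an
    unlimited-approval sentinel rather than a real spending cap.
    """
    if raw_allowance >= MAX_UINT256 // 2:
        return {"unlimited": True, "risk": "high"}
    return {
        "unlimited": False,
        "risk": "normal",
        "amount": raw_allowance / 10 ** decimals,
    }
```

Documenting a rule like this in the description would let an agent act on the flag, for example by suggesting a revoke, instead of just echoing it.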
get_trc20_balance (Grade C, Destructive)
Get TRC20 token balance for an account
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | TRON base58 address | |
| contract_address | Yes | TRC20 contract address |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description claims a simple 'Get' operation implying read-only behavior, but annotations declare readOnlyHint=false and destructiveHint=true. This is a serious contradiction requiring explanation—if fetching a balance consumes resources or modifies state (perhaps by triggering a refresh), the description must disclose this rather than presenting it as a safe query.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficiently worded with no redundancy. However, given the contradictory destructive annotations and lack of output schema, the extreme brevity becomes a liability rather than a virtue—additional sentences explaining side effects would earn their place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool annotated as destructive with no output schema, the description fails to explain what side effects occur, what the return value looks like, or why a balance check would modify state. The behavioral gap makes this incomplete despite adequate parameter coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('TRON base58 address', 'TRC20 contract address'), the schema already fully documents parameters. The description implicitly maps 'account' to the address parameter but adds no syntax details, format examples, or semantic clarification beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' with resource 'TRC20 token balance' and scope 'for an account', clearly identifying the operation. However, it does not explicitly distinguish from siblings like get_trc20_allowance (which retrieves approved spending limits, not balances) or address why this 'get' operation is marked destructive.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like get_account (for general balances) or get_trc20_token_info (for metadata). No prerequisites, error conditions, or 'when-not-to-use' advice is included.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trc20_token_info (Grade C, Destructive)
Get TRC20 token metadata (name, symbol, decimals, total supply)
| Name | Required | Description | Default |
|---|---|---|---|
| contract_address | Yes | TRC20 contract address |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Severe contradiction with annotations. Description uses 'Get' implying a safe read operation, but annotations declare readOnlyHint=false and destructiveHint=true. The description fails to disclose what gets destroyed, why this is not read-only, or any other behavioral traits (rate limits, auth requirements) that would explain the destructive annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the action and immediately followed by the specific fields returned. No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While it lists return fields, it lacks critical behavioral context given the destructive annotation, provides no output schema details, and fails to explain why a 'get' operation would be destructive or non-idempotent. Given the contradiction and missing safety information, completeness is poor.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with 'contract_address' already described as 'TRC20 contract address'. The description provides no additional parameter semantics (format examples, validation rules), so it earns the baseline score of 3 for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Get') and resource ('TRC20 token metadata'), and lists specific return fields (name, symbol, decimals, total supply) which distinguishes it from siblings like get_trc20_balance or get_token_price. However, it does not explicitly differentiate from these siblings in the text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives such as get_token_price or get_contract_abi. No mention of prerequisites or preconditions for calling this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
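One concrete reason an agent needs this tool alongside get_trc20_balance: the raw on-chain integers that balance and allowance calls return are only interpretable with the `decimals` value from this metadata. A minimal conversion sketch (USDT on TRON uses 6 decimals, per its widely published token metadata):

```python
from decimal import Decimal


def to_human_units(raw_amount: int, decimals: int) -> Decimal:
    """Convert a raw on-chain integer into token units using the
    decimals value reported by TRC20 token metadata.

    Decimal avoids the float rounding you would get with 10**-decimals.
    """
    return Decimal(raw_amount) / (Decimal(10) ** decimals)
```

So a raw balance of 1_500_000 with 6 decimals is 1.5 tokens, and misreading decimals (say, treating a 6-decimal token as 18-decimal) silently misstates amounts by twelve orders of magnitude.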
get_trc20_transfers (Grade A, Destructive)
Get TRC20 token transfer history for a TRON address via TronGrid REST API. Returns a compact summary per transfer with token info. Use a small limit and paginate with fingerprint to control response size
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (1-200, default 10). Start small to avoid large responses; use fingerprint to paginate for more | |
| address | Yes | TRON base58 address (starts with T) | |
| only_to | No | Only show incoming transfers | |
| only_from | No | Only show outgoing transfers | |
| fingerprint | No | Pagination cursor from previous response meta.fingerprint — pass to fetch the next page | |
| max_timestamp | No | Maximum block timestamp in milliseconds | |
| min_timestamp | No | Minimum block timestamp in milliseconds | |
| only_confirmed | No | Only return confirmed transactions |
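The fingerprint cursor pattern described in the table can be sketched as a small argument builder. This is an illustrative helper, not part of the server: the function name is my own, the address is a placeholder, and only the argument shape from the parameter table is shown (the actual MCP client transport is out of scope).

```python
def transfers_page_args(address, limit=10, fingerprint=None, only_confirmed=None):
    """Build the argument object for one get_trc20_transfers call.

    Per the parameter table: start with a small limit (1-200, default 10),
    then pass the meta.fingerprint value from the previous response to
    fetch the next page.
    """
    if not 1 <= limit <= 200:
        raise ValueError("limit must be in 1-200")
    args = {"address": address, "limit": limit}
    if fingerprint is not None:
        args["fingerprint"] = fingerprint
    if only_confirmed is not None:
        args["only_confirmed"] = only_confirmed
    return args

# First page, then the next page using the cursor from the previous
# response's meta.fingerprint ("T..." is a placeholder base58 address).
first = transfers_page_args("T...")
second = transfers_page_args("T...", fingerprint="cursor-from-meta")
```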
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description claims a read operation ('Get', 'Returns') suggesting safe data retrieval, but the annotations declare readOnlyHint: false and destructiveHint: true, indicating state modification or destructive behavior. This is a serious contradiction that could mislead the agent about side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structured: purpose declaration, return format clarification, and pagination operational guidance. Every sentence earns its place with no redundancy or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description discloses the return format ('compact summary per transfer with token info') and external API dependency ('TronGrid'). With 8 fully-documented parameters, this provides sufficient context for invocation, though the annotation contradiction creates uncertainty about behavioral completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds valuable usage semantics by advising 'small limit' and explaining the pagination pattern ('paginate with fingerprint'), which helps the agent understand the interaction model beyond the schema's technical definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and resource ('TRC20 token transfer history') scoped to 'TRON address via TronGrid REST API', clearly distinguishing it from siblings like transfer_trc20 (which performs transfers) or get_trc20_balance (which retrieves balances).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides operational guidance ('Use a small limit and paginate with fingerprint') implying this should be used when querying potentially large datasets, but it does not explicitly differentiate when to use this versus get_transaction_history or other history-retrieval siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
is_transaction_pending (C, Destructive)
Check if a specific transaction is still in the pending pool (mempool)
| Name | Required | Description | Default |
|---|---|---|---|
| transaction_id | Yes | Transaction hash / txid (64-char hex string) |
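The schema's 64-character hex constraint on transaction_id can be pre-validated client-side before spending a call. A minimal sketch, assuming the schema means bare hex with no 0x prefix (the helper name is my own):

```python
import re

# 64 hex characters, no 0x prefix, per the schema description.
_TXID_RE = re.compile(r"^[0-9a-f]{64}$", re.IGNORECASE)

def looks_like_txid(value: str) -> bool:
    """Cheap shape check for a transaction hash before calling the tool."""
    return bool(_TXID_RE.match(value))
```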
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description uses the word 'Check,' implying a safe, read-only query. However, the annotations declare `destructiveHint: true` and `readOnlyHint: false`, suggesting this operation modifies or destroys state. This is a severe contradiction that undermines agent trust and safety decisions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with zero waste. Every word earns its place by defining the operation, target resource, and location (mempool).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
In the absence of an output schema, the description also fails to indicate what the tool returns (a boolean, a status object, or a presence indicator). Combined with the annotation contradiction and the lack of usage guidance, the description is insufficient for safe invocation without additional clarification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single `transaction_id` parameter (including its 64-char hex format), the description meets the baseline. It references 'specific transaction' but adds no semantic details beyond what the schema already provides (e.g., no examples, no validation rules).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks if a specific transaction remains in the pending pool/mempool, using a specific verb and resource. However, it could better distinguish from sibling `get_transaction` (which likely retrieves confirmed transaction details) by explicitly noting this queries unconfirmed/mempool state only.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like `get_pending_transactions` (list view), `get_pending_by_address` (address-filtered), or `get_transaction` (confirmed transactions). It omits prerequisites or timing considerations (e.g., when to poll).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_contract_methods (B, Destructive)
Get a human-readable summary of a smart contract's methods with signatures, inputs, outputs, and mutability. Auto-resolves proxy contracts.
| Name | Required | Description | Default |
|---|---|---|---|
| contract_address | Yes | Smart contract address (base58, starts with T) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
CRITICAL CONTRADICTION: Description frames this as a safe read operation ('Get... summary'), while annotations declare destructiveHint: true and readOnlyHint: false. The agent cannot determine if this modifies state or costs resources. Only partial credit for mentioning proxy auto-resolution.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two dense, high-value sentences. First covers core functionality and return structure; second adds the proxy behavior. Zero redundancy, well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite describing the output format well (compensating for missing output schema), the annotation contradiction creates a critical safety gap. Does not address why a 'list' operation would be destructive or what resources are at risk.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the single parameter fully documented (type, format, prefix requirement). Description adds no parameter details, which is acceptable given the complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: states the verb (Get), resource (smart contract methods), and detailed output format (signatures, inputs, outputs, mutability). The proxy auto-resolution clause further distinguishes this from generic ABI fetchers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus sibling `get_contract_abi` or `trigger_constant_contract`, nor any mention of preconditions (e.g., contract must be verified/accessible).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_proposals (A, Destructive)
List governance proposals on the TRON network with pagination support. Returns a compact summary per proposal (use get_proposal for full details including approval addresses). Newest first by default.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max proposals to return (default: 5) | |
| order | No | Sort order by proposal ID: 'desc' (default, newest first) or 'asc' (oldest first) | |
| offset | No | Skip first N proposals (default: 0, for pagination) |
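The limit/offset pagination above is a classic page walk. A sketch of generating successive argument sets, mirroring the schema defaults (limit 5, 'desc' newest first); the generator function is illustrative, not part of the server:

```python
def proposal_page_args(page: int, limit: int = 5, order: str = "desc"):
    """Arguments for page N (0-based) of list_proposals.

    Defaults mirror the schema: limit 5, order 'desc' (newest first).
    """
    if order not in ("asc", "desc"):
        raise ValueError("order must be 'asc' or 'desc'")
    return {"limit": limit, "offset": page * limit, "order": order}

# Pages 0, 1, 2 map to offsets 0, 5, 10.
offsets = [proposal_page_args(p)["offset"] for p in range(3)]
```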
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds useful behavioral context that annotations lack: explains return format is 'compact summary' (vs full), and clarifies default sort order ('Newest first'). However, fails to explain the anomalous destructiveHint=true and readOnlyHint=false annotations, which are counterintuitive for a list operation and deserve clarification.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences, zero waste. First establishes purpose and feature, second distinguishes sibling and return format, third states default ordering. Front-loaded with essential info.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately complete for a listing operation: covers resource, pagination, sorting defaults, and sibling relationship. Mentions return is 'compact summary' compensating somewhat for lack of output schema. Could improve by listing specific fields returned in the compact view.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description adds semantic value by explicitly linking limit/offset to 'pagination support' and reinforcing the default 'desc' (newest first) behavior mentioned in the order parameter description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'List' + specific resource 'governance proposals on the TRON network'. Clear scope includes pagination. Explicitly distinguishes from sibling 'get_proposal' by contrasting 'compact summary' vs 'full details'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly directs to sibling 'get_proposal for full details including approval addresses', establishing clear when-to-use/when-not-to-use guidance. Also implies pagination use case via 'offset' and 'limit' context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_witnesses (C, Destructive)
List super representatives (witnesses) on the TRON network with pagination support.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max witnesses to return (default: 5) | |
| offset | No | Skip first N witnesses (default: 0, for pagination) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description presents the tool as a standard 'List' operation, which semantically implies a safe, read-only query. This directly contradicts the annotations which declare readOnlyHint: false and destructiveHint: true. The description fails to explain why listing witnesses would be destructive or have side effects, creating a dangerous mismatch between expectations and actual behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence. It front-loads the verb and resource, avoids tautology, and every word earns its place. No restructuring or shortening would improve clarity without losing information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description should ideally describe the witness data structure returned. More critically, it completely omits any explanation of why this 'list' operation is marked as destructive and non-read-only in the annotations, leaving a critical behavioral gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameters (limit and offset) are fully documented in the schema itself. The description adds the phrase 'with pagination support' which maps to these parameters but does not add syntax constraints, validation rules, or usage examples beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action (List), the resource (super representatives/witnesses), and the network context (TRON). It specifies the scope includes pagination support. However, it cannot score a 5 due to the unresolved tension with the destructive annotation that calls the actual behavior into question.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'pagination support' which implicitly suggests when to use the tool (for batch retrieval), but provides no explicit guidance on when to prefer this over other retrieval tools like get_account, nor does it warn about the destructive nature implied by the annotations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
revoke_approval (A, Destructive)
Build unsigned transaction to revoke a TRC20 token approval (sets allowance to 0)
| Name | Required | Description | Default |
|---|---|---|---|
| owner | Yes | Token owner address (base58) — the account revoking the approval | |
| spender | Yes | Spender address to revoke | |
| fee_limit | No | Fee limit in TRX (default 15, max 15000) | |
| permission_id | No | Permission ID for multi-sig transactions | |
| contract_address | Yes | TRC20 token contract address |
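Assembling a revoke_approval request from the parameters above might look like this. A hedged sketch: the helper name and addresses are mine, the fee_limit check reflects the stated default of 15 and maximum of 15000 (a lower bound of 0 is assumed, since the schema states none), and remember the result is only an unsigned transaction:

```python
def revoke_approval_args(owner, spender, contract_address,
                         fee_limit=15, permission_id=None):
    """Arguments for revoke_approval, which builds an unsigned
    transaction setting the TRC20 allowance to 0; the result must
    still be signed and broadcast separately."""
    # Default 15, max 15000 per the schema; lower bound of 0 assumed.
    if not 0 <= fee_limit <= 15000:
        raise ValueError("fee_limit must be within 0-15000 TRX")
    args = {
        "owner": owner,
        "spender": spender,
        "contract_address": contract_address,
        "fee_limit": fee_limit,
    }
    if permission_id is not None:
        args["permission_id"] = permission_id  # multi-sig accounts only
    return args
```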
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (destructiveHint: true, readOnlyHint: false), the description adds crucial context that this builds an 'unsigned transaction' rather than executing immediately, and specifies the exact behavioral outcome ('sets allowance to 0'). This clarifies the preparatory nature of the tool and the specific state change.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, efficient sentence with zero waste. Key information is front-loaded ('Build unsigned transaction'), followed by purpose ('revoke a TRC20 token approval'), and technical clarification ('sets allowance to 0'). Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and rich annotations, the description provides sufficient context for a blockchain transaction builder. It explains the return type conceptually (unsigned transaction) despite the absence of an output schema. It could be improved by explicitly mentioning the signing/broadcast requirement, though 'unsigned' sufficiently implies it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents all parameters. The description adds no parameter-specific details beyond the schema, maintaining the baseline score of 3. It references 'TRC20 token' implicitly mapping to contract_address but doesn't elaborate on parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Build unsigned transaction'), resource ('TRC20 token approval'), and exact semantic effect ('sets allowance to 0'). It clearly distinguishes from sibling tools like transfer_trc20 or get_trc20_allowance by specifying this is for revoking existing approvals rather than transferring tokens or querying state.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (revoking approvals) but lacks explicit guidance on when to use this versus checking allowance first with get_trc20_allowance, or workflow guidance that the resulting transaction must be signed and broadcast. No 'when-not' or alternative guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
transfer_trc20 (A, Destructive)
Create an unsigned TRC20 token transfer transaction. Returns transaction hex for signing.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Recipient address (base58, starts with T) | |
| from | Yes | Sender address (base58, starts with T) | |
| amount | Yes | Amount in token units (human-readable, e.g., '100.5') | |
| fee_limit | No | Fee limit in TRX, range 0-15000 (default: 100) | |
| permission_id | No | Permission ID for multi-sig transactions | |
| contract_address | Yes | TRC20 contract address (base58, starts with T) |
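Note that amount is a human-readable string such as '100.5', not an integer in base units; presumably the server scales it by the token's decimals. A sketch of building the call arguments under that assumption (helper name and addresses are placeholders):

```python
def trc20_transfer_args(from_addr, to_addr, contract_address,
                        amount, fee_limit=100):
    """Arguments for transfer_trc20.

    amount is human-readable token units as a string (e.g. '100.5');
    decimal scaling is assumed to happen server-side. The tool returns
    unsigned transaction hex, so signing and broadcasting are separate
    follow-up steps.
    """
    if not 0 <= fee_limit <= 15000:
        raise ValueError("fee_limit must be within 0-15000 TRX")
    return {
        "from": from_addr,
        "to": to_addr,
        "contract_address": contract_address,
        "amount": str(amount),  # keep as a string, per the schema example
        "fee_limit": fee_limit,
    }
```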
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish destructiveHint=true and readOnlyHint=false, confirming this is a write operation. The description adds valuable context that the transaction is 'unsigned' and returns hex for signing, clarifying the intermediate state. However, it omits details about energy consumption, broadcasting requirements, or failure modes that would be helpful for a destructive financial operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences. The first establishes the action and object, the second clarifies the return value. Zero redundancy, every word earns its place. Perfectly front-loaded with the most critical information (unsigned transaction creation).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately complete for a transaction builder with 100% schema coverage and provided annotations. The description compensates for the missing output schema by specifying the return format (transaction hex). Minor gap: it could mention that signing and broadcasting are required subsequent steps, though 'for signing' adequately implies this workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with clear descriptions for all 6 parameters (from, to, contract_address, amount, fee_limit, permission_id). The description provides no additional parameter semantics beyond the schema, which is acceptable given the comprehensive schema coverage. Baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent clarity with specific verb 'Create', resource 'unsigned TRC20 token transfer transaction', and scope distinguishing it from transfer_trx (native currency) and trigger_contract (immediate execution). The mention of 'TRC20' specifically identifies this as a token contract interaction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context by stating it returns 'transaction hex for signing', indicating this is for offline/signing workflows rather than immediate execution. However, lacks explicit when-to-use guidance (e.g., 'use when you need to sign offline' vs explicit alternatives like trigger_contract) and no exclusions or prerequisites mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
transfer_trx (A, Destructive)
Create an unsigned TRX transfer transaction. Returns transaction hex for signing.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Recipient address (base58, starts with T) | |
| from | Yes | Sender address (base58, starts with T) | |
| memo | No | Optional memo to attach to the transaction | |
| amount | Yes | Amount in TRX (e.g., '100.5') | |
| permission_id | No | Permission ID for multi-sig transactions |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare destructiveHint=true and openWorldHint=true; the description adds crucial behavioral context not in annotations: specifically that the transaction is 'unsigned' and returns 'hex for signing', indicating this is preparation rather than immediate broadcast.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences with zero waste. First sentence establishes scope (unsigned TRX transaction), second clarifies output format (hex for signing). Efficiently front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a transaction preparation tool: covers return values despite lacking output schema, references multi-sig via permission_id in schema, and safety profile via annotations. Minor gap: could mention fee/bandwidth implications given destructive annotation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (all 5 parameters documented), the description appropriately relies on the schema for parameter details and does not need to repeat them. Baseline 3 is correct for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Create'), clear resource ('unsigned TRX transfer transaction'), and distinguishes from siblings by specifying 'TRX' (native token) versus 'TRC20' (token standard), and 'unsigned' versus presumably signed/broadcast operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the workflow ('Returns transaction hex for signing') suggesting this precedes a signing step, but lacks explicit guidance on when to use this versus transfer_trc20, and omits prerequisites like sufficient balance or bandwidth requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trigger_constant_contract (B, Destructive)
Call a read-only (view/pure) smart contract method. No transaction created, no fees. Returns decoded result. Provide either method+params OR data (pre-packed calldata).
| Name | Required | Description | Default |
|---|---|---|---|
| data | No | Pre-packed ABI calldata as hex (0x prefix optional). When provided, method and params are ignored. | |
| from | No | Caller address (base58, starts with T; optional — defaults to zero address) | |
| method | No | Method signature (e.g., 'totalSupply()', 'balanceOf(address)'). Required unless data is provided. | |
| params | No | Method parameters as JSON array. Plain values: ["TJD...", "1000"] (types inferred from method signature). Typed objects also accepted: [{"address": "TJD..."}, {"uint256": "1000"}] | |
| token_id | No | TRC10 token ID for simulating TRC10 token transfers | |
| call_value | No | Amount in SUN to send with call (default: 0, 1 TRX = 1000000 SUN). Required for simulating payable functions. | |
| token_value | No | TRC10 token amount to send with call (default: 0) | |
| contract_address | Yes | Smart contract address (base58, starts with T) |
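The "method+params OR data" rule can be enforced before the call ever leaves the client. A minimal sketch, with a hypothetical helper name and placeholder values, encoding the precedence stated in the schema (data, when provided, causes method and params to be ignored):

```python
def constant_call_args(contract_address, method=None, params=None, data=None):
    """Arguments for trigger_constant_contract.

    Exactly one input style: method (with optional params), or
    pre-packed calldata in data. Per the schema, data takes precedence
    and method/params are ignored when it is set.
    """
    if data is None and method is None:
        raise ValueError("provide either method (+params) or data")
    args = {"contract_address": contract_address}
    if data is not None:
        args["data"] = data
    else:
        args["method"] = method
        if params is not None:
            args["params"] = params
    return args

# Query a balance via the method+params form ("T..." is a placeholder).
q = constant_call_args("T...", method="balanceOf(address)", params=["T..."])
# Or hand over pre-packed calldata instead.
d = constant_call_args("T...", data="0xdeadbeef")
```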
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
CRITICAL CONTRADICTION: The description explicitly claims this is a 'read-only' operation that creates 'No transaction', suggesting a safe simulation. However, annotations declare readOnlyHint=false and destructiveHint=true, indicating a state-destructive write operation. These are mutually exclusive. The description also fails to disclose auth requirements, rate limits, or side effects beyond what the misleading annotations suggest.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, zero waste. Front-loaded with purpose ('Call a read-only...'), followed by behavioral constraints ('No transaction...'), output format ('Returns decoded result'), and input guidance ('Provide either...'). Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the tool having 8 parameters and no output schema, the phrase 'Returns decoded result' partially compensates for the missing output schema. However, the annotation contradiction severely undermines completeness: the tool appears safe, yet the annotations warn of destructive behavior, leaving the agent unsure of the actual behavioral contract. For a 100% coverage schema, the description meets minimum needs, but the contradiction creates critical ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds valuable semantic context by documenting the XOR relationship between the 'method+params' approach and the 'data' parameter ('Provide either... OR'), which prevents invalid invocations. It also clarifies 'pre-packed calldata' concept. However, it does not add examples or complex encoding guidance beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it calls a read-only smart contract method and explicitly notes 'No transaction created, no fees. Returns decoded result.' Specific verb ('Call') and resource ('smart contract method') are present. However, it does not explicitly differentiate from the sibling tool 'trigger_contract', leaving ambiguity about when to use this vs the state-changing alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides input syntax guidance ('Provide either method+params OR data') but lacks contextual guidance on when to select this tool versus 'trigger_contract' or other query tools like 'get_contract_abi'. The phrase 'No transaction created' implies usage context but does not explicitly state when to prefer this over alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trigger_contract (Grade A, Destructive)
Call a smart contract method. Returns unsigned transaction hex for signing. Provide either method+params OR data (pre-packed calldata).
| Name | Required | Description | Default |
|---|---|---|---|
| data | No | Pre-packed ABI calldata as hex (0x prefix optional). When provided, method and params are ignored. | |
| from | Yes | Caller address (base58, starts with T) | |
| method | No | Method signature (e.g., 'transfer(address,uint256)'). Required unless data is provided. | |
| params | No | Method parameters as JSON array. Plain values: ["TJD...", "1000"] (types inferred from method signature). Typed objects also accepted: [{"address": "TJD..."}, {"uint256": "1000"}] | |
| fee_limit | No | Fee limit in whole TRX (integer), range 0-15000 (default: 100) | |
| call_value | No | Amount to send with call in SUN (default: 0) | |
| permission_id | No | Permission ID for multi-sig transactions | |
| contract_address | Yes | Smart contract address (base58, starts with T) |
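The precedence between the two input styles can be sketched as a hypothetical pre-flight check an agent might run before calling the tool. This is an illustrative sketch, not part of the server; the addresses and calldata below are placeholders, not real values:

```python
# Hypothetical pre-flight check mirroring the documented precedence:
# when 'data' is supplied, 'method' and 'params' are ignored;
# otherwise 'method' is required.

def normalize_trigger_args(args: dict) -> dict:
    if args.get("data"):
        # Pre-packed calldata wins; drop the ignored fields.
        return {k: v for k, v in args.items() if k not in ("method", "params")}
    if not args.get("method"):
        raise ValueError("provide either 'data' or 'method'")
    return args

# Style 1: method signature plus plain params (types inferred).
call_by_method = {
    "from": "TXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # placeholder caller
    "contract_address": "TYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY",  # placeholder
    "method": "transfer(address,uint256)",
    "params": ["TJD...", "1000"],
}

# Style 2: pre-packed ABI calldata; any method/params present are ignored.
call_by_data = {
    "from": "TXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    "contract_address": "TYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY",
    "data": "0xa9059cbb",  # truncated selector, illustrative only
}
```

An agent that applies this normalization never sends a payload where 'data' and 'method' silently conflict, which is exactly the ambiguity the description's XOR sentence exists to prevent.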
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true, but the description crucially clarifies that this returns 'unsigned transaction hex for signing' rather than executing immediately, critical behavioral context not present in the annotations. It also discloses the XOR input pattern.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, zero waste: purpose (sentence 1), output format (sentence 2), critical input constraint (sentence 3). Perfectly front-loaded with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for the missing output schema by specifying the return format ('unsigned transaction hex'). With 100% schema coverage and good annotations, the description adequately covers the 8 parameters. It could mention TRON-specific context (base58 'T' addresses, SUN units), but these are covered in the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds valuable semantic relationship: that data and method+params are mutually exclusive with precedence rules, which is not obvious from isolated schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Call') and resource ('smart contract method'), with specific scope about returning unsigned transaction hex. However, it doesn't explicitly distinguish from sibling 'trigger_constant_contract' (likely for read-only calls), though the 'unsigned transaction hex' implies state-changing intent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear input constraints ('Provide either method+params OR data'), indicating mutually exclusive parameter usage. However, lacks explicit guidance on when to use this vs 'trigger_constant_contract' or the overall transaction lifecycle (signing/broadcasting steps).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
undelegate_resource (Grade A, Destructive)
Reclaim previously delegated energy or bandwidth (Stake 2.0). Returns unsigned transaction hex for signing.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Address to reclaim from (base58, starts with T) | |
| from | Yes | Delegator address (base58, starts with T) | |
| memo | No | Optional memo to attach to the transaction | |
| amount | Yes | Amount in TRX to undelegate (e.g., '100.5') | |
| resource | Yes | Resource type | |
| permission_id | No | Permission ID for multi-sig transactions |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare destructive/mutable hints, the description adds crucial behavioral context: it returns 'unsigned transaction hex for signing' (critical invocation detail) and specifies 'Stake 2.0' protocol version.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first sentence establishes action, resource, and protocol; second sentence discloses return format. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description compensates by stating the return format (unsigned hex). Combined with complete input schema coverage and destructive annotations, this provides sufficient context for a blockchain transaction tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all 6 parameters. The description mentions 'energy or bandwidth' which aligns with the resource enum, but adds no additional syntax, format, or semantic guidance beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Reclaim') with clear resources ('energy or bandwidth') and protocol context ('Stake 2.0'). It implicitly distinguishes from sibling 'delegate_resource' by specifying this reverses a previous delegation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'previously delegated' implies prerequisites (must have active delegation), but the description lacks explicit when-to-use guidance, exclusions, or named alternatives like 'delegate_resource'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
unfreeze_balance (Grade A, Destructive)
Unstake TRX (Stake 2.0). Returns unsigned transaction hex for signing. Note: unstaked TRX has a 14-day withdrawal period.
| Name | Required | Description | Default |
|---|---|---|---|
| from | Yes | Account address (base58, starts with T) | |
| memo | No | Optional memo to attach to the transaction | |
| amount | Yes | Amount in TRX to unstake (e.g., '100.5') | |
| resource | Yes | Resource type | |
| permission_id | No | Permission ID for multi-sig transactions |
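The 'amount' field here takes whole TRX as a decimal string, while other tools in this server (e.g. trigger_contract's call_value) work in SUN, so an agent may need to convert between the two. A minimal sketch, using TRON's fixed 1 TRX = 1,000,000 SUN ratio:

```python
from decimal import Decimal

SUN_PER_TRX = 1_000_000  # fixed TRON denomination: 1 TRX = 1,000,000 SUN

def trx_to_sun(amount: str) -> int:
    """Convert a decimal TRX string such as '100.5' to integer SUN,
    rejecting sub-SUN precision (more than 6 decimal places)."""
    sun = Decimal(amount) * SUN_PER_TRX
    if sun != sun.to_integral_value():
        raise ValueError(f"{amount} TRX is not a whole number of SUN")
    return int(sun)
```

Using Decimal rather than float avoids the rounding surprises that binary floating point introduces for amounts like '0.1'.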
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructive/non-readonly status, but the description adds crucial behavioral details: the return format (unsigned hex transaction requiring signing) and the 14-day lockup constraint, neither of which are inferable from annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: action definition, return value explanation, and temporal constraint warning. Information is front-loaded and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description compensates by stating the return value format. Combined with the withdrawal period warning and consistent annotations covering safety/destructive hints, this is sufficiently complete for a staking mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for all 5 parameters (from, amount, resource, memo, permission_id). The description adds no parameter-specific semantics beyond the schema, which is appropriate when schema documentation is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the action (Unstake), resource (TRX), and mechanism (Stake 2.0), distinguishing it from the sibling freeze_balance tool and other resource management operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 14-day withdrawal period is critical usage context that affects user decisions. However, it could explicitly clarify the relationship to freeze_balance (inverse operation) and undelegate_resource (different unstaking path).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_address (Grade A, Destructive)
Validate a TRON address and convert supported inputs to TRON base58/hex. Accepts base58 (T...), TRON hex (41...), or Ethereum/EVM format (0x...).
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | TRON address (base58 T..., hex 41..., or Ethereum 0x...) |
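The three accepted input shapes can be sketched as simple format checks. Note this sketch only classifies by shape; it omits the base58check checksum verification that real validation (and presumably this tool) performs:

```python
import re

# Rough classification of the three accepted address shapes.
# Base58 TRON addresses are 34 characters starting with 'T', drawn from
# the base58 alphabet (no 0, O, I, or l).
BASE58_ADDR = r"T[1-9A-HJ-NP-Za-km-z]{33}"

def classify_address(addr: str) -> str:
    if re.fullmatch(BASE58_ADDR, addr):
        return "base58"
    if re.fullmatch(r"41[0-9a-fA-F]{40}", addr):  # TRON hex: 0x41 prefix byte
        return "tron-hex"
    if re.fullmatch(r"0x[0-9a-fA-F]{40}", addr):  # Ethereum/EVM format
        return "evm"
    return "invalid"
```

A format check like this can cheaply filter obviously malformed inputs before an agent spends a round trip on the tool call.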
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Critical gap: annotations indicate destructiveHint=true and openWorldHint=true, but the description portrays a pure validation/conversion utility, with no explanation of what state changes occur or why external network calls are needed for address validation. Nor does it explain idempotentHint=false.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First establishes purpose, second specifies input variants. Efficient front-loading of essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter utility, but fails to address the surprising destructive/openWorld annotations which raise questions about side effects. No output schema, but description hints at return format (base58/hex).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete parameter documentation. Description reinforces the schema with format examples (T..., 41..., 0x...) but adds no additional syntactic constraints or usage semantics beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verbs (validate, convert) and resource (TRON address). Explicitly distinguishes from account analysis siblings by focusing on address format normalization rather than account state inspection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Lists acceptable input formats (base58, hex, EVM) which implies when to use it (when address format is uncertain), but lacks explicit guidance on when validation is required vs passing addresses directly to sibling tools like transfer_trx or get_account.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vote_witness (Grade A, Destructive)
Vote for super representatives. Returns unsigned transaction hex for signing. Requires staked TRX (1 TRX staked = 1 vote).
| Name | Required | Description | Default |
|---|---|---|---|
| from | Yes | Voter address (base58, starts with T) | |
| memo | No | Optional memo to attach to the transaction | |
| votes | Yes | Map of witness address to vote count, e.g., {"TKSXDA...": 100, "TLyqz...": 50} | |
| permission_id | No | Permission ID for multi-sig transactions |
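The '1 TRX staked = 1 vote' rule implies a budget check an agent could apply before building the vote map. This is a hypothetical client-side sketch, not part of the tool; the witness addresses reuse the truncated placeholders from the schema example:

```python
def votes_within_budget(votes: dict[str, int], staked_trx: int) -> bool:
    """With 1 TRX staked = 1 vote, the vote total across all witnesses
    cannot exceed the voter's staked TRX. Also rejects non-positive counts."""
    return all(v > 0 for v in votes.values()) and sum(votes.values()) <= staked_trx

# Placeholder witness addresses, truncated as in the schema example.
votes = {"TKSXDA...": 100, "TLyqz...": 50}
```

Running this before invoking vote_witness avoids building a transaction that the network would reject for exceeding the account's staked balance.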
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructive, non-idempotent write operations. Description adds critical behavioral details: return format (unsigned hex requiring signing) and domain-specific constraints (1 TRX staked = 1 vote ratio). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose first, output format second, requirements third. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a voting mutation tool despite lacking output schema. Covers return value (unsigned hex), staking prerequisites, and permission context. Could mention revoting/changing votes, but sufficient for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with clear field descriptions. Description mentions the stake-to-vote ratio (1:1) which supplements understanding of the 'votes' parameter, but baseline 3 applies since schema carries full semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Vote') and resource ('super representatives'). Distinct from sibling 'list_witnesses' by action type. Includes scope clarification that it returns unsigned hex rather than executing immediately.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear prerequisites ('Requires staked TRX') and output context ('Returns unsigned transaction hex for signing') that imply a multi-step workflow. Lacks explicit naming of prerequisite tools (e.g., freeze_balance) but gives sufficient constraints for correct usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
withdraw_expire_unfreeze (Grade A, Destructive)
Withdraw TRX that has completed the 14-day unstaking period (Stake 2.0). Returns unsigned transaction hex for signing.
| Name | Required | Description | Default |
|---|---|---|---|
| from | Yes | Account address (base58, starts with T) | |
| memo | No | Optional memo to attach to the transaction | |
| permission_id | No | Permission ID for multi-sig transactions |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds crucial behavioral context not present in annotations: 'Returns unsigned transaction hex for signing' clarifies that the tool prepares but does not submit the transaction. It also provides protocol context ('Stake 2.0'). It aligns with annotations (destructive=true, readOnly=false) without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero redundancy: the first establishes the operation and constraints, the second discloses the return format. Every word earns its place. The description is appropriately front-loaded with the action verb.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a blockchain mutation tool with no output schema, the description adequately compensates by specifying the unsigned hex return format. The 14-day constraint and Stake 2.0 context provide necessary domain specificity. Could be improved by explicitly mentioning that signing and broadcasting are required separately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured fields adequately document all three parameters (from, memo, permission_id). The description does not add semantic details beyond the schema, which is acceptable given the high baseline coverage. Baseline score 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Withdraw') with a specific resource ('TRX') and precise scope condition ('completed the 14-day unstaking period'). It clearly distinguishes this from sibling tools like 'unfreeze_balance' (which initiates unstaking) by specifying this is the post-period withdrawal step for Stake 2.0.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'completed the 14-day unstaking period' constraint provides clear contextual guidance on when to invoke this tool versus alternatives like 'freeze_balance' or 'unfreeze_balance'. However, it does not explicitly name the sibling tool that should be called first to initiate the unstaking process.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.