Glama · Hovsteder

TRON Energy/Bandwidth MCP Server

Server Quality Checklist

Profile completion: 92%
A complete profile improves this server's visibility in search results.
  • Disambiguation 4/5

    Tools are generally well-differentiated, with specific purposes covering distinct lifecycle stages (build/sign/broadcast patterns) and resource types. Minor potential for confusion exists between register (agent auth) and register_pool (creating a selling pool), and among the various get_order* tools, but the descriptions clarify these boundaries adequately.

    Naming Consistency 4/5

    Follows consistent snake_case verb_noun patterns throughout (e.g., buy_energy, get_balance, configure_auto_selling). Minor deviations include abbreviation 'tx' in broadcast_signed_permission_tx versus full 'transaction' in broadcast_transaction, and bare 'register' versus verb_object elsewhere.

    Tool Count 3/5

    At 27 tools, the set is heavy for the domain despite covering a complex blockchain resource trading workflow. While permission management necessitates separate build/sign/broadcast steps, and both swaps and voting are included, the count exceeds the typical 3-15 range for well-scoped servers and approaches the 'too many' threshold.

    Completeness 3/5

    Covers CRUD operations for pools, auto-selling configuration, and order management, but has notable gaps: despite the server name and price endpoints supporting bandwidth, there is no buy_bandwidth tool (only buy_energy). Additionally, manual claim_rewards and cancel_order operations appear to be missing.
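The buy_bandwidth gap noted above could be closed with a tool that mirrors buy_energy. The sketch below is hypothetical: the tool name follows the server's verb_noun convention, but the description wording and parameter names (receiverAddress, amount, durationMinutes) are assumptions, not the server's actual schema.

```python
# Hypothetical buy_bandwidth definition mirroring buy_energy, shaped like an
# MCP tools/list entry. All field values below are illustrative assumptions.
buy_bandwidth_tool = {
    "name": "buy_bandwidth",
    "description": (
        "Buy TRON Bandwidth for a receiver address. Creates an order and "
        "delegates bandwidth on-chain once payment is confirmed. "
        "Use estimate_cost first to preview pricing."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "receiverAddress": {
                "type": "string",
                "description": "TRON base58 address that receives the bandwidth",
            },
            "amount": {
                "type": "integer",
                "description": "Bandwidth units to purchase",
                "minimum": 1,
            },
            "durationMinutes": {
                "type": "integer",
                "description": "Delegation duration in minutes",
            },
        },
        "required": ["receiverAddress", "amount", "durationMinutes"],
    },
}
```

A symmetric pair like this would close the Completeness gap without disturbing the Disambiguation or Naming Consistency scores.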

  • Average 3.9/5 across 24 of 27 tools scored. Lowest: 2.9/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 27 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

Tool Scores

  • Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden. While it mentions 'delegation progress' as specific returned data, it fails to clarify other critical behaviors: what happens if the orderId doesn't exist (error vs null), whether this requires authentication, or if there are rate limits. 'Get' implies read-only safety, but this is not confirmed.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
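The missing annotations this score points to could be supplied with the MCP specification's tool annotation hints (readOnlyHint, destructiveHint, idempotentHint, openWorldHint). The sketch below pairs them with a sharpened description; the annotation keys come from the MCP tool schema, while the description text and the stated error behavior are assumed rewrites, not the server's current wording.

```python
# Sketch: MCP-style annotations taking over the disclosure burden for a
# read-only lookup like get_order_status. The description here is an
# assumed rewrite; the error-vs-null behavior it asserts is illustrative.
get_order_status_tool = {
    "name": "get_order_status",
    "description": (
        "Get detailed status of a specific order, including delegation "
        "progress. Read-only; fails with an error if the orderId is "
        "unknown. Requires API key authentication."
    ),
    "annotations": {
        "readOnlyHint": True,      # no side effects; safe to call speculatively
        "destructiveHint": False,  # never mutates or deletes data
        "idempotentHint": True,    # repeated calls return the same result
        "openWorldHint": True,     # queries an external TRON service
    },
}
```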

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence is appropriately sized with no redundant phrases. 'Including delegation progress' earns its place by specifying what 'detailed status' encompasses. However, it could front-load more usage context given the lack of an output schema.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a low-complexity tool (1 parameter, flat schema), the description adequately hints at output contents ('detailed status', 'delegation progress') given the absence of an output schema. However, with no annotations to rely on, it should provide more behavioral context to be fully complete.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100% (orderId fully documented as UUID format), establishing the baseline score. The description implies single-entity lookup through 'specific order' but doesn't add syntax details, validation rules, or formatting guidance beyond what's in the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Get') and resource ('order status') and clarifies scope with 'specific order'. It adds valuable context about returned data ('including delegation progress'). However, it doesn't explicitly differentiate from sibling tool 'get_orders' (plural), leaving the selection logic slightly ambiguous despite the implication.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'get_orders' for listing multiple orders. It omits prerequisites (needing a valid orderId first) and doesn't indicate when NOT to use it (e.g., if you only need summary data).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
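One way to address both gaps flagged here is the "use X instead of Y when Z" pattern. The rewrite below is illustrative only — the wording is an assumption that builds on the sibling relationship with get_orders described elsewhere in this review.

```python
# Illustrative rewrite of the get_order_status description adding explicit
# usage guidance; this is assumed wording, not the server's actual text.
IMPROVED_DESCRIPTION = (
    "Get detailed status of one order by its orderId (UUID), including "
    "delegation progress. Use get_orders instead when you need to list or "
    "filter multiple orders; call this tool only after an orderId is known, "
    "e.g. from buy_energy or a prior get_orders call."
)
```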

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the behavioral burden. It compensates for missing output schema by listing return fields (status, amounts, transaction hashes, timestamps) and mentions authentication requirement. However, it doesn't explicitly declare the read-only/safe nature of the operation or describe pagination behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three well-structured sentences totaling ~30 words. Information is front-loaded: purpose first, output fields second, auth requirement third. No redundant or wasted language; each sentence contributes distinct information not replicated in structured fields.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Reasonably complete for a query tool: describes purpose, return data structure (compensating for no output schema), and auth needs (compensating for no annotations). However, omits important behavioral context like the default poolAddress behavior (falls back to first registered pool) and lacks safety assertions expected for unannotated tools.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing baseline 3. The description mentions 'stake, vote, claim' which maps to the actionType enum values, and 'your pool' contextualizes poolAddress, adding minor semantic value beyond the schema but not explaining parameter interactions (e.g., that poolAddress defaults to first registered pool).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb 'View' with resource 'execution history of automatic actions (stake, vote, claim)' and scope 'for your pool'. Enumerating the specific action types (stake, vote, claim) helps distinguish from sibling tools like get_auto_selling_config and trigger_vote, though it doesn't explicitly contrast with them.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides prerequisite context with 'Requires API key', indicating authentication needs. However, lacks explicit when-to-use guidance (e.g., 'Use this to audit past automatic operations') or when-not-to-use relative to siblings like get_auto_selling_config.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full disclosure burden. It reveals sorting behavior ('sorted by creation time') and scope ('recent orders'), but omits time range definitions, pagination mechanics (though schema shows no cursor), authentication requirements, or return structure details.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely efficient at 16 words across two sentences. Front-loaded with action verb, zero redundancy, and second sentence delivers behavioral value (sorting) without waste.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a simple 2-parameter read operation with complete schema documentation. Absence of output schema is partially mitigated by 'Returns recent orders', though specific return fields remain unspecified. Reasonably complete given low complexity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing a baseline score of 3. Description mentions 'optional status filter' which aligns with the non-required status parameter, but adds no semantic depth regarding the enum values or limit behavior beyond schema documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb-resource combination ('Get your order history') with specific scope indicated by 'history'. However, it fails to distinguish from sibling 'get_order_status', which likely retrieves status for specific orders rather than listing historical orders.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides basic functionality description but offers no guidance on when to use this versus 'get_order_status' or other order-related tools. No mention of prerequisites, permissions, or alternative approaches for order retrieval.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full disclosure burden and appropriately notes the API key requirement (auth context). However, it omits other behavioral details such as rate limits, read-only nature, or return format that would be expected given the absence of output schema and annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficient sentences with zero waste; the first front-loads the verb and resource while using a colon-separated list to enumerate specific statistics types, and the second clearly states the auth requirement.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Reasonably complete for a zero-parameter tool, listing the specific statistical categories returned to compensate for missing output schema. However, it fails to clarify the relationship with similar sibling 'get_pool_delegations' or describe the response structure/format.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema contains zero parameters, triggering the baseline score per the rubric. The description appropriately avoids inventing parameter documentation where none exists, though it implicitly references authentication, which belongs in connection configuration rather than input parameters.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action 'Get' and resource 'statistics for your energy/bandwidth pools', listing concrete metrics (delegations, revenue, utilization, APY). However, it does not explicitly differentiate from sibling tool 'get_pool_delegations' despite overlapping terminology.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Indicates authentication requirement ('Requires API key') which serves as a prerequisite constraint. Lacks explicit guidance on when to select this tool versus alternatives like 'get_pool_delegations' or what prerequisites beyond API access might exist.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden by disclosing on-chain processing, potential delays (few minutes), and minimum amount constraints. Could improve by mentioning irreversibility or auth requirements.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three well-structured sentences with zero waste: action definition, business constraint, and processing behavior are all front-loaded and essential. Slightly less concise than the two-sentence ideal but efficient.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a single-parameter financial tool but missing critical safety context given no annotations: irreversibility, authorization requirements, and failure modes are not disclosed.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, so the baseline score applies. The description repeats the minimum withdrawal constraint already documented in the schema parameter description but adds no additional semantic guidance beyond the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clearly states the action (withdraw), resource (TRX from account balance), and destination (wallet), but does not explicitly differentiate from sibling tools like get_earnings or execute_swap.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides critical constraint (100 TRX minimum) but offers no guidance on when to use this tool versus alternatives like get_earnings or execute_swap, nor prerequisites for execution.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
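The 100 TRX minimum is the one constraint this tool's description does disclose, and a client can enforce it before ever invoking the tool. The guard below is a minimal sketch: the threshold comes from the description, while the function name and error wording are assumptions.

```python
# Minimal client-side guard for the disclosed withdrawal constraint. Only
# the 100 TRX minimum is taken from the tool description; everything else
# here is an illustrative assumption.
MIN_WITHDRAW_TRX = 100

def validate_withdrawal(amount_trx: float) -> None:
    """Raise ValueError before calling the withdraw tool with a bad amount."""
    if amount_trx < MIN_WITHDRAW_TRX:
        raise ValueError(
            f"withdraw requires at least {MIN_WITHDRAW_TRX} TRX, got {amount_trx}"
        )
```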

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It partially compensates by listing what data is returned ('resources', 'duration constraints', 'reserves'), but omits safety traits (read-only nature), error conditions, or side effects that annotations would typically cover.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficient sentences with zero waste. Front-loaded with action verb and no filler text. Every clause earns its place by specifying scope or return value details.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a parameterless getter despite missing output schema. The description enumerates the configuration aspects returned (resources, constraints, reserves), providing sufficient context for invocation without requiring parameter guidance.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Zero parameters are present; per the rubric, the baseline is 4. Schema coverage is trivially 100% with no properties to describe.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb ('Get') and resource ('auto-selling configuration') with scope ('for your pools'). Implicitly distinguishes from sibling 'configure_auto_selling' by being a getter versus setter, though explicit contrast would strengthen this.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use versus alternatives like 'configure_auto_selling' or what prerequisites exist (e.g., needing registered pools). Agent must infer usage from the verb 'Get' alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. While 'Update' implies mutation, description lacks critical behavioral details: whether this is partial (PATCH) or full replacement (PUT), what happens to unspecified parameters (8 of 9 are optional), side effects, idempotency, or success indicators. 'Toggle' implies boolean switching but doesn't clarify state machine behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three efficient sentences with zero waste. Front-loaded with primary purpose ('Update auto-selling configuration...'). Each sentence earns its place: purpose statement, capability enumeration, prerequisite instruction. No redundant wording or unnecessary verbosity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a 9-parameter configuration tool but contains gaps. Covers main functional areas (selling toggles, reserves, duration) and prerequisite workflow. Missing: partial vs full update semantics, output/confirmation details (no output schema exists), and behavioral constraints (e.g., whether minDuration must be less than maxDuration). Acceptable but not comprehensive given mutation complexity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. Description adds valuable semantic context by specifying the data source for 'configId' ('from get_auto_selling_config'), which is prerequisite information not found in schema. Also groups parameters functionally ('Toggle energy/bandwidth selling, set reserves') aiding comprehension beyond individual field descriptions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb ('Update') + resource ('auto-selling configuration') + scope ('for a pool'). Lists specific capabilities (toggle selling, set reserves, duration constraints). Distinguishes from siblings by referencing 'get_auto_selling_config' for prerequisite configId, implying this is modification of existing config vs retrieval.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear prerequisite guidance: 'Pass the configId from get_auto_selling_config,' establishing the workflow sequence. Lacks explicit 'when-not-to-use' or alternative selection guidance, but the prerequisite context effectively guides usage without implying exclusions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
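The prerequisite workflow praised here — obtain configId from get_auto_selling_config, then call configure_auto_selling — can be sketched as a two-step client call. In the sketch, call_tool stands in for a generic MCP client invocation, and the parameter names besides configId (poolAddress, energyReserve) are assumptions.

```python
# Two-step workflow implied by the description: read the config to obtain
# its configId, then submit the update. call_tool is a stand-in for an MCP
# client call; poolAddress/energyReserve are assumed parameter names.
def update_auto_selling(call_tool, pool_address: str, energy_reserve: int):
    config = call_tool("get_auto_selling_config", {})  # step 1: prerequisite read
    return call_tool("configure_auto_selling", {       # step 2: mutation
        "configId": config["configId"],                # id sourced from step 1
        "poolAddress": pool_address,
        "energyReserve": energy_reserve,
    })
```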

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description must carry the full behavioral burden. It explains the return values (energy needed and cost in TRX) but fails to confirm this is a read-only estimation operation with no side effects, state changes, or rate limits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficient sentences with no redundancy. The first states purpose, the second states inputs and outputs. Every word earns its place in this compact definition.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the 100% schema coverage and moderate complexity (4 parameters with enums), the description adequately covers the tool's purpose and expected return values (energy amount and TRX cost) despite lacking an output schema. It appropriately delegates parameter details to the schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing a baseline of 3. The description maps 'transaction count and type' to the business logic of calculating energy needs, adding conceptual context. However, it omits mention of the durationMinutes parameter entirely, leaving that solely to the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool calculates costs for purchasing TRON Energy or Bandwidth, using specific verbs and resources. However, it stops short of explicitly distinguishing this estimation tool from the sibling 'buy_energy' execution tool, which would prevent potential confusion.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies a pre-purchase planning workflow by stating what inputs to 'provide' to 'get' cost estimates, but it lacks explicit guidance on when to use this versus 'buy_energy' or 'get_prices' (siblings that likely execute or retrieve current market data).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
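The estimate-versus-execute distinction this review wants made explicit can also be enforced client-side: quote first with estimate_cost, then call buy_energy only when the quote fits a budget. Here call_tool is a stand-in for an MCP client invocation, and the field names (costTrx, energyNeeded, the transactionType value) are illustrative guesses, not the server's confirmed response shape.

```python
# Estimate-before-buy flow: stop at the read-only estimate_cost call unless
# the quoted TRX cost fits the budget. Field names are illustrative guesses.
def buy_if_affordable(call_tool, tx_count: int, max_trx: float):
    quote = call_tool("estimate_cost", {
        "transactionCount": tx_count,
        "transactionType": "trc20_transfer",  # assumed enum value
    })
    if quote["costTrx"] > max_trx:
        return None  # quote exceeds budget: no mutating call is made
    return call_tool("buy_energy", {"amount": quote["energyNeeded"]})
```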

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Discloses the API key authentication requirement, which compensates somewhat for missing annotations. However, fails to explicitly state that this is a read-only operation (critical for a financial tool), doesn't mention rate limits, idempotency, or what specific 'deposit information' entails beyond the balance.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficient sentences. First sentence declares the core function (balance retrieval), second states the auth requirement. No redundancy or filler content.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Describes the conceptual return values (balance and deposit info) despite lacking an output schema, which is helpful. However, given zero annotations and no output schema, the description should provide more behavioral context (read-only assurance, response structure) for a financial data tool.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Zero parameters present per input schema, triggering the baseline score of 4. The description wisely avoids inventing parameters that don't exist in the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action (Get) and resource (current TRX balance and deposit information). TRX specificity helps identify this as a Tron blockchain balance tool. Minor ambiguity exists with sibling 'get_deposit_info' since both mention deposits, but the combination of balance+deposit info provides differentiation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Identifies the authentication prerequisite ('Requires API key authentication'), which is crucial given no annotations. However, lacks guidance on when to use this versus siblings like 'get_deposit_info' (which also handles deposits) or 'get_available_resources', and doesn't indicate if this is for the user's own account or can query others.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Adds 'active pools' qualifier (filtering behavior) and 'right now' temporal context. Missing: units, caching policy, error behavior when no pools match duration filter, or return structure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficient sentences, front-loaded with action and scope. Second sentence adds purchasing context without waste. No redundant filler.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    No output schema exists and no annotations provided. Description omits return value structure (totals only? per-pool breakdown? units?) which would be necessary for complete tool selection. Adequate for intent but incomplete for execution.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, with clear descriptions for both resourceType and durationMinutes. The description mentions 'Energy and Bandwidth' and 'pools', which aligns with the schema but adds no syntax guidance, format examples, or dependency logic beyond it. Baseline 3 is appropriate for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear specific verb 'Get' + distinct resources 'Energy and Bandwidth' + scope 'across all active pools'. Distinguishes from sibling buy_energy (action vs. query) and get_pool_stats (availability vs. statistics).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Phrasing 'how much resource can be purchased right now' implies usage context (checking purchase capacity), but lacks explicit when-to-use guidance or named alternatives among siblings like estimate_cost or get_pool_stats.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
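    The review above references the resourceType and durationMinutes parameters of get_available_resources. As a minimal sketch (not taken from the server's docs), this is the shape of the JSON-RPC 2.0 `tools/call` request an agent might send; the enum spelling "ENERGY" and the argument values are assumptions.

    ```python
    # Hypothetical MCP "tools/call" payload for get_available_resources.
    # Parameter names come from the review; "ENERGY" is an assumed enum value.
    import json

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "get_available_resources",
            "arguments": {
                "resourceType": "ENERGY",   # or "BANDWIDTH" (assumed spelling)
                "durationMinutes": 60,      # filter pools by rental duration
            },
        },
    }
    print(json.dumps(request, indent=2))
    ```

    Because the tool is a read-only query, an agent can safely issue this call before committing to a purchase.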

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. While it lists the logical content returned, it omits operational characteristics such as whether the operation is read-only, idempotent, requires authentication, or has rate limiting/cost implications. It doesn't contradict the absence of annotations, but fails to fill the gap.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of two efficient sentences. The first defines the scope and contents, the second provides usage context. There is no redundant or wasted language; every clause earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of an output schema, the description adequately compensates by enumerating the specific data categories returned (prices, availability, etc.). For a parameter-less aggregator tool, this level of detail is sufficient, though structural details about the response format would strengthen it further.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has zero parameters, which establishes a baseline score of 4 according to the evaluation rubric. With no parameters to describe, the description appropriately focuses on the tool's output rather than inputs.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states it provides a comprehensive market overview and enumerates specific data components returned (prices, availability, durations, constraints, transaction types). However, it does not explicitly differentiate this aggregate view from sibling tools like 'get_prices' or 'get_available_resources' which appear to return subsets of this data.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description includes one usage scenario ('Useful for agents to understand what they can purchase'), providing implied context for when to use it. However, it lacks explicit guidance on when NOT to use it (e.g., when specific getters are preferred) or prerequisites for invocation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full disclosure burden. It successfully describes the return data structure (three earnings categories) but omits explicit safety traits (read-only nature), side effects, or authentication requirements that would help an agent understand this is safe to call repeatedly.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two well-structured sentences with zero waste. Critical information (resource, breakdown structure) front-loaded in first sentence; filtering capability in second. Appropriate density for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Strong compensation for missing output schema by enumerating return fields (total/pending/paid). Slight gap: given sibling 'withdraw_earnings' exists, should explicitly clarify this is read-only/reporting functionality to prevent agent confusion about which tool moves funds.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage with ISO 8601 format details. Description adds semantic context that filtering is 'optional' (aligning with zero required parameters in schema), but doesn't add syntax examples or clarify behavior when date range is omitted.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Excellent specificity: verb 'Get' + resource 'earnings' + scope 'by pool' + detailed breakdown of return fields (total earned, pending payout, paid out). The description distinguishes this retrieval tool from sibling 'withdraw_earnings' (action vs. query) and 'get_balance' (earnings vs. general balance).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides implied usage through 'Optionally filter by date range,' indicating temporal querying capability. However, lacks explicit guidance on when to use this versus 'withdraw_earnings' or 'get_balance,' and doesn't state prerequisites like pool registration.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully conveys critical mutation behavior (purchase/creates), financial side effects (cost deduction), and execution mechanics (filled by available pools). However, it omits failure modes, synchronous vs. asynchronous behavior, and return value structure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences with zero redundancy. Front-loaded with the core action (purchase), followed by execution mechanics (market order/pools), and ending with critical financial warning (cost deduction). Every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the financial complexity and lack of annotations or output schema, the description covers the essential transaction mechanics but exhibits gaps around error handling, order failure scenarios, and return structure. Adequate but incomplete for a financial mutation operation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has 100% description coverage, establishing a baseline of 3. While the description does not explicitly elaborate on individual parameter semantics, it provides crucial contextual framing ('MARKET order', 'filled by pools') that helps interpret why parameters like durationMinutes and txCount are required, justifying the baseline score.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verbs ('Purchase', 'Creates') with clear resource ('TRON Energy') and mechanism ('MARKET order', 'available pools'). It effectively distinguishes from sibling tools like get_available_resources (query), estimate_cost (pricing check), and get_orders (listing) by emphasizing the active purchase and order creation nature.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides some implicit guidance by specifying this creates a MARKET order (implying immediate execution) and warning that it 'Deducts cost from your balance.' However, it lacks explicit direction on when to use this versus estimate_cost for price checking, or prerequisites like minimum balance requirements.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
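    The review notes that buy_energy places a MARKET order, deducts cost from the balance, and has estimate_cost as a pricing sibling. A hedged sketch of the two-step flow an agent might follow — check price, then buy — using hypothetical argument values; only the parameter names durationMinutes and txCount come from the review itself.

    ```python
    # Hypothetical agent flow: estimate first, then place the MARKET order.
    # All argument values are illustrative, not taken from the server docs.
    def tool_call(name, arguments, request_id=1):
        """Build a JSON-RPC 2.0 'tools/call' payload for an MCP server."""
        return {"jsonrpc": "2.0", "id": request_id, "method": "tools/call",
                "params": {"name": name, "arguments": arguments}}

    # Read-only pricing check (no balance deduction).
    estimate = tool_call("estimate_cost", {"durationMinutes": 60, "txCount": 10})

    # Financial mutation: creates the order and deducts cost from the balance.
    order = tool_call("buy_energy", {"durationMinutes": 60, "txCount": 10},
                      request_id=2)
    print(order["params"]["name"])  # -> buy_energy
    ```

    Sequencing estimate_cost before buy_energy is exactly the kind of "use X before Y" guidance the description currently leaves implicit.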

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It discloses the currency (TRX) and the tool's read-only purpose, but omits idempotency, whether the address is static or freshly generated, return format structure, and authentication requirements.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single efficient sentence with front-loaded action verb ('Get'); no redundant words or tautology.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a zero-parameter tool, but lacks return value specification (e.g., string format, object structure) which would be important given no output schema exists; misses behavioral details expected when annotations are absent.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Zero parameters present in schema; the baseline score of 4 applies per the rubric for zero-parameter tools.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Get', specific resource 'deposit address', and specifies currency 'TRX' and purpose 'top up your account balance'. Clearly distinguishes from siblings like withdraw_earnings (outbound) or get_balance (view balance).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides implied usage context ('to top up your account balance') indicating when to use (when adding funds), but lacks explicit when-not-to-use guidance or comparison to alternatives like withdraw_earnings.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations, the description carries full burden and discloses key behavioral traits: filters to 'active' delegations only (not historical), specifies resource types returned (energy/bandwidth), and explains expiration metadata. Missing only explicit read-only declaration and pagination/limits behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. First sentence establishes action and scope; second sentence describes output content (necessary since no output schema exists). Appropriately front-loaded and sized.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Simple tool (1 optional param) with no output schema. Description compensates by detailing return content (recipients, resource types, expiration times). The only minor gap is an explicit safety confirmation (a read-only hint) that annotations would typically provide.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, so the parameter is fully documented in the schema description ('Filter by specific pool address...'). The main description adds no parameter-specific semantics, which is acceptable when schema coverage is complete; baseline 3 applies.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear specific verb ('Get') + resource ('active delegations') + scope ('from your pools'). Distinguishes from siblings like get_pool_stats (aggregate statistics) and check_pool_permissions (access control) by focusing on delegation records and their expiration.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides implied usage context by specifying 'active' delegations and expiration dates, suggesting temporal relevance. However, lacks explicit guidance on when to use versus get_pool_stats or check_pool_permissions (e.g., no 'use this when you need to check individual recipient addresses' guidance).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full disclosure burden and succeeds: it reveals the automated selection algorithm (highest APY), permission requirements (VoteWitness), reward accumulation mechanics, and auto-claim integration. The only minor gap is not explicitly stating that this creates a blockchain transaction or its irreversibility.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Five sentences efficiently convey purpose, algorithm, permissions, reward mechanics, and authentication. Every sentence delivers distinct operational value. Slightly front-heavy on benefits but prerequisites are clearly delineated. Could combine 'Requires API key' with permission sentence but overall minimal waste.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the single optional parameter and no output schema, the description adequately covers the voting mechanism, automatic selection logic, reward lifecycle, and authorization requirements. It explains what the tool accomplishes beyond parameter transmission, providing sufficient context for an agent to understand the business logic and side effects.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with the poolAddress parameter fully documented in the schema ('Pool address to vote from...'). The main description mentions 'your first registered pool' which conceptually relates to the parameter but doesn't add syntax guidance or usage scenarios beyond the schema definition. Baseline 3 is appropriate given the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb-resource combination ('Vote for the best Super Representative') and clarifies the selection mechanism ('platform automatically selects the SR with the best return'). This clearly distinguishes it from siblings like withdraw_earnings, execute_swap, or manual pool operations by specifying the automated APY-optimization behavior.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    States clear prerequisites ('Requires VoteWitness permission granted to the platform', 'Requires API key') and explains the value proposition ('to earn voting rewards'). While it doesn't explicitly contrast with sibling tools, the mention of auto-selection and reward accumulation provides clear context for when to invoke this versus manual delegation or trading tools.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses the post-broadcast verification behavior ('verifies that the platform permissions were correctly applied') and authentication requirement ('Requires authentication'). However, it omits other critical behavioral traits like transaction fees/gas costs, irreversibility, or error handling patterns for blockchain operations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four sentences with zero waste. Front-loaded with the core action, followed by post-conditions, workflow prerequisites, and auth requirements. Every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the 100% schema coverage and absence of an output schema, the description adequately covers the workflow and verification behavior. It mentions the verification outcome which compensates partially for missing return value documentation, though explicit return value description would strengthen it further.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing a baseline of 3. The description adds value by specifying that the signedTransaction must be an 'AccountPermissionUpdate' type (where the schema only references a generic 'signed transaction object'), and links the poolAddress to 'platform permissions' context.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verbs ('Broadcast') and resources ('AccountPermissionUpdate transaction', 'TRON blockchain') to clearly define the tool's function. It distinguishes itself from the generic 'broadcast_transaction' sibling by specifying the exact transaction type (AccountPermissionUpdate).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow guidance ('Use this after signing the transaction from build_permission_transaction') establishing the prerequisite tool. However, it does not explicitly state when to use the sibling 'broadcast_transaction' instead, though the specific transaction type implies the distinction.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
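    The workflow the review describes — build the AccountPermissionUpdate transaction, sign it locally, then broadcast it — can be sketched as follows. The helper, the placeholder address, and the signedTransaction contents are all hypothetical; only the tool names and the parameter names poolAddress/signedTransaction come from the review.

    ```python
    # Sketch of the permission-granting workflow:
    #   1. build_permission_transaction  -> unsigned AccountPermissionUpdate tx
    #   2. sign locally with the pool's key (outside the MCP server)
    #   3. broadcast_signed_permission_tx -> submit and verify platform permissions
    def tool_call(name, arguments):
        """Hypothetical helper: returns the JSON-RPC payload a client would send."""
        return {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                "params": {"name": name, "arguments": arguments}}

    pool = "T..."  # placeholder TRON address

    build_req = tool_call("build_permission_transaction", {"poolAddress": pool})
    # ...sign the returned unsigned transaction locally, then:
    broadcast_req = tool_call(
        "broadcast_signed_permission_tx",
        {"poolAddress": pool, "signedTransaction": {"txID": "...", "signature": ["..."]}},
    )
    print(broadcast_req["params"]["name"])  # -> broadcast_signed_permission_tx
    ```

    Keeping the signing step outside the server is what forces the build/broadcast split into two separate tools.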

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries significant load by disclosing the sequence ('delegate energy before broadcasting'), payment mechanisms, and cost-estimation fallback. Missing explicit mention of blockchain submission permanence or error handling patterns.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences efficiently structured: purpose declaration, input/process details, and authentication/cost behavior. No redundant text; every clause conveys distinct technical requirements (energy delegation, payment types, unauthenticated behavior).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive coverage of the tool's unique value proposition (auto energy delegation) and operational modes. Given the complexity of blockchain transaction broadcasting and lack of output schema, it adequately covers the distinctive behavioral traits, though it could explicitly note the irreversible nature of broadcasting.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with complete documentation of the nested txData structure. The description references 'signed transaction data' but does not add semantic details beyond the schema's description, which already covers the required fields and their purpose.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with specific verb 'Broadcast' + resource 'pre-signed TRON transaction' + distinguishing feature 'auto energy delegation', clearly defining the tool's scope. Mention of 'PowerSun' identifies the service provider for the energy delegation aspect, distinguishing it from the sibling 'broadcast_signed_permission_tx' which handles permission-specific transactions.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit authentication guidance ('Works with API key... or x402 USDC payment') and clear fallback behavior ('Without authentication, returns cost estimate'). However, it lacks explicit comparison to sibling 'broadcast_signed_permission_tx' regarding when to use each broadcast method.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full disclosure burden. It successfully adds auth requirement ('Requires API key'), explains the business logic of each permission (e.g., 'to sell energy'), and clarifies this is a verification/confirmation step. Could improve by stating if this is a read-only blockchain call.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four efficient sentences with zero waste: permission check definition, required permissions, optional permissions, and usage timing/auth. Information is front-loaded with the core action first.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Since no output schema exists, the description should ideally characterize the return value (status boolean or object), but it comprehensively covers the operational prerequisites and the specific permission logic required for the pool domain. Context signals indicate a simple structure (one parameter, no nesting), so the description adequately covers the tool's complexity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% (poolAddress fully described). Description mentions 'your pool address' which provides context, but doesn't add semantic meaning beyond what the schema already provides regarding format or validation. Baseline 3 appropriate given schema completeness.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Excellent specificity: 'Verify... active permissions' clearly states the action, 'pool address' identifies the resource, and it enumerates specific permission types (DelegateResource, UnDelegateResource, VoteWitness) distinguishing it from permission-granting siblings like build_permission_transaction.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    States explicit timing 'Run this after granting permissions to confirm...' which establishes the workflow sequence. However, it doesn't explicitly name the alternative tools (build_permission_transaction/broadcast_signed_permission_tx) to use when permissions are missing.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
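    The verification check_pool_permissions performs can be sketched as a set comparison. The permission names (DelegateResource, UnDelegateResource, VoteWitness) come from the review; the split into required vs. optional buckets and the `granted` response contents are assumptions for illustration.

    ```python
    # Sketch of the permission-verification logic: compare what the tool reports
    # as granted against what a selling pool needs. The required/optional split
    # and the `granted` set below are hypothetical.
    REQUIRED = {"DelegateResource", "UnDelegateResource"}  # assumed: needed to sell energy
    OPTIONAL = {"VoteWitness"}                             # assumed: needed only for voting

    granted = {"DelegateResource", "VoteWitness"}  # hypothetical tool response
    missing_required = REQUIRED - granted
    missing_optional = OPTIONAL - granted
    print(sorted(missing_required))  # -> ['UnDelegateResource']
    ```

    An agent seeing a non-empty `missing_required` would loop back to the permission-granting workflow (build_permission_transaction and its broadcast sibling) before attempting to sell.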

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With zero annotations, the description carries the full burden and successfully discloses authentication needs ('Requires API key'), data provenance ('fetched directly from TRON blockchain'), and freshness ('live'). Missing rate limits, pagination, or error behaviors (e.g., invalid address), but covers the critical safety and source context.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two highly information-dense sentences with zero redundancy. Front-loaded with specific data payload values. Structure separates data semantics (sentence 1) from operational metadata (sentence 2). Every clause earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Compensates effectively for the lack of output schema by enumerating all returned data fields (TRX balance, frozen resources, etc.). The single parameter is fully described in the schema. Missing explicit error handling documentation, but adequate for a read-only status fetch with simple input.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage for the single optional parameter. The description adds contextual framing ('your pool') but does not supplement the schema's technical details (maxLength, default behavior). Baseline 3 is appropriate when the schema already fully documents the parameter.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Get' plus exact resource 'live blockchain state' and detailed enumeration of data points (TRX balance, frozen resources, voting status, claimable rewards, delegated resources). The phrase 'live' and 'directly from TRON blockchain' distinguishes it from siblings like get_pool_stats (likely cached) and get_balance (generic).

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear contextual positioning through 'live blockchain state' and 'directly from TRON blockchain', implying real-time use cases vs cached alternatives. 'Requires API key' states a prerequisite. However, it does not explicitly name sibling alternatives (e.g., 'use get_pool_stats for historical data') or failure conditions.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, yet description effectively discloses return structure (price per unit in SUN, tiered by duration) and pricing semantics (minimum prices). Missing operational details like data freshness or rate limits, but compensates adequately for simple read operation.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Optimal two-sentence structure: first establishes operation and scope, second specifies return format. Zero redundancy, every clause adds specific domain information (SUN currency, duration tiers, minimum prices).

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriate for complexity level. Explains return values (SUN pricing, duration tiers) compensating for missing output schema. Establishes TRON blockchain context essential for distinguishing from generic price tools. Could mention relationship to payment/buying workflows.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with clear enum description. Description adds domain context by explicitly naming 'TRON Energy and Bandwidth' and clarifying the filter behavior (default: all), reinforcing the optional nature of resourceType parameter.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Excellent specificity: states exact resources (TRON Energy/Bandwidth), pricing model (minimum prices), and scope (all duration tiers). Clearly distinguishes from siblings like buy_energy (execution), estimate_cost (calculation), and get_available_resources (inventory) by focusing on current market price retrieval.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies usage context ('current' prices suggests pre-transaction market check), but lacks explicit guidance on when to use versus estimate_cost or buy_energy. Does not mention prerequisites like needing to know SUN currency unit or duration tier concepts.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses critical safety behavior ('All existing account permissions are preserved'), output format ('Returns an unsigned transaction'), auth requirements ('Requires authentication'), and security-sensitive step ('sign with your private key'). Lacks rate limits or idempotency details.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four sentences with zero waste: (1) transaction purpose and permissions granted, (2) return value and next-step workflow, (3) safety preservation guarantee, (4) auth requirement. Front-loaded with specific transaction type (AccountPermissionUpdate).

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Complete for multi-step blockchain workflow despite missing output schema and annotations. Explains unsigned nature of output, signing requirement, and chain to broadcaster. Could specify what happens if poolAddress lacks initial setup, but sufficient for complexity level.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with detailed field descriptions (T-address format, VoteWitness permission explanation). Description adds semantic context that permissions are granted to 'PowerSun platform' specifically, but schema carries primary semantic load for parameters.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Build' paired with specific resource 'AccountPermissionUpdate transaction' and scope (grants PowerSun platform permission to delegate/undelegate/vote). Clearly distinguishes from sibling check_pool_permissions (which checks) and broadcast_signed_permission_tx (which broadcasts signed tx).

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly chains to sibling tool by stating user must 'broadcast using broadcast_signed_permission_tx' after signing. Establishes clear workflow order: build → sign → broadcast. Provides implicit 'when' by describing the permission-granting purpose.

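The build → sign → broadcast workflow these descriptions chain together could look roughly like the sketch below. `call_tool` and `sign_transaction` are hypothetical stand-ins for an MCP client invocation and client-side TronWeb signing; the payload field names are assumptions, not the server's documented API.

```python
# Hypothetical sketch of the permission-granting workflow:
# build_permission_transaction -> sign client-side -> broadcast -> verify.
# call_tool() and sign_transaction() are stand-ins, not real library APIs.

def call_tool(name: str, args: dict) -> dict:
    """Stand-in for an MCP tool invocation, returning canned responses."""
    if name == "build_permission_transaction":
        return {"unsignedTx": {"type": "AccountPermissionUpdate"}}
    if name == "broadcast_signed_permission_tx":
        return {"txid": "abc123", "success": True}
    if name == "check_pool_permissions":
        return {"hasPermissions": True}
    raise ValueError(f"unknown tool: {name}")

def sign_transaction(tx: dict, private_key: str) -> dict:
    """Stand-in for client-side signing; the key never leaves the client."""
    return {**tx, "signature": ["0xplaceholder"]}

def grant_pool_permissions(pool_address: str, private_key: str) -> bool:
    # 1. Build the unsigned AccountPermissionUpdate transaction.
    built = call_tool("build_permission_transaction",
                      {"poolAddress": pool_address})
    # 2. Sign locally with the pool owner's private key.
    signed = sign_transaction(built["unsignedTx"], private_key)
    # 3. Broadcast the signed transaction via the sibling tool.
    call_tool("broadcast_signed_permission_tx", {"signedTx": signed})
    # 4. Verify the permissions are active before relying on them.
    return call_tool("check_pool_permissions",
                     {"poolAddress": pool_address})["hasPermissions"]
```

The point of the sequence is that the MCP server never sees the private key: it only builds and broadcasts, while signing stays on the client.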
  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, description carries full burden and successfully discloses critical behavioral traits: API key authentication requirement, return format (unsigned TX), and client-side signing requirement. Minor gap: no mention of rate limits, liquidity failure modes, or caching behavior.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three tight sentences: (1) core action and platform, (2) return value and signing instruction, (3) next-step workflow and auth requirement. Zero waste, front-loaded with primary purpose.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive for a quote tool: covers DEX context, full transaction lifecycle (quote→sign→execute), and auth. Missing explicit error scenarios (insolvency, invalid pairs), but adequately complete given 100% schema coverage and lack of output schema.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage (token symbols/addresses, smallest units, basis points), establishing baseline 3. Tool description adds SunSwap DEX context, implicitly validating TRC-20/TRX tokens, but does not add individual parameter semantics beyond schema definitions.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description uses specific verb 'Get' with clear resources (price quote, unsigned transaction) and scope (SunSwap DEX). It explicitly distinguishes from sibling 'execute_swap' by stating this returns an unsigned TX for client-side signing, while execute_swap handles submission.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly maps the workflow: use this tool for quoting, then sign client-side, then submit via execute_swap. Also states prerequisite 'Requires API key'. This provides clear when-to-use guidance and names the specific alternative/next step tool.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden and successfully discloses key behavioral traits: returns an API key, performs a session state upgrade ('upgrades the current session'), and has persistent side effects ('will work in this session without needing to reconnect'). Missing error handling details or idempotency notes, but covers primary mutation behavior.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four sentences, perfectly front-loaded with purpose ('Complete agent registration...'), followed by outputs, downstream effects, and prerequisites. Every sentence provides distinct value (purpose, return values, session effects, auth requirements). Zero redundancy.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a critical authentication tool with 100% schema coverage but no annotations or output schema, the description adequately compensates by explaining the session upgrade mechanism, the API key return, and the tool dependencies enabled. Would benefit from explicit error scenarios, but sufficiently complete for safe invocation.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% coverage with detailed descriptions for all 3 parameters (including specific signing method 'tronWeb.trx.signMessageV2'). Description adds workflow context by referencing 'signed challenge' which semantically binds the challengeId and signature parameters together within the registration flow, exceeding the baseline 3 for high-coverage schemas.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description clearly states the tool 'Complete[s] agent registration by verifying your signed challenge' with specific verb (verify) and resource (registration). It distinguishes itself from the sibling 'register' tool by referencing the 'signed challenge' and 'Challenge ID from the register tool' in the schema, establishing the sequential relationship.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear context on when to use (to 'upgrade the current session to authenticated') and what becomes available afterward (lists specific authenticated tools like 'buy_energy, get_balance, register_pool'). States 'No authentication required' which is critical for an auth-initiation tool. Lacks explicit 'when not to use' (e.g., if already authenticated).

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden and succeeds well: it explains the challenge-response mechanism, specifies the exact signing method (tronWeb.trx.signMessageV2()), notes the return value (challenge string), and states security prerequisites (private key needed, no auth required). Minor gap: doesn't mention challenge expiration or idempotency.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four sentences with zero waste: action statement, behavioral output, workflow transition, and prerequisites. Front-loaded with the core purpose, logically ordered, and dense with actionable information.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a single-parameter tool with no output schema and no annotations, the description is remarkably complete. It compensates for missing output schema by documenting the return value ('challenge string'), explains the full multi-step registration lifecycle, and covers security context (signing requirements, no auth needed).

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has 100% description coverage ('Your TRON wallet address...'), so the schema carries the semantic weight. The description mentions 'providing your TRON address' but adds no additional parameter semantics (format, validation context) beyond what the schema already documents. Baseline 3 appropriate for high schema coverage.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Start agent registration') and resource (TRON address) involved. It effectively distinguishes itself from sibling 'verify_registration' by explicitly positioning this as the initiation step that returns a challenge, while verification is handled separately.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Excellent workflow guidance: it specifies this is step 1 of 2, explicitly names the sibling tool 'verify_registration' as the required next step, clarifies the prerequisite (TRON address), and notes 'No authentication required' to establish when it can be called.

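The two-step registration flow these descriptions establish (register, sign the challenge, verify_registration) can be sketched as follows. `call_tool` and `sign_message` are hypothetical stand-ins; a real client would sign with TronWeb's `trx.signMessageV2()`, and the response field names here are assumptions.

```python
# Hypothetical sketch of the two-step registration flow:
# register -> sign challenge client-side -> verify_registration.
# call_tool() and sign_message() are stand-ins; a real client would sign
# with tronWeb.trx.signMessageV2(), and the field names are assumptions.

def call_tool(name: str, args: dict) -> dict:
    """Stand-in for an MCP tool invocation, returning canned responses."""
    if name == "register":
        return {"challengeId": "ch_123", "challenge": "sign this message"}
    if name == "verify_registration":
        return {"apiKey": "example-api-key", "sessionAuthenticated": True}
    raise ValueError(f"unknown tool: {name}")

def sign_message(message: str, private_key: str) -> str:
    """Stand-in for tronWeb.trx.signMessageV2(message)."""
    return "0x" + message.encode().hex()

def register_agent(address: str, private_key: str) -> str:
    # Step 1: request a challenge for this TRON address (no auth required).
    challenge = call_tool("register", {"address": address})
    # Step 2: sign the challenge with the wallet's private key, client-side.
    signature = sign_message(challenge["challenge"], private_key)
    # Step 3: verify; on success the session is upgraded to authenticated
    # and an API key is returned for tools like buy_energy or register_pool.
    result = call_tool("verify_registration", {
        "challengeId": challenge["challengeId"],
        "signature": signature,
    })
    return result["apiKey"]
```

This challenge-response shape is what lets the server authenticate a wallet without ever receiving its private key.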
  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses critical behavioral traits: automatic energy delegation occurs 'before broadcast', and supports two payment modes (API key balance deduction or x402 USDC). Lacks explicit mention of destructive/mutating nature or return value format, but covers major operational side effects.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four efficient sentences with zero waste: purpose declaration, workflow prerequisites, energy delegation timing, and payment methods. Front-loaded with main action, followed by procedural and technical constraints.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive for complex blockchain operation despite missing output schema. Covers payment authorization, energy side effects, and transaction prerequisites. Would benefit from describing success/failure outputs or confirmation behavior, but sufficient for agent invocation.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. Description adds workflow context beyond schema: emphasizes txData must be pre-signed and sourced from get_swap_quote, clarifying the parameter's provenance and required state before submission.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action (execute) with resource (pre-signed swap transaction) and key feature (automatic energy delegation). Clearly distinguishes from generic broadcast_transaction sibling by specifying swap-specific functionality and energy delegation.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow: 'Get the unsigned TX from get_swap_quote, sign it, and submit here.' Names prerequisite tool (get_swap_quote) and establishes clear sequence. Also documents payment alternatives (API key vs x402 USDC), helping agent select appropriate auth method.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and discloses significant behavioral traits: it explains that the tool creates both a pool and auto-selling configuration, clarifies that the platform will subsequently vote/delegate on the user's behalf, and states the API key authentication requirement. Minor gap: it doesn't mention return value structure, idempotency, or error conditions.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four sentences with zero waste: sentence 1 defines the action, sentence 2 describes created artifacts, sentence 3 lists critical post-requisites with specific permission names, and sentence 4 gives verification instructions plus auth requirements. Information is front-loaded and logically sequenced.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the medium-high complexity (blockchain permissions, delegation, voting), 100% schema coverage, and lack of output schema/annotations, the description adequately covers the registration workflow and permission model. Minor deduction for not describing the return value (e.g., pool ID, confirmation status) or whether registration is idempotent.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While the schema has 100% coverage (baseline 3), the description adds valuable business context by framing the booleans (sellEnergy, sellBandwidth, autoVote) within the 'selling pool' concept and explaining that auto-voting is for 'best Super Representative' rewards. It could be elevated to 5 by explaining parameter interactions (e.g., what happens if both selling options are false).

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb (Register), resource (TRON address), and scope (energy/bandwidth selling pool on PowerSun). It clearly distinguishes this as a creation/setup tool rather than configuration or querying tools found in the sibling list.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Excellent workflow guidance: it explicitly states the post-requisite (grant active permissions), lists the specific permission types needed (DelegateResource, UnDelegateResource, VoteWitness), references the exact sibling tool for verification (check_pool_permissions), and notes the API key requirement. This creates a clear when-to-use and what-to-do-next narrative.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
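The weighting described above can be expressed directly. A minimal sketch, assuming only the weights, the 60/40 blend, and the tier cutoffs stated in the text; the function and key names are illustrative.

```python
# Illustrative implementation of the scoring formula described above.
# The weights, the 60/40 blend, and the tier cutoffs come from the text;
# all function and key names are assumptions made for this sketch.

DIMENSION_WEIGHTS = {
    "purpose": 0.25,           # Purpose Clarity
    "usage_guidelines": 0.20,  # Usage Guidelines
    "behavior": 0.20,          # Behavioral Transparency
    "parameters": 0.15,        # Parameter Semantics
    "conciseness": 0.10,       # Conciseness & Structure
    "completeness": 0.10,      # Contextual Completeness
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 Tool Definition Quality Score for one tool."""
    return sum(DIMENSION_WEIGHTS[dim] * s for dim, s in scores.items())

def definition_quality(per_tool_scores: list) -> float:
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS,
    so a single poorly described tool pulls the score down."""
    tdqs = [tool_tdqs(s) for s in per_tool_scores]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(defn_quality: float, coherence: float) -> float:
    """Overall = 70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * defn_quality + 0.3 * coherence

def tier(score: float) -> str:
    """A (>=3.5), B (>=3.0), C (>=2.0), D (>=1.0), F (<1.0)."""
    for cutoff, label in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return label
    return "F"
```

For example, two tools scoring a flat 4 and a flat 3 across all dimensions give a definition quality of 0.6 × 3.5 + 0.4 × 3.0 = 3.3; paired with a coherence of 3.5, the overall score is 3.36, tier B.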

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Hovsteder/powersun-tron-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.