powersun-tron-mcp
Server Details
TRON Energy marketplace + DEX swap aggregator for AI agents. 27 MCP tools.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Hovsteder/powersun-tron-mcp
- GitHub Stars: 1
- Server Listing: TRON Energy/Bandwidth MCP Server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- **Full call logging:** Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- **Tool access control:** Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- **Managed credentials:** Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- **Usage analytics:** See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
27 tools

broadcast_signed_permission_tx: Broadcast Signed Permission Transaction (Quality: A)
Broadcast a signed AccountPermissionUpdate transaction to the TRON blockchain. After broadcasting, verifies that the platform permissions were correctly applied. Use this after signing the transaction from build_permission_transaction. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| poolAddress | Yes | The pool address that the permission was granted on | |
| signedTransaction | Yes | The signed transaction object from tronWeb.trx.sign() | |
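The two-parameter call can be sketched as a small argument builder. This is a hypothetical Python sketch of the MCP tool arguments, not an API this server ships; the check for a `signature` field is an assumption based on the schema's reference to tronWeb.trx.sign().

```python
def build_permission_broadcast_args(pool_address: str, signed_tx: dict) -> dict:
    """Arguments for broadcast_signed_permission_tx.

    signed_tx is the object returned by tronWeb.trx.sign() applied to the
    unsigned transaction produced by build_permission_transaction.
    """
    # Assumption: a signed TRON transaction carries a "signature" field.
    if "signature" not in signed_tx:
        raise ValueError("transaction is not signed; call tronWeb.trx.sign() first")
    return {"poolAddress": pool_address, "signedTransaction": signed_tx}
```

Guarding on the signature field before broadcasting catches the common mistake of passing the unsigned transaction straight through.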
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses post-broadcast verification step ('verifies that the platform permissions were correctly applied') and authentication requirement. States blockchain target. Could improve by mentioning failure behavior or irreversibility of blockchain transactions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, zero waste. Front-loaded with action ('Broadcast...'), followed by verification behavior, then usage prerequisites. Every sentence earns its place with distinct information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a blockchain broadcast tool with 100% schema coverage. Explains the verification side-effect (compensating for missing output schema). Lacks detail on error handling if verification fails or transaction is rejected.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline of 3. Description adds workflow context ('after signing the transaction from build_permission_transaction') implying the source of signedTransaction, but doesn't add syntax details, format constraints, or semantics for poolAddress beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Broadcast') + resource ('signed AccountPermissionUpdate transaction') + target ('TRON blockchain'). Explicitly names the transaction type (AccountPermissionUpdate), clearly distinguishing it from the generic sibling 'broadcast_transaction' and pairing it with 'build_permission_transaction'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit workflow guidance: 'Use this after signing the transaction from build_permission_transaction.' Establishes clear sequence with sibling tool. Includes prerequisite 'Requires authentication.' Missing explicit exclusion (when NOT to use vs generic broadcast).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
broadcast_transaction: Broadcast Transaction (Quality: A)
Broadcast a pre-signed TRON transaction with auto energy delegation. Send your signed transaction data and PowerSun will delegate energy before broadcasting. Works with API key (balance deduction) or x402 USDC payment. Without authentication, returns cost estimate.
| Name | Required | Description | Default |
|---|---|---|---|
| txData | Yes | Pre-signed transaction object with txID, raw_data, raw_data_hex, and signature | |
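Since the schema names the four required fields of txData, a caller can validate the payload before invoking the tool. A minimal sketch, assuming only what the schema above states; the helper name is illustrative.

```python
# Field names taken from the txData schema above.
REQUIRED_TX_FIELDS = {"txID", "raw_data", "raw_data_hex", "signature"}

def build_broadcast_args(tx_data: dict) -> dict:
    """Validate a pre-signed TRON transaction before calling broadcast_transaction."""
    missing = REQUIRED_TX_FIELDS - tx_data.keys()
    if missing:
        raise ValueError(f"txData is missing required fields: {sorted(missing)}")
    return {"txData": tx_data}
```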
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Strong behavioral disclosure given zero annotations: explains 'PowerSun will delegate energy before broadcasting' (sequencing), payment mechanisms (balance deduction vs USDC), and dual-mode behavior (estimate-only without auth). Missing: error handling, retry behavior, and successful response format (critical since no output schema exists).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tightly constructed sentences with zero waste: (1) Core action + differentiator, (2) Process flow, (3) Payment/auth modes. Logical front-loading of purpose. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Moderately complete for a high-complexity blockchain tool. Covers payment flows and energy delegation well, but lacks description of successful output (tx hash? receipt? confirmation?) despite absence of output schema. Given 100% input schema coverage, focus should have been on output behavior and error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so expected baseline is 3. Description confirms 'signed transaction data' aligns with schema's 'Pre-signed transaction object' but adds no additional constraints, validation rules, or format details beyond the comprehensive schema documentation. No additional semantic value provided above the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: uses concrete verb 'Broadcast', identifies resource as 'pre-signed TRON transaction', and distinguishes via unique features 'auto energy delegation' and 'PowerSun' delegation service. Clearly differentiates from sibling 'build_permission_transaction' (construction vs. broadcasting) and 'buy_energy' (standalone energy purchase).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear authentication context: distinguishes between authenticated usage (API key with balance deduction or x402 USDC payment) and unauthenticated usage (returns cost estimate). However, lacks explicit guidance on when to use 'estimate_cost' sibling vs. this tool's estimate mode, or when to use 'broadcast_signed_permission_tx' instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
build_permission_transaction: Build Permission Transaction (Quality: A)
Build an AccountPermissionUpdate transaction that grants the PowerSun platform permission to delegate/undelegate resources and optionally vote on your behalf. Returns an unsigned transaction that you must sign with your private key and then broadcast using broadcast_signed_permission_tx. All existing account permissions are preserved. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| includeVote | No | Include VoteWitness permission for earning SR rewards (default: true) | |
| poolAddress | Yes | TRON address to grant permissions on (your pool address, T-address format) | |
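The call shape can be sketched as follows. This is a hypothetical helper, not part of the server; the T-address check (base58, 34 characters, leading "T") reflects the common TRON mainnet address format and is an assumption beyond what the schema states.

```python
def build_permission_args(pool_address: str, include_vote: bool = True) -> dict:
    """Arguments for build_permission_transaction.

    includeVote defaults to true per the schema (enables VoteWitness for SR rewards).
    """
    # Assumption: TRON base58 addresses are 34 characters starting with "T".
    if not (pool_address.startswith("T") and len(pool_address) == 34):
        raise ValueError("poolAddress must be a T-address (34 chars, leading 'T')")
    return {"poolAddress": pool_address, "includeVote": include_vote}
```

The returned unsigned transaction would then be signed locally and passed to broadcast_signed_permission_tx, per the workflow the description lays out.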
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses critical behavioral traits: returns unsigned (not signed) transaction, requires private key signing, requires authentication, preserves existing permissions, targets PowerSun platform specifically. Minor gap: does not mention idempotency, validation failures, or gas/energy costs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, zero waste. Front-loaded with purpose (Sentence 1), followed by workflow/integration (Sentence 2), safety guarantee (Sentence 3), and prerequisites (Sentence 4). Each sentence delivers distinct value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complex blockchain operation with 100% schema coverage but no annotations/output schema. Description adequately covers return value type (unsigned transaction), next-step integration, and safety guarantees. Sufficient for an agent to invoke correctly, though could briefly mention TRON chain context or error scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). Description adds domain context not in schema: 'PowerSun platform' clarifies the grantee, and 'optionally vote on your behalf' maps the includeVote boolean to user-facing behavior (voting delegation). Explains poolAddress implicitly as the account granting permissions without repeating schema details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Build') with exact transaction type ('AccountPermissionUpdate'). Explicitly identifies the resource (PowerSun platform permissions) and scope (delegate/undelegate + optional voting). Differentiates from sibling broadcast_signed_permission_tx by clarifying this produces an unsigned transaction, not a broadcast one.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit workflow guidance: states output is 'unsigned transaction that you must sign with your private key and then broadcast using broadcast_signed_permission_tx'—directly naming the exact sibling tool to use next. Prerequisites clearly stated ('Requires authentication'). Safety constraint noted ('All existing account permissions are preserved').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
buy_energy: Buy Energy (Quality: B)
Purchase TRON Energy for a target address. Creates a MARKET order that will be filled by available pools. Deducts cost from your balance.
| Name | Required | Description | Default |
|---|---|---|---|
| txType | No | Transaction type (default: trc20_transfer) | |
| txCount | Yes | Number of transactions you need energy for | |
| resourceType | No | Resource type (default: ENERGY) | |
| targetAddress | Yes | TRON address to delegate energy to (T-address format) | |
| durationMinutes | No | Duration in minutes. Allowed: 2, 5, 10, 15, 30, 60, 360, 1440, 10080, 43200. Default: 60. | |
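Because durationMinutes accepts only a fixed set of values, validating client-side avoids a rejected order. A minimal sketch using only constraints and defaults stated in the schema; the helper itself is hypothetical.

```python
# Allowed durations and defaults per the buy_energy schema above.
ALLOWED_DURATIONS = {2, 5, 10, 15, 30, 60, 360, 1440, 10080, 43200}

def build_buy_energy_args(target_address: str, tx_count: int,
                          duration_minutes: int = 60,
                          tx_type: str = "trc20_transfer",
                          resource_type: str = "ENERGY") -> dict:
    """Arguments for buy_energy (creates a MARKET order, deducts balance)."""
    if duration_minutes not in ALLOWED_DURATIONS:
        raise ValueError(f"durationMinutes must be one of {sorted(ALLOWED_DURATIONS)}")
    if tx_count < 1:
        raise ValueError("txCount must be at least 1")
    return {"targetAddress": target_address, "txCount": tx_count,
            "durationMinutes": duration_minutes, "txType": tx_type,
            "resourceType": resource_type}
```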
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully explains the mechanism ('filled by available pools') and side effects ('Deducts cost from your balance'), but fails to describe the return value or async status—critical omissions for a financial transaction tool with no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste, well-structured: purpose (sentence 1), execution mechanism (sentence 2), financial impact (sentence 3). Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 5 parameters, 100% schema coverage, and no output schema, the description adequately covers the operation's purpose and side effects but omits the return structure (e.g., order ID, confirmation status). For a purchasing tool without return type annotations, this is a notable gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description adds minimal semantic context beyond the schema—referencing 'MARKET order' adds behavioral framing but doesn't clarify parameter relationships (e.g., how 'txCount' relates to 'durationMinutes' or 'txType').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Purchase') with specific resource ('TRON Energy') and target ('target address'). Mentions 'MARKET order' which hints at order mechanics. However, lacks explicit differentiation from sibling tools like 'execute_swap' or guidance on when to use this vs 'estimate_cost'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (e.g., when to call 'estimate_cost' first, or when to use 'get_available_resources'). The 'MARKET order' mention implies immediate execution but doesn't state prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_pool_permissions: Check Pool Permissions (Quality: A)
Verify that the platform has the required active permissions on your pool address. Required: DelegateResource (to sell energy), UnDelegateResource (to reclaim). Optional: VoteWitness (to vote for SRs and earn rewards). Run this after granting permissions to confirm the platform can operate your pool. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| poolAddress | No | Pool address to check. If omitted, uses your first registered pool. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden. It discloses specific permissions checked (DelegateResource, UnDelegateResource, VoteWitness) and auth requirement ('Requires API key'), but fails to explicitly state this is read-only/safe or describe the response format when permissions are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly constructed sentences with zero waste: purpose statement, permission enumeration, workflow guidance, and auth requirement. Front-loaded with the core action and no redundant phrases.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a single-parameter verification tool. Covers the critical permission types being validated and operational prerequisites (API key). Lacks only explicit read-only safety disclosure and return value details, which would be valuable given no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for the single parameter ('Pool address to check...'). Description mentions 'pool address' in the purpose statement but adds no additional semantic detail beyond the schema's explanation of optional behavior and default fallback.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Verify' paired with clear resource 'permissions on your pool address'. Explicitly distinguishes from siblings like build_permission_transaction (which creates transactions) and verify_registration (which checks user registration) by focusing on checking active pool permissions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Run this after granting permissions to confirm the platform can operate your pool'. This provides clear temporal workflow guidance. However, lacks explicit 'when not to use' guidance naming the alternative tools for actually granting permissions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
configure_auto_selling: Configure Auto-Selling (Quality: A)
Update auto-selling configuration for a pool. Toggle energy/bandwidth selling, set reserves, duration constraints. Pass the configId from get_auto_selling_config.
| Name | Required | Description | Default |
|---|---|---|---|
| active | No | Enable/disable auto-selling | |
| configId | Yes | Configuration ID to update (from get_auto_selling_config) | |
| sellEnergy | No | Enable/disable energy selling | |
| maxDuration | No | Maximum rental duration | |
| minDuration | No | Minimum rental duration (e.g., "5min", "1h", "1d") | |
| allowRenewals | No | Allow order renewals | |
| sellBandwidth | No | Enable/disable bandwidth selling | |
| reservedEnergy | No | Energy to keep reserved (not sold) | |
| reservedBandwidth | No | Bandwidth to keep reserved (not sold) | |
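With eight of nine parameters optional, a caller will usually send only the fields it intends to change. The sketch below assumes omitted fields are left unchanged server-side, which (as noted in the review below) the description does not actually confirm; the helper is illustrative only.

```python
def build_config_update(config_id: str, **changes) -> dict:
    """Arguments for configure_auto_selling.

    Drops unset (None) fields so only intentional changes are sent.
    Assumption: the server treats omitted fields as "leave unchanged".
    """
    args = {"configId": config_id}
    args.update({key: value for key, value in changes.items() if value is not None})
    return args
```

For example, `build_config_update("cfg-1", sellEnergy=True, minDuration="1h")` sends exactly those two fields alongside the configId obtained from get_auto_selling_config.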
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. States 'Update' implying mutation, but fails to clarify critical behavioral details: whether omitted fields preserve existing values (partial update) or are cleared, side effects, permissions required, or reversibility. With 8 optional parameters out of 9, the partial-update nature should be explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first defines the operation and its capabilities, second provides the prerequisite. Efficiently front-loaded with no filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has 9 parameters with 100% schema coverage (well documented), but is a mutation operation with no annotations and no output schema. Description omits behavioral safety details (idempotency, atomicity, error conditions) expected for configuration update tools of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds crucial semantic context for 'configId' by specifying it comes 'from get_auto_selling_config'—this cross-reference is not in the schema definition. Also groups parameters functionally (reserves, duration constraints) aiding comprehension.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear action ('Update'), resource ('auto-selling configuration'), and scope ('for a pool'). Lists specific capabilities (toggle energy/bandwidth, set reserves, duration constraints). Does not explicitly differentiate from sibling get_auto_selling_config in the text, though implied by verb choice.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite instruction: 'Pass the configId from get_auto_selling_config.' This establishes the workflow dependency on the sibling tool. However, lacks explicit 'when not to use' or alternative configuration methods.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
estimate_cost: Estimate Cost (Quality: A)
Calculate the cost of purchasing TRON Energy or Bandwidth. Provide transaction count and type to get the energy needed and cost in TRX.
| Name | Required | Description | Default |
|---|---|---|---|
| txType | No | Transaction type (default: trc20_transfer). trc20_transfer_new = first-time transfer to an address. | |
| txCount | Yes | Number of transactions | |
| resourceType | No | Resource type (default: ENERGY) | |
| durationMinutes | No | Duration in minutes. Allowed: 2, 5, 10, 15, 30, 60, 360, 1440, 10080, 43200. Default: 60. | |
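estimate_cost and buy_energy share txCount, txType, resourceType, and durationMinutes, so an estimate-then-buy flow can derive both argument sets from one spec. A hypothetical sketch; the pairing of the two tools is implied by the review below, not stated by the server.

```python
def estimate_and_buy_args(target_address: str, tx_count: int,
                          tx_type: str = "trc20_transfer",
                          duration_minutes: int = 60) -> tuple[dict, dict]:
    """Build matching arguments for estimate_cost and buy_energy.

    estimate_cost takes no target address; buy_energy adds targetAddress
    on top of the shared parameters.
    """
    shared = {"txCount": tx_count, "txType": tx_type,
              "resourceType": "ENERGY", "durationMinutes": duration_minutes}
    estimate_args = dict(shared)
    buy_args = {**shared, "targetAddress": target_address}
    return estimate_args, buy_args
```

Calling estimate_cost first lets an agent surface the TRX cost before buy_energy deducts it from the balance.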
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Explains the calculation logic (inputs yield energy needed and TRX cost) but omits: whether calls consume quota, real-time vs cached pricing, or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First states purpose, second states required inputs and outputs. Appropriately sized and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Reasonably complete for a 4-parameter estimation tool. Compensates for missing output schema by describing return values ('energy needed and cost in TRX'). Could strengthen by clarifying relationship to 'buy_energy'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description mentions 'transaction count and type' (txCount, txType) but adds no semantic meaning beyond what schema already documents for these or other parameters (resourceType, durationMinutes).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb (calculate) + specific resources (TRON Energy/Bandwidth). Implicitly distinguishes from sibling 'buy_energy' by emphasizing estimation ('Calculate the cost') rather than execution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage context ('Provide transaction count and type to get...'), but lacks explicit when-to-use guidance versus alternatives like 'buy_energy'. No prerequisites or exclusions stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
execute_swap: Execute Swap (Quality: A)
Execute a pre-signed swap transaction with automatic energy delegation. Get the unsigned TX from get_swap_quote, sign it, and submit here. Energy is delegated before broadcast. Works with API key (balance deduction) or x402 USDC payment.
| Name | Required | Description | Default |
|---|---|---|---|
| txData | Yes | Pre-signed swap transaction from get_swap_quote | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden and successfully reveals key behavioral traits: automatic energy delegation occurs before broadcast, and authentication works via API key (balance deduction) or x402 USDC payment. Minor gap: does not explicitly state the destructive/irreversible nature of swap execution.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly constructed sentences with zero waste: sentence 1 defines the action, sentence 2 provides workflow prerequisites, sentence 3 clarifies timing of energy delegation, and sentence 4 specifies payment methods. Well front-loaded and structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of blockchain swap execution and lack of output schema, the description adequately covers the critical execution context (energy delegation, payment mechanisms, prerequisite chain). Minor gap: could mention success/failure behavior or return value structure expected from the execution.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% ('Pre-signed swap transaction from get_swap_quote'), so the schema already fully documents the txData parameter. The description reinforces this usage but does not add syntax, format details, or constraints beyond the schema, warranting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific action ('Execute') and resource ('pre-signed swap transaction'), and distinguishes this tool from siblings by stating it receives input from 'get_swap_quote' and handles the submission step after signing, clearly positioning it in the workflow chain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly defines the prerequisite workflow chain ('Get the unsigned TX from get_swap_quote, sign it, and submit here'), indicating exactly when to use this tool versus its sibling get_swap_quote.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_auto_action_history: Auto-Action History (Quality: B)
View execution history of automatic actions (stake, vote, claim) for your pool. Shows status (success/failed/skipped), amounts, transaction hashes, and timestamps. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of records to return (default: 20, max: 100) | |
| actionType | No | Filter by action type | |
| poolAddress | No | Pool address to get history for. If omitted, uses your first registered pool. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully documents output fields (status, amounts, transaction hashes, timestamps) and authentication requirements (API key), but omits rate limits, pagination details, or cache behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose first, then output fields, then auth requirement. Each sentence earns its place with no redundancy. Appropriate length for the tool complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only history retrieval tool with no output schema, the description adequately covers the essential context: functionality, return value structure, and authentication. It compensates for missing output schema by listing specific fields returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already comprehensively documents all three parameters (limit constraints, actionType enum, poolAddress fallback behavior). The description adds no specific parameter guidance, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'View execution history of automatic actions' with specific verbs and resources. It distinguishes scope by listing the three action types (stake, vote, claim), differentiating from siblings like get_auto_selling_config (which handles selling configuration, not execution history) and get_earnings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it states 'Requires API key' as a prerequisite, it provides no explicit guidance on when to use this tool versus alternatives like get_pool_stats or get_earnings, nor does it mention when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_auto_selling_config (Auto-Selling Config) - Grade B
Get current auto-selling configuration for your pools. Shows which resources are being sold, duration constraints, and reserves.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description must carry full behavioral burden. It partially compensates by disclosing what data is returned (resources, duration constraints, reserves), but omits critical safety context like whether this is read-only, cached vs. real-time data, or what happens when no configuration exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences with zero waste. The first sentence establishes the core purpose (Get configuration), while the second immediately clarifies the content scope (what is shown). Perfect front-loading.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacking both an output schema and annotations, the description provides minimal compensatory information about the return structure. While it lists conceptual fields returned, it omits error conditions, default values, or the data format, leaving gaps for a configuration management tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters, establishing a baseline score of 4. The description correctly omits parameter discussion as there are none to document, avoiding confusion about implicit inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('auto-selling configuration'), clearly distinguishing this from the sibling tool 'configure_auto_selling'. It also specifies scope ('for your pools') and enumerates what aspects are returned ('resources', 'duration constraints', 'reserves').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives, preconditions for use (e.g., having existing pools), or when to prefer 'get_auto_action_history' for monitoring instead. The user must infer this is a read operation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_available_resources (Get Available Resources) - Grade A
Get total available Energy and Bandwidth across all active pools. Shows how much resource can be purchased right now.
| Name | Required | Description | Default |
|---|---|---|---|
| resourceType | No | Filter by resource type (default: both) | |
| durationMinutes | No | Filter pools that support this duration |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds temporal scope ('right now') and pool status ('active'), but omits safety profiles (read-only vs mutation), error conditions (empty pools), or performance characteristics (cached vs real-time).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero redundancy. Key information (resources, scope, purpose) is front-loaded and immediately actionable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple query with full input schema coverage and no output schema, describing the conceptual return (total available amounts for purchase) is adequate. Would benefit from indicating whether figures are broken down per pool or aggregated into totals, but it is sufficient for tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with clear enums and types. Description reinforces the resource domain conceptually but does not add parameter-specific syntax, defaults, or interaction logic beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' with clear resources (Energy/Bandwidth) and scope (across all active pools). The phrase 'can be purchased' effectively distinguishes this inventory check from the purchase action (buy_energy) and from general statistics (get_pool_stats).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context ('can be purchased right now') indicating it's for checking availability before purchasing, but lacks explicit comparison to sibling query tools like get_pool_stats or explicit 'when not to use' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_balance (Get Balance) - Grade A
Get your current TRX balance and deposit information. Requires API key authentication.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully notes the authentication requirement but omits other behavioral details like rate limits, caching behavior, or error conditions. Just meets minimum viable disclosure for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Optimal two-sentence structure with zero waste. First sentence front-loads the operation purpose; second sentence states the auth requirement. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description adequately covers the essential contract: what it returns (balance + deposit info) and authentication needs. Could benefit from mentioning return format expectations, but acceptable for a simple read-only state query.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, establishing a baseline of 4. The description appropriately avoids inventing parameter documentation where none exists in the schema (100% coverage of zero properties).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb-resource pair ('Get' + 'TRX balance and deposit information') with specific scope. However, it fails to differentiate from sibling 'get_deposit_info', which may cause confusion about which tool to use for deposit-related queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides prerequisite context ('Requires API key authentication') but lacks explicit guidance on when to use this versus the sibling 'get_deposit_info' or other account-state tools. The auth requirement is present but comparative usage guidance is missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_deposit_info (Get Deposit Info) - Grade A
Get the deposit address to top up your account balance with TRX.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden. It discloses the cryptocurrency type (TRX) and directional flow (inbound), but fails to clarify critical behavioral traits: whether the address is persistent vs. single-use, if the operation is idempotent, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient single sentence (12 words) with zero redundancy. Action and intent are front-loaded; every word serves a specific purpose in defining the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a zero-parameter read operation. Mentions 'deposit address' which hints at the return value (compensating somewhat for missing output schema). Given low complexity, covers essential context though could specify if address is permanent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters. Description appropriately implies no user inputs are required to retrieve the deposit address, meeting the baseline expectation for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Get' with clear resource 'deposit address' and purpose 'to top up your account balance with TRX'. Effectively distinguishes from siblings like 'withdraw_earnings' (outbound) and 'get_balance' (status check) by specifying this is for inbound funding.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit guidance on when to use versus alternatives, nor prerequisites like account verification. Agent must infer this is the entry point for funding based solely on the purpose statement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_earnings (Get Earnings) - Grade B
Get earnings breakdown by pool: total earned, pending payout, paid out. Optionally filter by date range.
| Name | Required | Description | Default |
|---|---|---|---|
| endDate | No | End date (ISO 8601) | |
| startDate | No | Start date (ISO 8601, e.g. 2026-01-01) |
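Both date filters are optional ISO 8601 strings, so a trailing window can be built client-side. A sketch (the fixed end date keeps the example reproducible; `earnings_window` is a hypothetical helper, not part of the server API):

```python
from datetime import date, timedelta

def earnings_window(end, days_back):
    # The schema expects ISO 8601 date strings, e.g. "2026-01-01";
    # both startDate and endDate are optional filters.
    start = end - timedelta(days=days_back)
    return {"startDate": start.isoformat(), "endDate": end.isoformat()}

args = earnings_window(date(2026, 3, 1), 30)
# → {'startDate': '2026-01-30', 'endDate': '2026-03-01'}
```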
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full disclosure burden. While it describes the conceptual data structure returned (the three payout states), it fails to explicitly state this is a read-only operation (critical given 'withdraw_earnings' exists), doesn't mention data freshness (real-time vs cached), and omits response format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero redundancy. The first sentence uses a colon structure to enumerate the specific earnings components, while the second clearly addresses the optional filtering capability. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple schema (two optional date strings) and absence of an output schema, the description partially compensates by describing the conceptual structure of the earnings data. However, it lacks important context about the read-only nature and pool-specific requirements that would be expected for a financial data tool with no annotation support.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both date parameters fully documented in the schema (ISO 8601 format, examples). The description adds minimal semantic value beyond the schema, primarily emphasizing that the filtering is 'optional,' which aligns with the zero required parameters in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (get) and resource (earnings breakdown), and specifically lists the three components returned (total earned, pending payout, paid out). However, it fails to distinguish this from the sibling tool 'withdraw_earnings', which is a critical operational distinction since one reads data while the other executes a transaction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions that date range filtering is optional, hinting at parameter usage. However, it provides no guidance on when to use this tool versus alternatives like 'get_balance' or 'get_pool_stats', nor does it mention prerequisites such as pool registration requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_overview (Market Overview) - Grade B
Comprehensive market overview: prices, availability, allowed durations, order constraints, and transaction types. Useful for agents to understand what they can purchase.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions what data is returned but fails to disclose operational characteristics such as caching behavior, rate limits, whether data is real-time or cached, or the structure/format of the returned overview object.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of exactly two efficient sentences. It is front-loaded with specific content categories and avoids redundancy. Every sentence earns its place by conveying distinct information (data contents vs. utility).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description lists the information categories contained in the overview, the absence of an output schema means it should ideally describe the return structure or format. For a read-only discovery tool with zero parameters, listing the content domains provides minimum viable completeness, but lacks structural details that would aid parsing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per the scoring guidelines, 0 parameters establishes a baseline score of 4, as there are no parameter semantics to clarify beyond what the empty schema already communicates.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly defines the tool's purpose as retrieving a 'comprehensive market overview' and lists specific data categories returned (prices, availability, durations, constraints, transaction types). However, it does not explicitly differentiate from close siblings like 'get_prices' or 'get_available_resources' beyond implying broader scope through the word 'comprehensive'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context ('Useful for agents to understand what they can purchase'), suggesting when to use it. However, it lacks explicit guidance on when NOT to use it or specific alternatives to use when only subset data (e.g., just prices) is needed, which is important given the many specialized sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_onchain_status (On-Chain Pool Status) - Grade A
Get live blockchain state for your pool: TRX balance, frozen resources (Energy/Bandwidth), voting status, claimable rewards, and delegated resources. Data is fetched directly from the TRON blockchain. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| poolAddress | No | Pool address to check. If omitted, uses your first registered pool. |
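The documented fallback (an omitted poolAddress resolves to your first registered pool) can be mirrored client-side so logs record which pool was actually queried. The helper below is a hypothetical illustration of that resolution, not server code:

```python
def resolve_pool_address(pool_address, registered_pools):
    # Mirrors the documented server-side fallback: if no address is
    # given, the first registered pool is used.
    if pool_address:
        return pool_address
    if not registered_pools:
        raise ValueError("no registered pools to fall back to")
    return registered_pools[0]

pools = ["TPoolAddr111", "TPoolAddr222"]  # placeholder addresses
assert resolve_pool_address(None, pools) == "TPoolAddr111"
```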
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses 'live' data source (TRON blockchain) and authentication requirement (API key). Since no annotations exist, description carries full burden; it misses explicit read-only safety declaration, rate limits, or error behaviors (e.g., invalid pool address handling).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences. First sentence front-loads the value proposition with colon-separated return values. Second sentence covers critical context (data source, auth). No redundancy, though 'your pool' is stated in the first sentence and implied in the second.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by enumerating all return fields (TRX balance, resources, voting, rewards). Tool complexity is low (1 optional param), making this level of description adequate, though blockchain finality/confirmation details could strengthen it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for the single optional parameter (poolAddress), including default behavior. Description adds no additional parameter guidance, which is acceptable given complete schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'Get' + resource 'blockchain state' + specific scope 'your pool'. Lists exact data points returned (TRX balance, frozen resources, voting status, claimable rewards, delegated resources), distinguishing it from siblings like get_balance (generic) or get_pool_delegations (narrower).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States prerequisite 'Requires API key' but lacks explicit when-to-use guidance versus alternatives like get_pool_stats or get_balance. Does not mention that omitting poolAddress defaults to first registered pool (though schema covers this).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_orders (Get Orders) - Grade A
Get your order history with optional status filter. Returns recent orders sorted by creation time.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of orders to return (default: 20) | |
| status | No | Filter by order status |
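Since limit defaults to 20 server-side, a client can send only the overrides it actually needs. A sketch of building the arguments object; the status value "completed" is an assumed example, as the listing does not enumerate the status values:

```python
def build_orders_args(limit=None, status=None):
    # Send only overrides; the server applies limit=20 by default.
    args = {}
    if limit is not None:
        args["limit"] = limit
    if status is not None:
        args["status"] = status  # "completed" below is an assumed value
    return args

print(build_orders_args(status="completed"))
# → {'status': 'completed'}
```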
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It successfully adds sorting behavior ('sorted by creation time') and scope hint ('recent orders'), but fails to define the time window for 'recent', disclose pagination behavior beyond the limit parameter, or mention authentication/authorization requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. Front-loaded action ('Get your order history'), immediately followed by key capability (filter) and return behavior (sorting). Every word earns its place; no redundancy with structured schema data.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter retrieval tool without output schema, the description adequately covers the essential behavioral traits (filtering, sorting). It omits the specific time range of 'recent' and doesn't describe return field structure, but captures sufficient context for the tool's complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description mentions 'optional status filter' which aligns with but doesn't extend beyond the schema documentation. No additional syntax guidance or parameter relationships are provided beyond what the schema already specifies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Get' with resource 'order history' and mentions the optional status filter capability. However, it doesn't explicitly distinguish itself from the sibling tool 'get_order_status' (likely for single-order queries), leaving some ambiguity about when to use the list-oriented versus the detail-oriented tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions 'optional status filter' which implies usage patterns, but provides no explicit guidance on when to use this versus 'get_order_status' or other order-related tools. No prerequisites or exclusions are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_order_status (Get Order Status) - Grade B
Get detailed status of a specific order including delegation progress.
| Name | Required | Description | Default |
|---|---|---|---|
| orderId | Yes | Order ID (UUID format) |
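The schema requires orderId in UUID format, so validating client-side avoids a wasted round trip on a malformed id. A sketch using the standard library, kept strict about the canonical hyphenated form:

```python
import uuid

def is_canonical_uuid(order_id):
    # Accept only the canonical hyphenated lowercase/uppercase form;
    # uuid.UUID() alone would also tolerate unhyphenated variants.
    try:
        return str(uuid.UUID(order_id)) == order_id.lower()
    except (ValueError, TypeError, AttributeError):
        return False

print(is_canonical_uuid("123e4567-e89b-12d3-a456-426614174000"))
# → True
```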
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Mentions 'delegation progress' which hints at the returned data structure, but lacks disclosure on error conditions (e.g., invalid UUID), caching, or auth requirements. Does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action verb. No waste, though brevity leaves room for additional context. Efficient structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple 1-parameter read operation with complete schema coverage. However, 'delegation progress' introduces domain-specific complexity (tying to pool delegation concepts) that is unexplained, and lack of output schema means description should ideally hint at return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage ('Order ID (UUID format)'). The description implies the parameter with 'specific order' but adds no syntax, validation, or semantic constraints beyond what the schema already provides. A baseline of 3 is appropriate given the schema's complete self-documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Get' with resource 'order status' and distinguishes from sibling 'get_orders' by specifying 'specific order' (singular). Adds value by mentioning 'delegation progress' as returned data, though 'detailed status' slightly restates the title.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus 'get_orders' (list) or 'get_onchain_status'. No prerequisites or conditions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pool_delegations (Pool Delegations) - Grade B
Get active delegations from your pools. Shows who received energy/bandwidth and when delegations expire.
| Name | Required | Description | Default |
|---|---|---|---|
| poolAddress | No | Filter by specific pool address. If not provided, shows all your pools. |
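Since the tool reports when delegations expire, a client may want to partition results by expiry. The field name "expire_at" below is an assumed placeholder, not the server's actual response schema:

```python
from datetime import datetime, timezone

def split_by_expiry(delegations, now=None):
    # "expire_at" is an assumed field name for the delegation expiry
    # timestamp; adjust to match the actual response payload.
    now = now or datetime.now(timezone.utc)
    active = [d for d in delegations if d["expire_at"] > now]
    expired = [d for d in delegations if d["expire_at"] <= now]
    return active, expired
```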
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It successfully indicates the read-only nature via 'Get' and discloses return value content (recipients, resource types, expiration times). However, it lacks safety disclosures (read-only confirmation, auth requirements) or rate limit warnings that would aid invocation confidence.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero redundancy. First sentence establishes operation and scope; second sentence details return value structure. Information is front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a single-parameter read operation. Compensates for missing output schema by describing return value contents (who received, resource types, expiration). Would benefit from explicit read-only affirmation or pagination behavior, but sufficient for tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description reinforces the ownership semantics ('your pools') aligning with the schema's 'your pools' reference, but does not add parameter syntax, format examples, or validation rules beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb+resource combination ('Get active delegations from your pools'). Specifies resource types handled (energy/bandwidth) and temporal scope (active, expiration). Distinguishes from siblings implicitly by subject matter but lacks explicit differentiation from similar tools like get_pool_stats or check_pool_permissions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use versus alternatives (e.g., get_pool_stats), prerequisites, or conditions. The optional nature of poolAddress is documented in the schema description rather than the main description text.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
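Client code typically wants only the delegations that have not yet lapsed. A minimal sketch; the { receiver, resource, expiresAt } field names are assumptions for illustration, since get_pool_delegations publishes no output schema:

```javascript
// Keep only delegations that have not yet expired.
// Field names (receiver, resource, expiresAt) are assumed, since
// get_pool_delegations publishes no output schema.
function activeDelegations(delegations, now = Date.now()) {
  return delegations.filter((d) => d.expiresAt > now);
}

// Stubbed data for illustration:
const sample = [
  { receiver: 'T-recipient-1', resource: 'ENERGY', expiresAt: 2000 },
  { receiver: 'T-recipient-2', resource: 'BANDWIDTH', expiresAt: 500 },
];
const stillActive = activeDelegations(sample, 1000);
```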
get_pool_stats - Pool Statistics (Grade: A)
Get comprehensive statistics for your energy/bandwidth pools: delegations, revenue, utilization, APY. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully notes the API key authentication requirement and enumerates the specific statistics returned (delegations, revenue, utilization, APY), but omits details regarding data freshness, caching behavior, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two highly efficient sentences with zero waste: the first front-loads the action and resource with specific metric enumeration via colon-separated list, and the second states the authentication requirement. Every word serves a distinct informational purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description adequately enumerates the specific statistics categories returned. However, with numerous domain-overlapping siblings (get_pool_delegations, get_earnings), additional context on aggregation methodology or return structure would strengthen completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero properties, establishing a baseline score of 4 per the rubric. The description correctly omits parameter discussion since none exist; the API key requirement is appropriately framed as a behavioral authentication constraint rather than an input parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
"Get comprehensive statistics" provides a specific verb and resource (pool statistics). The description effectively distinguishes from sibling get_pool_delegations by claiming a broader scope ('comprehensive') and enumerating distinct metric categories including revenue, utilization, and APY beyond just delegations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states 'Requires API key,' indicating a usage prerequisite, but lacks explicit guidance on when to select this tool versus siblings like get_pool_delegations, get_earnings, or get_balance. Tool selection criteria must be inferred from the enumerated metric categories.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_prices - Get Current Prices (Grade: A)
Get current TRON Energy and Bandwidth minimum prices for all duration tiers. Returns price per unit in SUN for each available duration.
| Name | Required | Description | Default |
|---|---|---|---|
| resourceType | No | Filter by resource type | all |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return unit (SUN) and scope (all duration tiers/minimum prices), but lacks rate limits, auth requirements, caching behavior, or explanation of what duration tiers represent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-constructed sentences. Front-loaded action, specific domain terms (SUN, duration tiers), zero waste. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with no output schema, description adequately covers return format (price per unit in SUN) and scope. Missing only minor details like real-time vs cached data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter description. Description mentions 'TRON Energy and Bandwidth' which maps to the enum values, but adds no syntax examples or detailed semantics beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Get' with specific resources 'TRON Energy and Bandwidth minimum prices' and scope 'all duration tiers'. Distinguishes from generic siblings like estimate_cost or get_market_overview by focusing on minimum resource prices.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance or differentiation from siblings like estimate_cost or get_market_overview. No mention of prerequisites or filters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
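Because get_prices quotes per-unit prices in SUN, callers usually convert to TRX for display (1 TRX = 1,000,000 SUN). A minimal sketch; the flat (pricePerUnitSun, units) arguments are an assumed shape, as the tool documents no output schema:

```javascript
const SUN_PER_TRX = 1000000; // 1 TRX = 1,000,000 SUN

// Total rental cost in TRX for `units` of a resource at a per-unit
// price quoted in SUN, as get_prices returns. The argument shape is
// assumed; the tool has no documented output schema.
function costInTrx(pricePerUnitSun, units) {
  return (pricePerUnitSun * units) / SUN_PER_TRX;
}
```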
get_swap_quote - Get Swap Quote (Grade: A)
Get a price quote and unsigned transaction for swapping tokens on SunSwap DEX. Returns an unsigned TX for client-side signing. After signing, submit via execute_swap. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| toToken | Yes | Token to buy — symbol or TRC-20 contract address | |
| amountIn | Yes | Amount to swap in smallest unit (SUN for TRX, base units for tokens) | |
| fromToken | Yes | Token to sell — symbol (USDT, TRX, SUN, ...) or TRC-20 contract address | |
| slippageBps | No | Slippage tolerance in basis points | 50 (0.5%) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden by disclosing 'Returns an unsigned TX for client-side signing' and auth requirement 'Requires API key'. However, it lacks details on quote expiration, rate limits, or failure modes when slippage is exceeded.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences efficiently cover: purpose, return value, workflow handoff to execute_swap, and authentication. No wasted words or repetition of schema details. Information is front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers the complete multi-step workflow (quote -> sign -> submit) and platform specifics (SunSwap DEX). As there's no output schema, the description appropriately notes the return value format, though it could mention error conditions or quote validity timeframes common to DEX operations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage documenting all parameters including 'smallest unit' notation and 'basis points' semantics. Description does not need to duplicate these details; it focuses on workflow context rather than parameter specifics, which is appropriate given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with clear resource 'price quote and unsigned transaction for swapping tokens on SunSwap DEX'. It effectively distinguishes from sibling 'execute_swap' by stating this tool only returns unsigned transactions for client-side signing, establishing clear scope boundaries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent workflow guidance: 'After signing, submit via execute_swap' explicitly directs users to the correct sibling tool for the next step. Also states prerequisite 'Requires API key', giving clear conditions for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
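The amountIn and slippageBps conventions above reduce to simple arithmetic on the caller's side. A sketch under stated assumptions: the 6-decimal figure for TRC-20 USDT comes from public token metadata, not from this tool, and the expected-output value would come from the quote response:

```javascript
// amountIn must be in the token's smallest unit: SUN for TRX, base
// units for TRC-20 tokens. (TRC-20 USDT using 6 decimals is an
// assumption from public token metadata, not from this tool.)
function toBaseUnits(amount, decimals) {
  return BigInt(Math.round(amount * 10 ** decimals));
}

// Minimum acceptable output for a slippage tolerance in basis
// points (50 bps = 0.5%, the tool's documented default).
function minOut(expectedOut, slippageBps = 50) {
  return (expectedOut * BigInt(10000 - slippageBps)) / 10000n;
}
```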
register - Register Agent (Grade: A)
Start agent registration by providing your TRON address. Returns a challenge string that you must sign with your TRON private key using tronWeb.trx.signMessageV2(). After signing, call verify_registration with the signature to receive your API key. No authentication required.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Your TRON wallet address (T-address format, 34 characters) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden and performs well: discloses return value (challenge string), specific signing requirements (tronWeb.trx.signMessageV2()), security context (private key usage), and prerequisite state (no authentication required). Lacks only error handling or rate limit details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences: purpose, output/operation requirement, next step, and auth status. Every sentence earns its place with no redundancy. Front-loaded with clear action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema and no annotations, description adequately explains return values (challenge string) and multi-step flow. Covers sufficient complexity for agent to execute registration workflow. Minor gap: no mention of error cases or address validation failures.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter description (T-address format, 34 characters). Description mentions 'TRON address' but does not add semantic value beyond schema's technical constraints. Baseline score appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Start') and resource ('agent registration'). Explicitly distinguishes from sibling 'verify_registration' by stating this is the initial step that returns a challenge requiring signature before verification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly documents the workflow sequence: this tool first, then signing, then 'call verify_registration'. Provides clear when-to-use context for beginning the registration process vs completing it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
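The register-then-verify flow the description lays out can be orchestrated generically. A sketch with the transport and signer injected; the challenge/challengeId response field names are assumptions (the tool publishes no output schema), and signMessage stands in for tronWeb.trx.signMessageV2 as named in the description:

```javascript
// Orchestrate register -> sign challenge -> verify_registration.
// `callTool` abstracts the MCP transport; `signMessage` stands in
// for tronWeb.trx.signMessageV2. The challenge/challengeId response
// field names are assumptions (the tool has no output schema).
async function registerAgent(callTool, signMessage, address) {
  const { challenge, challengeId } = await callTool('register', { address });
  const signature = await signMessage(challenge);
  return callTool('verify_registration', { address, challengeId, signature });
}
```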
register_pool - Register Selling Pool (Grade: A)
Register a TRON address as an energy/bandwidth selling pool on PowerSun. This creates a pool and auto-selling configuration. After registering, you must grant active permissions to the platform address (DelegateResource, UnDelegateResource, VoteWitness) so the platform can delegate resources to buyers and vote on your behalf. Use check_pool_permissions to verify permissions after granting them. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| autoVote | No | Auto-vote for the best Super Representative to earn rewards | true |
| sellEnergy | No | Enable energy selling | true |
| sellBandwidth | No | Enable bandwidth selling | false |
| paymentAddress | Yes | TRON address to register as pool (starts with T, 34 characters). This is the address where you hold/stake TRX. |
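The paymentAddress constraint (starts with T, 34 characters) is easy to pre-validate before calling the tool. A minimal shape check only; it does not verify the base58check checksum that a full validator such as a TRON client library would:

```javascript
// Cheap pre-flight check for the documented T-address shape
// (starts with T, 34 characters). This does NOT verify the
// base58check checksum; use a TRON library for real validation.
function looksLikeTronAddress(addr) {
  return typeof addr === 'string' && addr.length === 34 && addr.startsWith('T');
}
```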
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. Explains that operation 'creates a pool and auto-selling configuration' and discloses significant platform behavior: platform will 'delegate resources to buyers and vote on your behalf' using granted permissions. Auth requirement (API key) stated. Minor gap: does not clarify whether the operation is idempotent or reversible, or what costs it incurs beyond permission grants.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five efficient sentences with zero waste: (1) core purpose, (2) created artifacts, (3) critical permission requirements and platform capabilities, (4) verification step, (5) auth prerequisite. Logical flow from action to consequences to requirements. No redundant phrases or generic filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a registration tool with complex permission requirements. Covers the critical path: creation, required permissions, platform delegation behavior, and verification workflow. No output schema exists, but description explains what gets created (pool/configuration). Minor omission: could mention return value type (pool ID/transaction hash) or error conditions for duplicate registration.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Main description references 'TRON address' and 'energy/bandwidth selling' providing contextual grouping for parameters, but does not add syntax, format constraints, or behavioral semantics beyond what schema already provides (e.g., schema already notes address starts with T, 34 chars, and staking purpose).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific purpose: 'Register a TRON address as an energy/bandwidth selling pool on PowerSun.' Identifies specific resource (TRON address), platform (PowerSun), and function (register pool). Distinguishes from sibling configure_auto_selling (which modifies existing) and check_pool_permissions (which verifies).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent workflow guidance: states prerequisite 'Requires API key', mandatory post-action 'must grant active permissions' with specific permission types (DelegateResource, UnDelegateResource, VoteWitness), and explicitly references sibling tool check_pool_permissions for verification step. Clear sequencing from registration to permission granting to verification.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
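For context on what 'grant active permissions' entails on-chain: a TRON active permission carries a 32-byte operations bitmap in which each allowed contract type sets one bit. The sketch below assumes the conventional LSB-first byte layout and the contract-type IDs commonly cited in TRON protocol documentation for VoteWitnessContract (4), DelegateResourceContract (57), and UnDelegateResourceContract (58); verify these against the protocol before relying on them:

```javascript
// Build the 32-byte `operations` bitmap (hex) used by TRON's
// AccountPermissionUpdate to whitelist contract types on an
// active permission. LSB-first bit layout within each byte.
// The contract-type IDs are assumptions from TRON protocol docs,
// not from this server's tool definitions; verify before use.
const ASSUMED_CONTRACT_TYPES = {
  VoteWitnessContract: 4,
  DelegateResourceContract: 57,
  UnDelegateResourceContract: 58,
};

function operationsHex(typeIds) {
  const bytes = new Uint8Array(32);
  for (const id of typeIds) {
    bytes[Math.floor(id / 8)] |= 1 << (id % 8);
  }
  return Buffer.from(bytes).toString('hex');
}
```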
trigger_vote - Vote for SR (Grade: A)
Vote for the best Super Representative (SR) with the highest APY to earn voting rewards. The platform automatically selects the SR with the best return. Requires VoteWitness permission granted to the platform. Voting rewards accumulate and can be claimed automatically if auto-claim is enabled. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| poolAddress | No | Pool address to vote from. If omitted, uses your first registered pool. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Strong disclosure given no annotations: explains automatic SR selection algorithm (highest APY), reward accumulation mechanics, and auto-claim behavior. Also notes permission requirements. Only gap is lack of explicit transaction/return value description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose statement. Five sentences each add value (selection logic, permissions, rewards, auth). Minor stylistic choppiness with final fragment 'Requires API key.'
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-param tool but missing output/return value specification given lack of output schema. No annotations provided increases documentation burden, yet return format and confirmation mechanism remain undocumented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (poolAddress fully described). Tool description mentions voting generally but does not elaborate on the parameter beyond schema. Baseline 3 appropriate when schema carries full documentation load.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb (Vote) and resource (Super Representative/SR) with distinct scope (highest APY auto-selection). Distinguished from sibling delegation/transaction tools like get_pool_delegations or broadcast_transaction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear prerequisites (VoteWitness permission, API key) and implies use case (earn voting rewards automatically). However, lacks explicit when/when-not guidance or named alternatives compared to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
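The selection rule the description states (the SR with the best return) reduces to a max-by-APY pick. A sketch over a hypothetical list; the { name, apy } shape is illustrative only, not something the tool exposes:

```javascript
// Pick the Super Representative with the highest APY, mirroring the
// selection rule trigger_vote's description states. The { name, apy }
// shape is assumed for illustration.
function bestSuperRepresentative(srs) {
  if (srs.length === 0) throw new Error('no SRs to choose from');
  return srs.reduce((best, sr) => (sr.apy > best.apy ? sr : best));
}
```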
verify_registration - Verify Registration (Grade: A)
Complete agent registration by verifying your signed challenge. Returns an API key and upgrades the current session to authenticated. After verification, all authenticated tools (buy_energy, get_balance, register_pool, etc.) will work in this session without needing to reconnect. No authentication required.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Your TRON wallet address (must match the register call) | |
| signature | Yes | Signature from tronWeb.trx.signMessageV2(challenge, privateKey) | |
| challengeId | Yes | Challenge ID from the register tool |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses critical side effects: returns API key, upgrades session state, and enables authenticated tools. It lists specific affected siblings. Could improve by mentioning failure modes or retry behavior, but covers happy path completely.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly constructed sentences: purpose → immediate output → downstream effects → prerequisites. Each sentence earns its place. The 'etc.' efficiently handles the long sibling list without verbosity. Logical flow from action to consequence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an authentication-flow tool with 100% schema coverage but no output schema, the description adequately compensates by specifying the API key return and session state change. It explains the ecosystem integration (which tools become unlocked). Lacks only error-handling context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with detailed cross-tool references (e.g., 'must match the register call'). Description provides minimal additional parameter semantics beyond 'verifying your signed challenge,' which aligns with but does not extend the schema documentation. Baseline 3 appropriate when schema is comprehensive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Complete' with resource 'agent registration' and clearly distinguishes this from the prerequisite 'register' tool (via 'challengeId from the register tool') and from downstream authenticated tools (listing specific siblings like buy_energy that become available after).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Clearly establishes workflow position: must come after 'register' (challengeId source) and before using authenticated tools. The 'No authentication required' statement clarifies prerequisites. Missing explicit guidance on when NOT to use (e.g., if already authenticated).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
withdraw_earnings - Withdraw Earnings (Grade: A)
Withdraw TRX from your account balance to your wallet. Minimum withdrawal: 100 TRX. Withdrawal is processed on-chain and may take a few minutes.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Amount to withdraw in TRX (minimum 100 TRX) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable latency context ('processed on-chain', 'may take a few minutes') that annotations would otherwise not cover. However, as a financial transfer tool with zero annotations, it omits crucial behavioral details like transaction fees, irrevocability warnings, and failure/reversal policies.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero redundancy: action statement, hard constraint, and processing timing. Every clause delivers distinct information necessary for invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Reasonably complete for a single-parameter operation, covering constraints and timing. However, lacking an output schema, it should ideally describe what the tool returns (e.g., transaction hash, confirmation status) and lacks financial risk disclosures typical for withdrawal operations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage describing the amount constraint. Description adds semantic context that funds move 'to your wallet' (implied destination), though it could clarify if the wallet is pre-configured or specified elsewhere.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Withdraw'), resource ('TRX'), source ('account balance'), and destination ('your wallet'). Distinguishes from siblings like get_earnings or get_balance by implying a transactional transfer rather than information retrieval, though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides the critical constraint 'Minimum withdrawal: 100 TRX' which governs when the tool can be invoked. However, lacks explicit guidance on when to choose this over holding funds or interacting with other pool/financial tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
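The 100 TRX floor is the one hard constraint the agent controls, so it is worth pre-checking before an on-chain withdrawal is attempted. A trivial sketch; fetching the current balance (e.g. via get_balance) is assumed to be the caller's job:

```javascript
const MIN_WITHDRAWAL_TRX = 100; // documented minimum

// Pre-flight check before calling withdraw_earnings. Fetching the
// current balance (e.g. via get_balance) is the caller's job.
function canWithdraw(amountTrx, balanceTrx) {
  return amountTrx >= MIN_WITHDRAWAL_TRX && amountTrx <= balanceTrx;
}
```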
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.