Tronsave MCP Testnet
Server Details
A testnet TronSave MCP server focused on helping agents and clients buy and sell TRON resources quickly through one unified interface, with fast order execution, pricing/estimation tools, and secure session-based workflows.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 23 of 23 tools scored. Lowest: 2.6/5.
Each tool targets a distinct operation or resource with clear boundaries. Order management, auto-settings, auth, API keys, account info, market data, and internal extensions are all well-separated. No two tools have overlapping purpose.
All tools follow a consistent tronsave_verb_noun pattern in snake_case, making it predictable. Internal tools add an 'internal_' prefix for grouping. No mixing of conventions.
23 tools cover the full feature set of an exchange/delegation platform without being excessive. Each tool provides necessary functionality for orders, auto-settings, auth, and account management.
Covers order lifecycle (CRUD + cancel/sell) and auto-sell settings, but lacks tools for auto-buy settings creation and listing, despite having a delete_auto_buy_setting tool. This is a notable gap in the automation surface.
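The naming convention described above can be spot-checked with a small pattern. The regex below is inferred from the review's wording (tronsave_verb_noun in snake_case, optional 'internal_' grouping prefix); it is an illustration, not something the server defines.

```python
import re

# Pattern inferred from the review's description of the naming scheme.
# The trailing group is optional so single-word verbs like "login" pass.
TOOL_NAME_RE = re.compile(r"^tronsave_(internal_)?[a-z]+(_[a-z]+)*$")

for name in ("tronsave_cancel_order", "tronsave_estimate_buy_resource",
             "tronsave_login"):
    assert TOOL_NAME_RE.fullmatch(name), name

# Names outside the convention are rejected.
assert not TOOL_NAME_RE.fullmatch("cancelOrder")
```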
Available Tools
23 tools

tronsave_cancel_order (Cancel Order, grade B)
Cancel an existing open order by orderId. Side effect: the order status changes to cancelled and it is no longer matchable; this action is typically irreversible. Use only when the user explicitly wants to stop the order; prefer tronsave_update_order when changing price/receiver, and call tronsave_get_order first to confirm current state. Requires a valid signature session and mcp-session-id; cancellation may fail for already-fulfilled, already-cancelled, or unauthorized orders.
| Name | Required | Description | Default |
|---|---|---|---|
| orderId | Yes | Target order id (`MObjectId`) to cancel. Must be an active order owned/authorized by the current session; already-fulfilled or already-cancelled orders are expected to fail. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| code | Yes | |
| message | Yes | |
| success | Yes | |
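The session-based workflow can be sketched as a JSON-RPC `tools/call` request carrying the `mcp-session-id` header. This is a hedged sketch: the envelope below is the generic MCP shape, and the exact wire details of this server are assumptions.

```python
def build_cancel_order_request(order_id: str, session_id: str) -> tuple[dict, dict]:
    """Build HTTP headers and a JSON-RPC tools/call payload for
    tronsave_cancel_order. The mcp-session-id header comes from the tool
    description; the envelope is the generic MCP tools/call shape."""
    headers = {
        "Content-Type": "application/json",
        "mcp-session-id": session_id,  # returned by a prior tronsave_login
    }
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "tronsave_cancel_order",
            "arguments": {"orderId": order_id},
        },
    }
    return headers, payload
```

A client would POST this payload to the server's streamable-HTTP endpoint, ideally after confirming the order state via tronsave_get_order.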
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist. Description only states the cancel action and requirement, without disclosing side effects (e.g., refunds, status changes), reversibility, or error scenarios. Does not compensate for missing annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the main verb and resource. No extraneous information. Very efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Sufficient for a simple cancel operation with one required parameter. However, lacks any mention of output or confirmation, and no annotations provide extra context. Adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with a description for orderId. The description adds no new meaning beyond 'by orderId', so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action (cancel) and resource (order) with specific identifier (orderId). Differentiates well from sibling tools like 'update_order' or 'get_order'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions prerequisites (valid signature session and mcp-session-id) but provides no guidance on when to use this tool vs alternatives or conditions where cancellation is allowed. Lacks exclusions or context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tronsave_create_order (Create Order, grade A)
Create an order with a flat input schema and a selected paymentMethod (onchain, internal, or pending). orderUnitPrice uses SUN units (not TRX). For onchain, paymentSignedTx is required; for pending, paymentPendingRequester is optional. The pending method creates a pending order: payment is made by depositing to the system depositAddress (and depositAmount) returned in the response data, not by passing a signed transaction in this call. Side effect: creates a new live order that can be matched/executed by the market.
| Name | Required | Description | Default |
|---|---|---|---|
| orderReceiver | Yes | TRON base58 address (starts with T) that receives delegated resource. Same as `receiver` passed to `tronsave_estimate_buy_resource`. | |
| paymentMethod | Yes | Payment method for this order: `onchain` (wallet signed tx), `internal` (internal balance), or `pending` (deposit later flow). | |
| orderUnitPrice | Yes | Unit price in SUN. Prefer aligning with `minResourcePrice` or `buyResourcePrice` from the latest `tronsave_estimate_buy_resource` for the same receiver, amount, and duration. | |
| paymentSignedTx | No | Required when `paymentMethod=onchain`. Serialized signed TRON transaction object (GraphQL JSON). Produced only by wallet/client after user signs. | |
| orderDurationSec | Yes | Delegation duration in seconds. Must match the `durationSec` used in `tronsave_estimate_buy_resource`. | |
| orderResourceType | Yes | Resource to buy: ENERGY (smart contracts) or BANDWIDTH (transactions). Must match `tronsave_estimate_buy_resource`. | |
| orderResourceAmount | Yes | Amount of ENERGY or BANDWIDTH units to purchase. Same as `buyResourceAmount` in estimate when applicable. | |
| paymentPaymentAmount | Yes | Payment amount in SUN. Prefer deriving from `tronsave_estimate_buy_resource`; if converting to TRX, divide SUN by `1e6`. | |
| paymentPendingRequester | No | Optional requester hint for `pending` payment flow when API expects it. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| code | Yes | |
| data | No | |
| message | Yes | |
| success | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations present, so description must cover behaviors. Describes prerequisites and conditional requirements but lacks side effects, rate limits, or failure behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action and key differentiator (payment methods), then conditional details. No wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 9 parameters with conditional logic, description covers purpose, prerequisites, and parameter relationships. Output schema exists but not needed in description.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline 3. Description adds meaning by linking parameters to tronsave_estimate_buy_resource, explaining unit conversion (SUN to TRX), and clarifying conditional requirements.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Create an order' with specific details on payment methods and references to estimation tool. Distinguishes from sibling tools like tronsave_cancel_order and tronsave_update_order.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States requirement for signature session and mcp-session-id, and conditional fields per payment method. Could explicitly mention when not to use or list alternatives, but context implies proper usage.
tronsave_delete_auto_buy_setting (Delete Auto Buy Setting, grade A)
Delete one auto-buy setting permanently by id (MObjectId). Side effect: removes that rule and stops future executions from it. Requires a valid signature session and mcp-session-id. Use only when the user explicitly wants permanent removal; prefer update/disable flows when reversible behavior is desired. Deletion can fail for missing ids, unauthorized sessions, or already-removed settings.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Auto-buy setting id (`MObjectId`) to delete. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| code | Yes | |
| message | Yes | |
| success | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions permanent deletion and authentication requirements, which are important behavioral traits. However, it does not disclose any side effects or cascade behaviors, and there are no annotations to supplement. Moderate transparency.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences, front-loading the primary action and then specifying requirements. No filler or repetition.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity and the presence of an output schema, the description covers the essentials: what it does, the required parameter, and authentication. It omits error handling or irreversibility emphasis, but overall complete enough.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal value beyond the schema by restating the parameter's role and type (MObjectId), but does not provide new semantic context.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (delete), the resource (auto-buy setting), and the identifier (id). It differentiates from sibling tools by specifying 'auto-buy' vs 'auto-sell' or other resources.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a prerequisite ('requires a valid signature session and mcp-session-id') but does not indicate when to use this tool versus alternatives like updating or registering a setting. No explicit when-not-to-use guidance.
tronsave_estimate_buy_resource (Estimate Buy Resource, grade A)
Read-only quote endpoint for pre-trade estimation (no side effects): returns pricing/availability used to prepare orders. No signature session is required for basic estimate calls, but backend authorization/rate limits may still apply. Use this before tronsave_create_order to set orderUnitPrice and payment planning; use tronsave_list_order_books for depth snapshots instead of quote math. Invalid address/resource combinations can return errors or low-availability results.
| Name | Required | Description | Default |
|---|---|---|---|
| receiver | No | Receiver address. | |
| unitPrice | No | Unit price in SUN. | |
| durationSec | No | Duration of the order in seconds. Default 15 minutes | |
| resourceType | No | Resource type to buy. Default ENERGY | ENERGY |
| allowPartialFill | No | Allow partial fill of the order. | |
| buyResourceAmount | No | Amount of resource to buy. Default 100000 for ENERGY and 1000 for BANDWIDTH | |
| minResourceDelegateRequiredAmount | No | Minimum amount of resource to delegate. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| unitPrice | Yes | Unit price in SUN |
| durationSec | Yes | Delegated duration in seconds |
| estimateTrx | Yes | Estimated TRX cost in SUN |
| availableResource | Yes | Available resource amount |
| systemDepositAddress | Yes | System deposit address for onchain payment |
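The documented defaults (15-minute duration, 100000 for ENERGY, 1000 for BANDWIDTH) can be pre-filled client-side. This is a minimal sketch; the helper name and argument shape are assumptions.

```python
def estimate_buy_resource_args(resource_type: str = "ENERGY") -> dict:
    """Fill tronsave_estimate_buy_resource arguments with the documented
    defaults: 15-minute duration, 100000 for ENERGY or 1000 for BANDWIDTH."""
    if resource_type not in ("ENERGY", "BANDWIDTH"):
        raise ValueError("resourceType must be ENERGY or BANDWIDTH")
    return {
        "resourceType": resource_type,
        "durationSec": 15 * 60,  # default 15 minutes, expressed in seconds
        "buyResourceAmount": 100_000 if resource_type == "ENERGY" else 1_000,
    }
```

Reusing the same `durationSec` and amount in the subsequent tronsave_create_order call keeps the quote and the order comparable.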
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Mentions it's an estimation and returns pricing data, implying non-destructive. Could explicitly state read-only nature but does not contradict annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, each sentence adds value without redundancy. Efficiently communicates core information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose and output fields well but misses parameter explanations and does not mention authentication, rate limits, or other constraints. Adequate for a simple estimation tool but not comprehensive.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 17%, and the description does not explain input parameters (e.g., receiver, unitPrice, durationSec). It only describes output fields, failing to compensate for low schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool estimates buy-resource pricing, returns specific pricing fields, and directs use before order creation tools. It effectively distinguishes from sibling order tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to use it before tronsave_create_order, and guides how to apply the output (choose unitPrice and deposit hints). Provides clear context for when to invoke.
tronsave_generate_api_key (Generate API Key, grade A)
Generate a new internal API key credential for the current user. Side effect: issues secret material that grants internal-tool access. Requires a valid signature session and mcp-session-id. Use when the user needs fresh internal credentials; store returned data securely and use it with tronsave_login (apiKey mode). Treat this as sensitive output; unauthorized sessions or backend policy checks may reject issuance.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| code | Yes | |
| data | Yes | |
| message | Yes | |
| success | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description warns to store securely and states it returns raw key, but lacks details on regeneration or invalidation of previous keys.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, structured with no waste, front-loaded with purpose.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given an output schema exists, description adequately covers return value (raw key) and security note, though could mention one-time nature.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so description correctly adds no param info. Baseline 4 for zero parameters.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb 'Generate' and resource 'internal API key for the current user', and distinguishes from the sibling tool tronsave_revoke_api_key.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use the generated key (in tronsave_login with apiKey) and mentions it is for internal-tool authentication, though could add more about when not to use.
tronsave_get_deposit_address (Get Deposit Address, grade A)
Fetches the specific deposit address for the TronSave internal account. Requires a logged-in MCP session created by the tronsave_login tool: include mcp-session-id: <sessionId> returned by tronsave_login on subsequent MCP requests. Internal tools never accept API keys via tool arguments; the server forwards the API key cached in session to TronSave internal REST endpoints. Trigger this tool if the user asks for a deposit address or needs to top up their TronSave TRX balance. Constraints: 1) TRX only; 2) Minimum deposit amount is 10 TRX; 3) Read-only operation.
| Name | Required | Description | Default |
|---|---|---|---|
| amountTrx | Yes | Amount of TRX to deposit (minimum 10 TRX) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| amountTrx | Yes | Amount of TRX to deposit |
| depositAddress | Yes | TRON base58 deposit address, used for deposit TRX to TronSave internal account |
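The 10 TRX minimum stated in the constraints can be enforced client-side before calling the tool. The validator below is an illustrative sketch, not part of the server.

```python
MIN_DEPOSIT_TRX = 10  # constraint from the tool description (TRX only)

def deposit_args(amount_trx: float) -> dict:
    """Validate the documented 10 TRX minimum before calling
    tronsave_get_deposit_address."""
    if amount_trx < MIN_DEPOSIT_TRX:
        raise ValueError(f"minimum deposit is {MIN_DEPOSIT_TRX} TRX")
    return {"amountTrx": amount_trx}
```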
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses read-only nature, session requirement, and constraints. No annotations provided, so description carries full burden; lacks details on output format but output schema exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise and well-structured: purpose first, then guidelines, constraints in bullet points. Every sentence adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers prerequisites, triggers, and constraints. Output schema exists, so return values need not be described. Missing minor details like error conditions, but sufficient for a simple tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline 3. Adds value by specifying 'TRX only' constraint not in schema, reinforcing minimum amount from schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'fetches' and the specific resource 'deposit address for the TronSave internal account', distinguishing it from sibling tools that handle other operations.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states prerequisites (logged-in MCP session from tronsave_login), triggers (user asks for deposit address or top-up), and constraints (TRX only, min 10 TRX, read-only).
tronsave_get_internal_account (Internal Account Information, grade A)
Retrieve the Tronsave internal account profile for the current logged-in session (wallet/represent address, balances, deposit address). Requires a logged-in MCP session created by the tronsave_login tool: include mcp-session-id: <sessionId> returned by tronsave_login on subsequent MCP requests. Internal tools never accept API keys via tool arguments; the server forwards the API key cached in session to TronSave internal REST endpoints. Use when the user needs their linked address or balance. Read-only; does not submit orders or change chain state.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| balance | Yes | Amount in SUN |
| depositAddress | Yes | TRON base58 deposit address, used for deposit TRX to TronSave internal account |
| representAddress | Yes | TRON base58 represent address |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden. It states 'Read-only; does not submit orders or change chain state' and explains authentication via session and internal API key handling. Could be more detailed on rate limits or response format, but sufficient.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences, front-loaded with purpose and key details, with no redundant information. Every sentence adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with zero parameters and an output schema, the description covers purpose, usage context, required prerequisites, and behavioral safety. It is complete given the low complexity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100% and the description adds no param info. Baseline for 0 params is 4. Description does not need to add more.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Retrieve the Tronsave internal account profile' and specifies the data included (wallet/represent address, balances, deposit address). It distinguishes itself from siblings like tronsave_get_deposit_address by offering a comprehensive profile.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use when the user needs their linked address or balance' and notes the session requirement from tronsave_login. It does not contrast with specific sibling tools, but the read-only note helps differentiate from mutation tools.
tronsave_get_min_price (Get Minimum Unit Price, grade A)
Read-only market quote: returns GraphQL market.estimateMinPrice as minPrice (SUN per resource unit) for a given resourceType, buyAmount, and durationSec. Optional address scopes context when the API supports it. No login required; optional session forwards auth like tronsave_list_order_books. Use with tronsave_estimate_buy_resource for full buy quotes and tronsave_list_order_books for depth buckets.
| Name | Required | Description | Default |
|---|---|---|---|
| address | No | Optional TRON base58 address (`TronAddress`). When omitted, the server uses anonymous/global market context. | |
| buyAmount | No | Resource amount to price (same units as order book / delegate amounts for that resource type). Default 100000 for ENERGY and 1000 for BANDWIDTH | |
| durationSec | No | Delegation window in seconds; must match the duration you plan to use on a real order for comparable quotes. Default 15 minutes | |
| resourceType | No | Market leg: `ENERGY` or `BANDWIDTH`. Default ENERGY | ENERGY |
Output Schema
| Name | Required | Description |
|---|---|---|
| minPrice | Yes | Estimated minimum unit price in SUN for the given `resourceType`, `buyAmount`, and `durationSec`; align with `tronsave_estimate_buy_resource` and order `unitPrice` conventions (same as GraphQL `market.estimateMinPrice`). |
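Given the SUN-per-unit semantics above, a back-of-envelope total cost can be derived from minPrice. The helper is illustrative; it is a planning heuristic, not an execution guarantee.

```python
SUN_PER_TRX = 1_000_000  # 1 TRX = 1e6 SUN

def estimated_cost_trx(min_price_sun: int, buy_amount: int) -> float:
    """Rough total cost: minPrice (SUN per resource unit) times the
    resource amount, converted to TRX."""
    return (min_price_sun * buy_amount) / SUN_PER_TRX

# e.g. 100000 ENERGY at 60 SUN/unit costs about 6.0 TRX
cost = estimated_cost_trx(60, 100_000)
```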
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It declares 'read-only' and describes the return field, but lacks explicit statements about side-effects (none expected) or auth requirements beyond optional session. Adequate but not exhaustive.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose and concise. However, it could be split for readability. Every sentence adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations but presence of output schema, description sufficiently covers the tool's behavior. It identifies the GraphQL field and constraints, making it complete for a price query tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds context: explains default values, unit semantics (SUN), and requirement that durationSec must match real orders. Provides more meaning than raw schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is a read-only market quote returning minPrice for given parameters. It distinguishes itself from siblings like tronsave_estimate_buy_resource and tronsave_list_order_books, specifying its unique role.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance on when to use: for read-only price queries, optionally scoped by address. Recommends pairing with other tools for full buy quotes and order depth, and notes no login required but optional auth forwarding.
tronsave_get_order (Order Detail, grade A)
Read a single order by id to verify current state before taking actions. Returns compact order fields for NORMAL, FAST, and EXTEND types. Use this as the source of truth before tronsave_update_order, tronsave_sell_order_manual, or tronsave_cancel_order.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Target order id (`MObjectId`) to inspect. Use this to confirm current status/price before update, sell, or cancel actions. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| order | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the return type ('compact order information') and supported types, which adds some value. However, it omits details like whether the operation is read-only, idempotent, requires authentication, or what happens on missing ID. Given the simplicity of a get operation, the transparency is adequate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states the core action, second specifies return content and applicable order types. No superfluous words, front-loaded with the essential purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description need not explain return values. It covers the primary purpose, applicable order types, and the identifier, omitting only minor context such as error handling or prerequisites. That leaves it essentially complete for a simple query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already defines the 'id' parameter with the same description ('Order id (`MObjectId`).'). The description repeats this information without adding new meaning. Per the rule (coverage >80% baseline 3), the score is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get details'), the resource ('single order'), and the scope ('NORMAL, FAST, and EXTEND order types'), distinguishing it from sibling tools like 'tronsave_list_orders' or 'tronsave_cancel_order'. The verb+resource combination is precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. For example, it doesn't mention that for listing multiple orders one should use 'tronsave_list_orders', or that for order creation one should use 'tronsave_create_order_onchain'. The description only states what the tool does, not when to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tronsave_get_sign_message (Get Signature Message, grade A)
Get the nonce message used for wallet-based authentication. Sign this message client-side and submit the signature in tronsave_login to create a signature session.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| message | Yes | The message to sign. |
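A minimal sketch of the client-side step after this call: sign the returned message with the wallet, then join signature and timestamp with a single underscore as `tronsave_login` documents. The signature bytes and timestamp below are placeholders, and the timestamp unit is an assumption; a real client signs the exact nonce message returned here.

```python
def build_login_arguments(signature: str, timestamp: int) -> dict:
    """Assemble tronsave_login arguments after signing the nonce message
    client-side. The documented format joins signature and timestamp
    with one underscore: <signature>_<timestamp>."""
    return {"signature_timestamp": f"{signature}_{timestamp}"}

# Placeholder values; never send private keys, only the signature.
args = build_login_arguments("0x" + "ab" * 32, 1717000000)
```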
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description carries full burden. Only states it's a get operation without disclosing potential side effects, rate limits, or nonce expiration behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, first states purpose, second gives actionable instructions. No wasted words, front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 0-parameter tool with an output schema, the description covers the entire usage context: what it does, what to do with the result, and which sibling to use next.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters so no parameter info needed. Description adds value by explaining the purpose and usage beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'Get', the resource 'nonce message', and the purpose 'wallet-based authentication'. Distinguishes from sibling by linking to the next step in the flow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit instructions: sign client-side and submit in tronsave_login. Lacks explicit when-not-to-use, but the context of authentication flow is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tronsave_get_user_auto_setting (User Auto Setting, grade A)
Read-only endpoint returning the current user's auto-sell configuration (autoSettings) with no side effects. Requires a valid signature session and mcp-session-id. Use this before tronsave_register_auto_sell or tronsave_update_auto_sell_setting to avoid overwriting unknown fields; unauthorized or expired sessions will fail.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| vote | No | |
| energy | No | |
| bandwidth | No | |
| suggestSell | No | |
| withdrawVote | No | |
| permitOperations | No | |
| reclaimOnlyTronSave | No | |
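The read-before-update pattern the description recommends can be sketched as a shallow merge: read the current settings, overlay only the fields you intend to change, and send the merged object to the update tool so unknown fields survive. The snapshot values below are hypothetical.

```python
def merge_auto_settings(current: dict, changes: dict) -> dict:
    """Shallow-merge edits into settings read via
    tronsave_get_user_auto_setting so fields you did not touch survive a
    later tronsave_update_auto_sell_setting call."""
    merged = dict(current)
    merged.update(changes)
    return merged

# Hypothetical snapshot of the current configuration.
current = {"suggestSell": False, "reclaimOnlyTronSave": True}
update_payload = merge_auto_settings(current, {"suggestSell": True})
```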
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description only states it gets configuration, omitting any behavioral details (e.g., read-only nature, rate limits, required permissions).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with no fluff. Provides essential information in minimal space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers what the tool does and a prerequisite. Output schema exists, so return values are not needed. Minor gap: no explicit hint that it's safe/read-only.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema, so baseline 4 applies. Description adds no param info, which is acceptable as none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb ('Get'), resource ('auto-sell configuration'), and scope ('current authenticated user'). Distinguishes from sibling tools like 'tronsave_register_auto_sell' and 'tronsave_update_auto_sell_setting' as a read operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Prerequisite mentioned ('valid signature session and mcp-session-id') but no guidance on when to use this tool vs alternatives, e.g., when to read vs update.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tronsave_get_user_info (User Information, grade A)
Get the current authenticated user's profile and account settings. Requires a valid signature session from tronsave_login and mcp-session-id in request headers. Wallet signing always happens client-side; never send private keys to the server.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| info | Yes | |
| caller | Yes | TRON base58 address of the caller |
| address | Yes | TRON base58 address |
| balance | Yes | Amount in SUN, onchain balance of the address |
| internalAcc | Yes | |
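Since `balance` is denominated in SUN, a conversion helper avoids off-by-a-million mistakes when presenting values to users. On TRON, 1 TRX equals 1,000,000 SUN.

```python
SUN_PER_TRX = 1_000_000  # TRON denominates on-chain balances in SUN

def sun_to_trx(sun: int) -> float:
    """Convert the SUN balance returned by tronsave_get_user_info to TRX."""
    return sun / SUN_PER_TRX

print(sun_to_trx(2_500_000))  # 2.5
```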
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It discloses authentication requirements (signature session, mcp-session-id) and a security warning (no private keys to server). It does not explicitly state read-only nature, but it's implied. Good disclosure for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: the first states the purpose, the second provides prerequisites and a security warning. No fluff; every sentence earns its place. Front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with zero parameters, clear description, and an output schema (not shown but exists), the description is complete. It covers purpose, prerequisites, and security – all needed for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters and 100% coverage, so baseline is 4. The description adds value by stating the requirement for an `mcp-session-id` header, which is not a parameter but crucial context. This extra clarity elevates the score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool retrieves the authenticated user's profile and account settings. It specifies the resource ('user info') and the condition (authenticated). Among sibling tools focused on orders, keys, deposits, etc., this is the only user info tool, so no sibling differentiation needed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implicitly says when to use: after `tronsave_login` and with a valid `mcp-session-id` header. It also warns against sending private keys. However, it does not explicitly state when not to use or list alternatives, though no alternatives exist for this operation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tronsave_get_user_permissions (User Permissions, grade A)
Read-only endpoint returning enabled permission operations (autoSettings.permitOperations) for the authenticated user. Requires a valid signature session and mcp-session-id. Use this before auto-sell mutations to confirm allowed operations; unauthorized/expired sessions can fail.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| permitOperations | Yes | |
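The "check before mutating" guidance reduces to a simple membership test against the returned list. The operation name below is hypothetical; use whatever values `permitOperations` actually contains.

```python
def can_perform(permit_operations: list, operation: str) -> bool:
    """Gate an auto-sell mutation on the permitOperations list returned
    by tronsave_get_user_permissions."""
    return operation in permit_operations

# "SELL_ENERGY" is a placeholder operation name for illustration.
allowed = can_perform(["SELL_ENERGY"], "SELL_ENERGY")
```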
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. 'Get' implies read-only; mentions authentication requirements. No side effects disclosed, but minimal for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no redundancy. First sentence states purpose and details, second states requirements. Every word necessary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters, no annotations, and presence of output schema, description adequately covers purpose, requirements, and return semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema (100% coverage). Description adds value by specifying what is retrieved (permission list and the specific field `autoSettings.permitOperations`).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb+resource: 'Get the current authenticated user's enabled permission list' with specific field name. Distinguishes from many sibling tools that deal with orders, settings, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States prerequisites: 'Requires a valid signature session and `mcp-session-id`.' Does not explicitly exclude alternative tools, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tronsave_internal_create_extend_request (Submit Delegation Extension, grade A)
Submit an extension request for existing delegated resources on TronSave via authenticated REST POST /v2/extend-request. Requires a logged-in MCP session created by the tronsave_login tool: include mcp-session-id: <sessionId> returned by tronsave_login on subsequent MCP requests. Internal tools never accept API keys via tool arguments; the server forwards the API key cached in session to TronSave internal REST endpoints. Side effect: creates an extension order and may commit TRX from the internal account. extendData must follow the REST contract (see schema on each row). Populate it from TronSave outside this MCP—for example the authenticated POST /v2/get-extendable-delegates response field extendData, or another TronSave client. Do not copy rows blindly from tronsave_list_extendable_delegates (GraphQL); that payload shape differs and is for market discovery only.
| Name | Required | Description | Default |
|---|---|---|---|
| receiver | Yes | TRON base58 address of the account that receives the delegated resource. | |
| extendData | Yes | Delegator-level rows for `POST /v2/extend-request`. Each row matches TronSave’s REST extend contract (delegator, isExtend, extraAmount, extendTo ms)—typically taken from `extendData` returned by `POST /v2/get-extendable-delegates` when called with the user’s session, not from GraphQL `tronsave_list_extendable_delegates`. | |
| resourceType | No | Resource type. Defaults to `ENERGY` when omitted. | ENERGY |
Output Schema
| Name | Required | Description |
|---|---|---|
| orderId | Yes | TronSave order ID (hex string) |
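A sketch of one `extendData` row, assuming the field names listed in the parameter table (delegator, isExtend, extraAmount, extendTo in milliseconds). The values are placeholders; per the description, real rows should be taken from the authenticated `POST /v2/get-extendable-delegates` response, not constructed by hand or copied from the GraphQL discovery tool.

```python
def extend_row(delegator: str, is_extend: bool,
               extra_amount: int, extend_to_ms: int) -> dict:
    """One extendData row for POST /v2/extend-request, using the field
    names from the REST extend contract described above."""
    return {
        "delegator": delegator,
        "isExtend": is_extend,
        "extraAmount": extra_amount,
        "extendTo": extend_to_ms,
    }

# Placeholder address and epoch-milliseconds deadline.
row = extend_row("TDelegatorAddressPlaceholder", True, 0, 1_720_000_000_000)
```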
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It discloses that the tool 'may commit TRX' and describes session/API key handling. While it doesn't elaborate on exact conditions for TRX commitment or error scenarios, it provides sufficient behavioral context beyond the schema. A higher score would require more precise details on side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences long, each serving a purpose: purpose statement, session requirement, API key handling, and workflow step. It is front-loaded with the essential information and contains no redundant or verbose content. Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description does not need to explain return values. It covers purpose, prerequisites, workflow integration, parameter origin, and behavioral implications (TRX commitment). Every aspect an agent needs to invoke the tool correctly is addressed, making it fully complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining that extendData should be passed unchanged from internal.extend.delegates and that resourceType defaults to ENERGY when omitted. This goes beyond schema descriptions, aiding correct parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb and resource: 'Submit an extension request for existing delegated resources on TronSave'. It also distinguishes itself from siblings by positioning it as step 2 after internal.extend.delegates, and mentions it may commit TRX. This is specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use as step 2 after internal.extend.delegates; pass extendData from that response unchanged.' It also specifies prerequisites: requires a logged-in MCP session from tronsave_login, and notes that internal tools do not accept API keys via arguments. This provides clear guidance for correct usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tronsave_list_extendable_delegates (Extendable Delegates, grade C)
Read-only: lists extendable delegate candidates for a receiver and resourceType (ENERGY|BANDWIDTH); optional suggestData scores an extend-and-buy scenario. Does not create orders or change on-chain state. Works without mcp-session-id; when a session is present, auth is forwarded so results can reflect the logged-in account where supported. Input and output field meanings are documented on the schemas.
| Name | Required | Description | Default |
|---|---|---|---|
| receiver | Yes | TRON base58 address (typically starts with `T`) that currently receives or will receive the delegated resource. This is the primary filter: the response describes delegates relevant to extending that account's delegation. | |
| requester | No | Optional TRON base58 address of the viewing/requesting party. When an MCP session is active, the server may still derive identity from auth headers; use this when the API expects an explicit requester string distinct from the receiver. | |
| suggestData | No | Optional nested scenario for extend-and-buy suggestion scoring. If omitted, the query returns delegate rows without that hypothetical. If provided, supply all four fields together; partial objects are invalid for typical GraphQL input shapes. | |
| resourceType | Yes | Which resource market leg to query: `ENERGY` for contract execution headroom, `BANDWIDTH` for transaction bandwidth. Must match how you plan to extend or buy. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| extendableDelegates | Yes | Null when the market has no payload or GraphQL returned no branch; check tool-level errors for hard failures. |
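The all-or-nothing rule for `suggestData` ("supply all four fields together; partial objects are invalid") can be enforced with a small pre-flight check. The schema's four field names are not reproduced in this listing, so the helper takes them as a parameter rather than guessing.

```python
def valid_suggest_data(suggest_data, required_fields):
    """suggestData is all-or-nothing: omit it entirely, or supply every
    field the schema defines. required_fields stands in for the four
    schema field names, which are not listed here."""
    if suggest_data is None:
        return True
    return set(required_fields).issubset(suggest_data)
```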
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It does not disclose whether the operation is read-only, idempotent, or has any side effects, permissions, or rate limits. This is a significant gap for a tool that likely involves fetching data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, no fluff, and front-loaded with the core action. However, it could be improved by structuring parameter explanations more clearly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters including a complex nested object, and no annotations, the description is incomplete. It lacks behavioral details and parameter semantics, leaving an agent underinformed despite an output schema being present.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions 'receiver address', which corresponds to the 'receiver' parameter, but does not explain 'requester', 'suggestData' (a complex nested object), or 'resourceType' (which has an enum). With 0% schema description coverage, the description fails to add meaning beyond the raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (get) and resource (extendable delegate candidates) and specifies the context (for a receiver address). However, it does not distinguish from the sibling 'tronsave_internal_list_extendable_delegates', which likely serves a similar purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a usage hint ('Useful when selecting delegates that can be extended for resource operations'), implying when to use it, but no explicit guidance on when not to use it or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tronsave_list_order_books (Order Book, grade C)
Read-only market depth endpoint (no side effects). Returns price buckets (min, max, value) for ENERGY or BANDWIDTH, optionally scoped by viewer address, minimum delegate amount, and duration. Use this to estimate market ranges before create/update decisions; use tronsave_estimate_buy_resource for quote-style buy estimation and tronsave_get_order for one concrete order.
| Name | Required | Description | Default |
|---|---|---|---|
| address | No | Optional viewer/requester wallet context. | |
| durationSec | No | Optional delegation duration filter in seconds. | |
| resourceType | Yes | Order-book side to query (`ENERGY` or `BANDWIDTH`). | |
| minDelegateAmount | No | Optional minimum delegate amount floor for bucket filtering. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| orderBook | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It only mentions returns but does not disclose if the operation is read-only, side effects, permissions needed, or rate limits. For a market data tool, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with key information: purpose, resource types, return format, and optional filters. It is efficient but could be structured for better scannability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters, no annotations, and an output schema, the description is partially complete. It covers main function and filters but omits the address parameter and does not elaborate on filter behavior or output schema structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low (25%), but the description adds context for resourceType (ENERGY/BANDWIDTH) and names optional amount and duration filters (mapping to minDelegateAmount and durationSec). However, it does not explain the address parameter or the precise effect of the filters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets market order book data for ENERGY or BANDWIDTH and specifies return format (price buckets). It implicitly differentiates from sibling tools like tronsave_list_orders by focusing on order books vs. orders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool over alternatives or conditions for optional filters. The description does not mention when to apply amount/duration filters or any prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tronsave_list_orders (List Orders, grade C)
Read-only; output is { orders: [...] } where each row is a compact order: id, optional requester (address), receiver.address, resourceType, resourceAmount, remainAmount, durationSec, unitPrice (SUN), isOwner, isMatching, apy, createdAt, typeOrder (NORMAL|FAST|EXTEND). Input status maps to GraphQL isFulfilled (ACTIVE→UNFULFILLED, COMPLETED→FULFILLED). onlyMyOrder requires signature login + mcp-session-id and scopes to that wallet as requester. Paging via offset/limit only; use tronsave_get_order with an id for one full snapshot before changing an order.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum rows to return in one call. Prefer modest limits to avoid huge payloads; page with `offset`. | |
| offset | No | Zero-based row offset for page N (skip first `offset` matches). Omit when starting from the first page. | |
| status | No | Lifecycle filter mapped to GraphQL `isFulfilled`: `ACTIVE` → `UNFULFILLED` (not yet fulfilled), `COMPLETED` → `FULFILLED`. | |
| resourceType | No | Restrict to `ENERGY` or `BANDWIDTH` orders; omit for both. | |
| isOnlyMyOrder | No | When `true`, only orders whose `requester` is the wallet from the signature session; include `mcp-session-id`. Api-key-only sessions have no wallet address and cannot use this filter. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| orders | Yes | Matching orders for this page; may be large—use pagination inputs to chunk. |
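The status-to-GraphQL mapping and offset/limit paging described above can be sketched directly; the mapping values come from the tool description, while the helper is an illustrative convenience, not part of the API.

```python
# Status filter values map onto GraphQL isFulfilled as documented above.
STATUS_TO_IS_FULFILLED = {
    "ACTIVE": "UNFULFILLED",   # order not yet fulfilled
    "COMPLETED": "FULFILLED",  # order fully matched
}

def next_page(offset: int, limit: int) -> dict:
    """Advance offset/limit pagination for tronsave_list_orders."""
    return {"offset": offset + limit, "limit": limit}
```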
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. The description does not disclose if this is a read operation, authentication requirements, rate limits, or data freshness. It implies listing but omits behavioral traits like side effects or scope.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single 16-word sentence is concise but overly truncated. It front-loads the main action but lacks a structured breakdown or examples, sacrificing completeness for brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 10 parameters, no annotations, and an output schema, the description is too sparse. Does not cover full filter set, pagination mechanics, or response structure. Incomplete for a complex list endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 10 parameters with 0% description coverage. Description only mentions four filters (resourceType, isOwner, isMatching, sort options) but ignores others like limit, offset, fromId, receivers, isFulfilled, myAddress. Does not add meaning beyond what the schema minimally provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get order history with pagination and common filters', identifying the resource (order history) and key features (pagination, filters). It distinguishes from siblings like tronsave_get_order (single order) and tronsave_create_order (creation). However, it could be more specific about listing vs history.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like tronsave_get_order (for a single order) or tronsave_cancel_order. No context about prerequisites, owner scope, or intended use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
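The pagination and filter mechanics the assessment flags as undocumented can be sketched as a small argument builder. This is an illustrative sketch only: the parameter names (limit, offset, fromId, resourceType, isOwner, isMatching, isFulfilled, receivers, myAddress) are taken from the assessment text above, and their exact semantics should be confirmed against the tool's input schema.

```python
def build_order_history_args(limit=20, offset=0, **filters):
    """Assemble an argument dict for tronsave_get_order_history.

    limit/offset drive pagination; keyword filters (e.g. resourceType,
    isOwner, isMatching, fromId, isFulfilled) are included only when set,
    so unset filters do not constrain the query. Parameter names are
    assumptions drawn from the quality assessment, not a verified schema.
    """
    args = {"limit": limit, "offset": offset}
    args.update({k: v for k, v in filters.items() if v is not None})
    return args
```

A caller would then page through results by incrementing `offset` (or advancing `fromId`, if cursor pagination is supported) while holding the filters fixed.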
tronsave_login: Login Session (grade A)
Create an authenticated server session and return a sessionId for subsequent tool calls. Default mode is wallet signature login for platform tools; secondary mode is apiKey login for internal tools. For wallet login, ALWAYS call tronsave_get_sign_message first, sign that exact message client-side, then call tronsave_login with signature_timestamp in exact format <signature>_<timestamp> (signature and timestamp joined by _). Use returned sessionId as mcp-session-id on every subsequent request.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| expiresAt | Yes | |
| sessionId | Yes | Session id to pass as mcp-session-id in subsequent calls. |
| walletAddress | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description covers authentication modes, return value (sessionId), and required input format. It does not mention session expiration or error handling, but overall provides good behavioral insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense paragraph with clear structure: purpose, modes, procedure. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers prerequisites, modes, output usage, and required format. Lacks error handling details but is complete for authentication setup. Output schema exists to clarify return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter descriptions, but the description adds value by explaining the relationship between apiKey and signature (mutually exclusive modes) and specifying the exact concatenation format for the signature parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool creates an authenticated server session and returns a sessionId, distinguishing two authentication modes (wallet signature and apiKey). This differentiates it from sibling tools like tronsave_get_sign_message or order-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly describes when to use wallet login (platform tools) vs apiKey login (internal tools), and provides step-by-step instructions for wallet login including prerequisite call, message signing, and required signature format.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tronsave_register_auto_sell: Register Auto Sell (grade A)
Create initial auto-sell configuration for the authenticated user. Side effect: persists automation settings that affect future delegation/sell behavior. Requires a valid signature session and mcp-session-id. Use this for first-time setup; use tronsave_update_auto_sell_setting for subsequent edits, and read current state with tronsave_user_auto_setting_get first. Mutation can fail for invalid config combinations, unauthorized sessions, or backend policy restrictions.
| Name | Required | Description | Default |
|---|---|---|---|
| poolSetting | No | | |
| addOnFeature | No | | |
| paymentConfig | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| code | Yes | |
| data | Yes | |
| message | Yes | |
| success | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must bear all behavioral disclosure. It only states 'Register' without explaining side effects, idempotency, permissions, or what happens if settings already exist. The output schema exists but is not described, leaving agents uninformed about return behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence plus a brief note on related tools. It is concise and front-loads the key purpose. However, it could be slightly more structured by separating the usage guidelines into a bullet or separate sentence for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose and usage alternatives, but lacks details on return value, error handling, prerequisites, and behavioral nuances like idempotency. Given the tool's complexity (3 top-level parameters with nested objects) and absence of annotations, the description is incomplete for fully informed invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage (context signals), the description groups parameters into high-level categories (paymentConfig, poolSetting, addOnFeature) and hints at suboptions (e.g., energy/bandwidth limits, vote, stake, suggestSell). However, it lacks detailed semantics for individual fields, which the schema does provide but is not reflected in the coverage metric. The description adds some meaning but insufficient for a complex nested schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool registers auto-sell settings for the current user, lists the optional parameter groups (paymentConfig, poolSetting, addOnFeature), and distinguishes from siblings by mentioning alternative tools for checking and updating settings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises checking current settings with tronsave_user_auto_setting_get before registration, and notes that later changes should use tronsave_update_auto_sell_setting. This provides clear usage guidance and distinguishes when to use this tool vs siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tronsave_revoke_api_key: Revoke API Key (grade A)
Revoke the current internal API key immediately. Side effect: existing internal-tool access that depends on that key can stop working. Requires a valid signature session and mcp-session-id. Use when rotating credentials or responding to key exposure; call tronsave_generate_api_key afterwards if continued internal access is needed. Operation is effectively destructive for the old key and may fail for unauthorized sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| code | Yes | |
| data | Yes | |
| message | Yes | |
| success | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description thoroughly discloses that revocation is immediate, existing sessions stop working, and a new key must be generated. This is comprehensive for a destructive action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, concise and front-loaded with the main action. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and presence of output schema, the description is complete. It covers the action, effects, and required follow-up steps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%. The description adds no parameter info, but none is needed. Baseline 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action: revoke the current internal API key. It is specific and distinguishes from siblings like tronsave_generate_api_key.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the immediate effect and necessary follow-up (generate new key and login), providing good context for when to use this tool. However, it does not explicitly mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
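The rotation flow the description implies (revoke, then regenerate only if continued access is needed) can be sketched as a two-step sequence. The callables here stand in for invocations of tronsave_revoke_api_key and tronsave_generate_api_key through whatever MCP client is in use; this is an illustrative ordering, not a definitive client implementation.

```python
def rotate_api_key(revoke, generate):
    """Sketch of the rotation flow: revoke the old key first (it stops
    working immediately and dependent internal-tool access breaks), then
    mint a replacement only because continued internal access is needed.

    `revoke` and `generate` are stand-ins for calls to
    tronsave_revoke_api_key and tronsave_generate_api_key.
    """
    revoke()            # destructive: old key is dead from this point on
    return generate()   # fresh key for a subsequent apiKey login
```

The order matters: generating first and revoking second would leave a window where the caller cannot tell which key is live.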
tronsave_sell_order_manual: Sell Order Manual (grade A)
Manually execute seller-side fulfillment using orderId plus wallet signedTx. Side effect: executes a market/delegation transaction and may consume balances/resources. Use only when explicit manual sell is intended; call tronsave_get_order first to verify order state before signing.
| Name | Required | Description | Default |
|---|---|---|---|
| orderId | Yes | Target order id (`MObjectId`) to manually fulfill. Should be an order eligible for seller-side manual execution. | |
| signedTx | Yes | Required wallet-signed transaction payload (GraphQL JSON scalar). Must come from client-side signing; never fabricate. | |
| paymentAddress | No | Optional payout/payment address override when settlement flow requires it. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| code | Yes | |
| message | Yes | |
| success | Yes | |
| delegatedId | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist; description discloses manual fulfillment and requirement for signed delegate transaction but does not detail side effects, idempotency, or reversibility.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The sentences efficiently convey purpose and prerequisite, though they could be slightly more streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given output schema exists, description covers key behaviors; could mention success/failure indication or permission requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds context beyond schema: orderId from listing, signedTx as delegate, and optional paymentAddress purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool manually fulfills a sell order using orderId and signedTx, distinguishing it from siblings like cancel or create orders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to call tronsave_get_order first, providing a prerequisite step for correct usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
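Since the output schema lists code, message, success, and an optional delegatedId, a caller can interpret the response defensively. A minimal sketch, assuming the response arrives as a plain dict matching that schema:

```python
def check_sell_result(resp: dict):
    """Interpret a tronsave_sell_order_manual response per its output
    schema (code, message, success, optional delegatedId).

    Raises on failure so an agent does not silently proceed after a
    rejected fulfillment; returns delegatedId (possibly None) on success.
    """
    if not resp.get("success"):
        raise RuntimeError(
            f"manual sell failed ({resp.get('code')}): {resp.get('message')}"
        )
    return resp.get("delegatedId")
```

Pairing this check with a tronsave_get_order call before signing, as the description advises, covers both ends of the manual-sell flow.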
tronsave_update_auto_sell_setting: Update Auto Sell Setting (grade A)
Update existing auto-sell configuration with partial fields. Side effect: overwrites stored automation settings for the current user. Requires a valid signature session and mcp-session-id. Use this for incremental changes after registration; read baseline via tronsave_user_auto_setting_get to avoid accidental resets, and use tronsave_register_auto_sell only for first-time setup. Mutation can fail for invalid field combinations, unauthorized sessions, or policy constraints.
| Name | Required | Description | Default |
|---|---|---|---|
| poolSetting | No | | |
| addOnFeature | No | | |
| paymentConfig | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| code | Yes | |
| data | Yes | |
| message | Yes | |
| success | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It indicates mutation ('Update') and partial updates ('partial fields'), and advises to read current values first. However, it does not disclose what happens to unspecified fields (assumed unchanged), required permissions, or side effects. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (two sentences) and front-loaded with the main purpose. No unnecessary words. However, it could be slightly more informative about the partial update behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the input schema (3 nested objects, many fields), the description is short. It references registration and the get tool for context, but does not explain the output schema or explicitly state that all fields are optional and unchanged fields remain as-is. Partially complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It only mentions that the tool supports the same optional inputs as registration and lists parameter names (`paymentConfig`, `poolSetting`, `addOnFeature`), but does not add meaning beyond the schema property names. No details on field types or behaviors are provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Update the current user's auto-sell configuration with partial fields', indicating a specific action (update) and resource (auto-sell configuration). It distinguishes from sibling tools like `tronsave_register_auto_sell` (registration) and `tronsave_user_auto_setting_get` (read).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises to read current values with `tronsave_user_auto_setting_get` before submitting partial updates, providing a clear when-to-use guidance. It also mentions that the tool supports the same inputs as registration, linking to a sibling tool. However, it does not explicitly state when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
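The "read baseline first to avoid accidental resets" advice can be made concrete with a client-side merge: fetch current settings via tronsave_user_auto_setting_get, overlay only the changed fields, and submit the result. This is a sketch under an assumed semantics (the description does not state how the server treats unspecified fields), using a shallow merge per top-level group.

```python
def merge_auto_sell_update(current: dict, partial: dict) -> dict:
    """Overlay partial fields onto the current auto-sell config before
    calling tronsave_update_auto_sell_setting, so sub-fields inside
    poolSetting / addOnFeature / paymentConfig that the caller did not
    mention are preserved rather than reset.

    Shallow merge per top-level group; this is an assumed safety measure,
    not behavior confirmed by the tool description.
    """
    merged = dict(current)
    for key, value in partial.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = {**merged[key], **value}  # keep untouched sub-fields
        else:
            merged[key] = value
    return merged
```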
tronsave_update_order: Update Order (grade A)
Update an existing open order by orderId (partial fields: receiver, newPrice). Use this instead of cancel+recreate when only parameters change. May fail if the order is already fulfilled/cancelled or not editable by the current session.
| Name | Required | Description | Default |
|---|---|---|---|
| orderId | Yes | Target order id (`MObjectId`) to update. Order must still be open and editable. | |
| newPrice | No | Optional replacement unit price in SUN. | |
| receiver | No | Optional replacement receiver TRON address. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| code | Yes | |
| message | Yes | |
| success | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It indicates mutation ('update') and partial updates, but does not disclose side effects, authorization needs, or merge behavior. The existence of an output schema is not leveraged.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-load the main purpose and key features. No unnecessary words; every part adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the essentials: action, target, and partial update capability. Given the presence of an output schema, return values are not needed. However, it could mention that orderId is required and that updates are partial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description echoes the schema fields (receiver, newPrice) without adding new meaning beyond what is already in the input schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (update) and resource (existing order by orderId), and explicitly mentions partial updates on fields like receiver and newPrice, distinguishing it from sibling tools for creation, cancellation, or listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (to update an order), but provides no explicit guidance on when not to use or which alternative sibling tool to choose. No exclusions or context are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
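Since the description warns that updates fail for already-fulfilled or cancelled orders, an agent can guard the call by checking order state first (e.g. via tronsave_get_order). The status values below are assumptions for illustration; the actual status vocabulary should be confirmed against the API.

```python
# Assumed editable statuses; confirm the real vocabulary via tronsave_get_order.
EDITABLE_STATUSES = {"open"}

def can_update_order(order: dict) -> bool:
    """Gate tronsave_update_order calls: per the description, updates
    fail for already-fulfilled or cancelled orders, so check the current
    state before attempting a partial update of receiver/newPrice.
    """
    return order.get("status") in EDITABLE_STATUSES
```

This keeps the update-vs-cancel decision explicit: if the order is no longer editable, the agent falls back to tronsave_cancel_order or creates a new order instead.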
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.