aarna-mcp
Server Details
Aarna - 34 tools for swap quotes, liquidity pools, and trading data
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: junct-bot/aarna-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
32 tools

assets_cross_chain_controller_get_pendle_assets_metadata (Grade: A)
Get all assets (cross-chain) — Returns the flat list of all PT, YT, LP, and SY tokens supported in the Pendle app, across all chains. Each entry includes: name, symbol, contract address, decimals, expiry timestamp (for PT/YT), and icon URL. Filter by chainId, asset id, or type (pt, yt, lp, sy). Prices are not included — use Get asset prices to retrieve them separately. For chain-specific asset lists with richer metadata and prices bundled, use the chain-scoped Get all assets endpoint instead. Returns: { assets
| Name | Required | Description | Default |
|---|---|---|---|
| ids | No | Token ids to fetch data for (comma-separated); leave blank to fetch all tokens. Up to 20 ids allowed. | |
| skip | No | Number of results to skip. | |
| type | No | Asset types to filter by (comma-separated). Valid values: `PENDLE_LP`, `SY`, `PT`, `YT`. Leave blank to fetch all asset types. | |
| limit | No | Maximum number of results to return. Leave blank to fetch all results. | |
| chainId | No | Chain id to filter by, leave blank to fetch all chains. | |
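The optional filters above can be combined. A minimal sketch (plain Python, no real HTTP client) of assembling the documented query arguments, enforcing the 20-id cap and the four valid `type` values locally:

```python
# Sketch of assembling filter arguments for the cross-chain asset list.
# Constraints mirror the parameter table: at most 20 ids, and only the
# four documented asset types. Omitted filters fetch everything.
VALID_TYPES = {"PENDLE_LP", "SY", "PT", "YT"}

def build_asset_query(ids=None, types=None, chain_id=None, limit=None, skip=None):
    params = {}
    if ids:
        if len(ids) > 20:
            raise ValueError("up to 20 ids allowed")
        params["ids"] = ",".join(ids)
    if types:
        invalid = set(types) - VALID_TYPES
        if invalid:
            raise ValueError(f"invalid asset types: {sorted(invalid)}")
        params["type"] = ",".join(types)
    if chain_id is not None:
        params["chainId"] = chain_id
    if limit is not None:
        params["limit"] = limit
    if skip is not None:
        params["skip"] = skip
    return params
```

For example, `build_asset_query(types=["PT", "YT"], chain_id=1)` yields `{"type": "PT,YT", "chainId": 1}`.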
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full disclosure burden and succeeds in describing the data structure (flat list), enumerating included fields (name, symbol, contract address, decimals, expiry, icon URL), and explicitly stating exclusions (prices). Minor gaps remain around rate limits, max pagination bounds, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with logical flow: purpose → data contents → filtering capabilities → exclusions/alternatives → return hint. Each sentence earns its place. Minor deduction for the trailing truncated text 'Returns: { assets' which appears to be a formatting artifact.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters (all optional), no annotations, and no output schema, the description compensates effectively by detailing the return structure and field composition. It appropriately delegates parameter specifics to the well-documented schema while covering high-level behavior and sibling relationships thoroughly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), the description adds value by semantically mapping the abstract `type` parameter to concrete token categories 'pt, yt, lp, sy' and contextualizing the filter parameters (chainId, id, type) as operating on the cross-chain dataset. It reinforces that blank values fetch all results.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a 'flat list of all PT, YT, LP, and SY tokens supported in the Pendle app, across all chains' — specific verb (Get/Returns), resource (assets/tokens), and scope (cross-chain, multi-type). It distinguishes from siblings by explicitly contrasting with the chain-scoped 'Get all assets' endpoint and the 'Get asset prices' endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly provides when-not-to-use guidance: 'Prices are not included — use [Get asset prices]... to retrieve them separately' and 'For chain-specific asset lists with richer metadata... use the chain-scoped [Get all assets]... instead'. Names specific alternatives, making selection criteria clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
chains_controller_get_supported_chain_ids (Grade: B)
Get supported chain IDs Returns: { chainIds: number[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It documents the return type structure '{ chainIds: number[] }', which compensates partially for the missing output schema. However, it omits other behavioral traits like rate limits, caching behavior, or whether 'supported' implies active trading pairs vs. basic RPC availability.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief for a simple tool, but suffers from awkward grammatical structure by concatenating the action and return value into a single sentence fragment. The return type information is front-loaded but not cleanly separated.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless tool with high inherent simplicity, the description is sufficiently complete. It documents the response format in lieu of an output schema, which fulfills the primary completeness requirement for this type of metadata endpoint.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which establishes a baseline score of 4 per the evaluation guidelines. The description implicitly confirms no inputs are required by not mentioning any, which aligns with the empty schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('supported chain IDs'), specifying what the tool retrieves. However, it lacks explicit differentiation from sibling cross-chain tools regarding which specific 'supported' chains (e.g., trading-enabled vs. bridged) are being referenced.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to invoke this tool versus the numerous sibling alternatives (e.g., cross-chain controllers). There are no prerequisites, ordering constraints, or conditions mentioned for selecting this over other chain-aware tools.
dashboard_controller_get_merkle_claimed_rewards (Grade: A)
Get all merkle claimed rewards for a user — Returns the historical record of all merkle rewards that a user has already claimed. Useful for displaying claim history in a dashboard or verifying that a specific reward distribution has been collected. Returns: { claimedRewards: { user: string, token: string, merkleRoot: string, chainId: number, assetId: string, amount: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| user | Yes | Ethereum address of the user | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses the read-only nature and documents the return structure shape inline (compensating for missing output schema), but omits pagination behavior, rate limits, or error conditions for the 'all' query.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured with clear purpose statement, usage context, and explicit return type documentation. The return structure JSON is verbose but necessary given lack of output schema; every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read tool, description is complete. It compensates for missing output schema by documenting return fields (user, token, merkleRoot, etc.), though could note pagination or result limits for 'all' queries.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (user parameter fully documented as Ethereum address). Description implies the user context ('for a user') but doesn't add syntax or format details beyond what the schema provides. Baseline 3 appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Get' with resource 'merkle claimed rewards'. Clearly identifies this retrieves historical claim records versus current positions or pending rewards, though could explicitly distinguish from sibling dashboard_controller_get_user_positions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides usage context ('Useful for displaying claim history... or verifying... collected') but lacks explicit guidance on when to use alternatives or when not to use this tool.
dashboard_controller_get_user_positions (Grade: A)
Get user positions by address — Returns a complete snapshot of a user's Pendle positions across all supported chains, including PT, YT, LP, and SY holdings. For each position the response includes token amounts, USD valuations, and claimable interest/reward amounts. Caching: claimable reward amounts are cached for up to 24 hours. For real-time claimable data, read directly from the reward contracts via RPC. Use the chainId query parameter to filter results to a specific chain. Returns: { positions: { chainId: number, totalOpen: number, totalClosed: number, totalSy: number, openPositions:
| Name | Required | Description | Default |
|---|---|---|---|
| user | Yes | path parameter: user (string) | |
| filterUsd | No | Minimum USD value threshold to filter positions | |
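Since `user` travels in the path and `filterUsd` is an optional query filter, assembling the request can be sketched as below. The address check and request shape are illustrative assumptions, not the server's documented validation:

```python
# Sketch of assembling the positions request: 'user' is a path
# parameter, 'filterUsd' an optional query filter. The 0x-address
# check is illustrative; server-side validation is not documented.
def build_positions_request(user, filter_usd=None):
    if not (user.startswith("0x") and len(user) == 42):
        raise ValueError("expected a 0x-prefixed Ethereum address")
    request = {"path": {"user": user}, "query": {}}
    if filter_usd is not None:
        request["query"]["filterUsd"] = filter_usd
    return request
```

Note that claimable reward amounts in the response may be up to 24 hours stale per the caching note above.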
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses critical behavioral traits: claimable reward amounts are cached for 24 hours, the tool returns complete snapshots including USD valuations across all supported chains, and it specifies the position types (PT, YT, LP, SY) included in the response.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the essential action and resource, uses bold formatting effectively to highlight caching behavior, and groups related information logically. However, it is marred by a truncated return value schema at the end and the inclusion of a phantom parameter reference.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description compensates reasonably well by detailing the response contents (position types, valuations, claimable amounts) and caching semantics. However, it fails to provide complete return structure documentation (cut off mid-definition) and omits error handling or authentication requirements.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters documented. The description adds semantic value by clarifying that the 'user' parameter expects an 'address.' However, it detracts value by referencing a non-existent `chainId` parameter, creating confusion about the actual interface.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] user positions by address' and specifies the resource as 'Pendle positions across all supported chains, including PT, YT, LP, and SY holdings.' This distinguishes it from sibling tools that handle orders, market data, or transactions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description provides one clear exclusion—'For real-time claimable data, read directly from the reward contracts via RPC'—it erroneously instructs users to 'Use the `chainId` query parameter to filter results,' which does not exist in the input schema. It fails to distinguish when to use this tool versus sibling dashboard tools like `get_merkle_claimed_rewards`.
limit_orders_controller_create_order (Grade: A)
Create limit order — Submit a signed limit order to the Pendle order book. Once created, the order is visible to takers and will be filled against incoming swaps at or better than the specified implied APY. Before calling this, generate the order payload via Generate limit order data, sign it with your wallet, and include the signature in the request body. Common rejection reasons: the YT or market is not whitelisted for limit orders, the order has already expired, or the signature is invalid. Returns: { id: string, sig
| Name | Required | Description | Default |
|---|---|---|---|
| yt | Yes | YT address | |
| salt | Yes | BigInt string of salt | |
| type | Yes | LimitOrderType { 0 : TOKEN_FOR_PT, 1 : PT_FOR_TOKEN, 2 : TOKEN_FOR_YT, 3 : YT_FOR_TOKEN } | |
| maker | Yes | Maker address | |
| nonce | Yes | BigInt string of nonce | |
| token | Yes | Token used by user to make order | |
| expiry | Yes | BigInt string of expiry | |
| permit | Yes | Bytes string for permit | |
| chainId | Yes | Chain Id | |
| receiver | Yes | Receiver address | |
| signature | Yes | Signature of order, signed by maker | |
| failSafeRate | Yes | BigInt string of failSafeRate | |
| makingAmount | Yes | BigInt string of making amount | |
| lnImpliedRate | Yes | BigInt string of lnImpliedRate | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses filling behavior (against incoming swaps at/better than APY), visibility to takers, and specific failure modes. Minor gap: doesn't mention gas costs, idempotency, or blockchain finality semantics typical for such operations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with logical flow: action → behavior → prerequisites → failure modes → return value. Front-loaded with purpose. Minor deduction: return value description is truncated ('Returns: { id: string, sig'), though the intent to document output is clear.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complex operation (14 required params, blockchain interaction, no output schema). Description adequately covers the critical path: prerequisites, validation failures, and partial return structure. Slight gap in incomplete return documentation, but sufficient for operational use given the prerequisite workflow is fully specified.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). Description adds crucial workflow context explaining that `signature` must be wallet-signed and references the generation step for payload construction. Mentions 'implied APY' giving semantic context to `lnImpliedRate` beyond the schema's 'BigInt string' typing.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb+resource ('Submit a signed limit order to the Pendle order book'), clearly distinguishes from sibling `generate_limit_order_data` by explicitly naming it as a required prerequisite step, and explains the market mechanics (visible to takers, filling at specified APY).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent workflow guidance: explicitly states when to use ('Before calling this, generate...'), lists the exact prerequisite tool with hyperlink reference, mandates signature generation with wallet, and documents common rejection reasons (whitelist, expiry, invalid signature) to guide error handling.
limit_orders_controller_generate_limit_order_data (Grade: A)
Generate limit order data for signing — Generate the EIP-712 typed data payload for a limit order. Sign the returned data with the maker's private key, then submit the order via Create limit order. The generated order specifies the YT address, direction (long or short yield), size, and implied APY target. The order remains valid until it is either fully filled, cancelled, or expired. Returns: { chainId: number, YT: string, salt: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| YT | Yes | YT address | |
| maker | Yes | Maker address | |
| token | Yes | Input token if type is TOKEN_FOR_PT or TOKEN_FOR_YT, output token otherwise | |
| expiry | Yes | Timestamp of order's expiry, in seconds | |
| chainId | Yes | Chain Id | |
| orderType | Yes | LimitOrderType { 0 : TOKEN_FOR_PT, 1 : PT_FOR_TOKEN, 2 : TOKEN_FOR_YT, 3 : YT_FOR_TOKEN } | |
| impliedApy | Yes | Implied APY of this limit order | |
| makingAmount | Yes | BigInt string of making amount, the amount of token if the order is TOKEN_FOR_PT or TOKEN_FOR_YT, otherwise the amount of PT or YT | |
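The generate → sign → submit workflow the description prescribes can be sketched as follows. `call_tool` and `sign_typed_data` are hypothetical stand-ins for an MCP client invocation and a wallet's EIP-712 signer; neither is a real API exposed by this server:

```python
# Hypothetical sketch of the documented limit-order workflow.
# 'call_tool' stands in for an MCP client; 'sign_typed_data' for a
# wallet signer. Only the two tool names come from this server.
def place_limit_order(call_tool, sign_typed_data, params):
    # Step 1: server builds the EIP-712 typed-data payload.
    order = call_tool("limit_orders_controller_generate_limit_order_data", params)
    # Step 2: the maker signs the payload off-chain.
    signature = sign_typed_data(order)
    # Step 3: submit the signed order to the order book.
    return call_tool("limit_orders_controller_create_order",
                     {**order, "signature": signature})
```

The order of the three steps matters: the signature submitted to `create_order` must cover exactly the payload returned by the generate step.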
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses critical security context (requires signing with maker's private key) and order lifecycle (valid until filled/cancelled/expired). Could clarify whether this operation is stateless/off-chain or creates server-side records.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with em-dash subtitle, logical flow from generation to signing to submission, followed by parameter semantics and return type. Every sentence earns its place; no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given high complexity (DeFi, EIP-712) and lack of output schema/annotations, description adequately covers return structure (chainId, YT, salt, etc.) and workflow integration. Completeness gap is minor: missing distinction from the 'scaled' variant sibling.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3), but description adds valuable semantic mapping: 'size' maps to makingAmount, 'direction (long or short yield)' maps to orderType enum (0-3), and 'YT address' confirms YT parameter purpose. This aids agent understanding beyond raw schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: states it generates 'EIP-712 typed data payload for a limit order' and distinguishes itself from the submission sibling (limit_orders_controller_create_order) by clarifying this tool only prepares data for signing, not submission.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit multi-step workflow (generate → sign with private key → submit via Create limit order). Clearly references the correct sibling for submission. Lacks explicit guidance on when to use vs limit_orders_controller_generate_scaled_limit_order_data (the scaled variant).
limit_orders_controller_generate_scaled_limit_order_data (Grade: A)
Generate list of limit orders (scaled) for signing — Generate a batch of limit orders spread across a price (implied APY) range for signing. Scaled orders distribute your total size across multiple price levels, providing better market depth than a single large order. Sign each generated order individually, then submit them via Create limit order. Useful for market-making strategies. Returns: { orders: { chainId: number, YT: string, salt: string, expiry: string, nonce: string, token: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| YT | Yes | YT address | |
| maker | Yes | Maker address | |
| token | Yes | Input token if type is TOKEN_FOR_PT or TOKEN_FOR_YT, output token otherwise | |
| expiry | Yes | Timestamp of order's expiry, in seconds | |
| chainId | Yes | Chain Id | |
| orderType | Yes | LimitOrderType { 0 : TOKEN_FOR_PT, 1 : PT_FOR_TOKEN, 2 : TOKEN_FOR_YT, 3 : YT_FOR_TOKEN } | |
| orderCount | Yes | Number of orders to generate in the scaled batch | |
| makingAmount | Yes | BigInt string of making amount, the amount of token if the order is TOKEN_FOR_PT or TOKEN_FOR_YT, otherwise the amount of PT or YT | |
| lowerImpliedApy | Yes | Lower implied APY of this scaled order | |
| upperImpliedApy | Yes | Upper implied APY of this scaled order | |
| sizeDistribution | Yes | Scaled Order Distribution Type { } | |
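The scaled variant signs each generated order individually before submission, as the description notes. A sketch under the same assumptions as the single-order flow (`call_tool` and `sign_typed_data` are hypothetical stand-ins, not APIs from this server):

```python
# Sketch of the scaled workflow: one generate call yields a batch of
# orders; each order is signed individually and submitted via the
# create_order tool. Stand-in helpers are hypothetical.
def place_scaled_orders(call_tool, sign_typed_data, params):
    batch = call_tool("limit_orders_controller_generate_scaled_limit_order_data", params)
    submitted = []
    for order in batch["orders"]:
        signature = sign_typed_data(order)  # each order signed separately
        submitted.append(call_tool("limit_orders_controller_create_order",
                                   {**order, "signature": signature}))
    return submitted
```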
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses the return structure explicitly ({ orders: { chainId, YT, salt... }[] }), explains the non-destructive generation behavior, and clarifies the scaling mechanics. It could improve by mentioning idempotency or rate limits, but covers the essential behavioral context well.
Is the description appropriately sized, front-loaded, and free of redundancy?
The opening phrase repeats the tool name/action, but the rest is efficiently structured: purpose explanation, workflow steps, use case, and return format. Each sentence serves a distinct function and the description is appropriately sized for a complex 11-parameter tool.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 11 parameters and no output schema or annotations, the description is comprehensive: it details the input purpose, the signing workflow, the specific market-making use case, and explicitly documents the return JSON structure, fully compensating for missing structured metadata.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds conceptual context about how parameters interact (e.g., 'spread across a price range' connects lower/upperImpliedApy, 'distribute your total size' relates to makingAmount and sizeDistribution) but does not add syntactic details beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool generates 'a batch of limit orders spread across a price (implied APY) range' and distinguishes itself from the sibling 'generate_limit_order_data' by emphasizing the 'scaled' nature (distributing total size across multiple price levels vs a single large order), clearly defining its specific scope.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit workflow guidance: generate data here, then 'Sign each generated order individually, then submit them via [Create limit order]' (referencing the sibling create_order tool). It also specifies the strategic use case: 'Useful for market-making strategies,' clearly delineating when to use this scaled approach.
limit_orders_controller_get_all_archived_limit_orders (Grade: B)
Get all archived limit orders for analytics — This have the same interface and usecase as the endpoint above, but it returns the archived orders When an order is not fillable anymore, we might archive it to save storage space, to fetch it, use this endpoint. So to fetch full limit orders in history, using this and the endpoint above. Not all orders are archived, it depends on some conditions. Returns: { total: number, limit: number, results: { id: string, signature: string, chainId: number, salt: string, expiry: string, nonce: string, ... }[], ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| yt | No | Market address to filter orders by | |
| limit | No | Maximum number of results to return. The parameter is capped at 1000. | |
| maker | No | Maker address to filter orders by | |
| chainId | No | Chain id to filter by, leave blank to fetch all chains. | |
| resumeToken | No | Resume token for pagination | |
| timestamp_end | No | query parameter: timestamp_end (string) | |
| timestamp_start | No | query parameter: timestamp_start (string) | |
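Pagination via `resumeToken` can be sketched as below. That the response echoes a `resumeToken` field until the result set is exhausted is an assumption, since the output schema is only partially documented:

```python
# Sketch of cursor pagination over archived orders. The assumption
# that each page carries a 'resumeToken' until exhaustion is inferred,
# not documented; only total/limit/results appear in the return hint.
def fetch_all_archived(call_tool, **filters):
    orders, token = [], None
    while True:
        params = {**filters, "limit": 1000}  # limit is capped at 1000
        if token:
            params["resumeToken"] = token
        page = call_tool("limit_orders_controller_get_all_archived_limit_orders", params)
        orders.extend(page["results"])
        token = page.get("resumeToken")
        if not token:
            break
    return orders
```

For full order history, the results would be merged with those of the non-archived sibling endpoint, as the description advises.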
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully explains the archiving logic (orders archived to 'save storage space', conditional on 'some conditions') and includes return structure details. However, it lacks explicit confirmation of read-only/safe behavior despite using 'Get', and omits rate limit or pagination behavior details beyond the raw return shape.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from grammatical errors ('This have'), run-on sentences ('When an order... use this endpoint'), and fragments ('So to fetch... using this'). The inclusion of a raw JSON-like return structure dump ('Returns: { total: number... }') is verbose and poorly formatted, reducing readability.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex domain (archived orders, pagination via resumeToken) and lack of annotations, the description adequately explains the archiving concept and conditional nature of archival. However, the messy presentation of return values and lack of explicit safety indicators leave gaps in contextual completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds value by stating parameters have the 'same interface... as the endpoint above,' which helps agents understand parameter semantics relative to the sibling tool, but does not elaborate on individual parameter behaviors or validation rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('archived limit orders') and adds context ('for analytics'). It effectively distinguishes from siblings by explaining what 'archived' means and referencing 'the endpoint above' (likely `limit_orders_controller_get_all_limit_orders`) to contrast active vs archived orders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance on when to use this tool (when orders are 'not fillable anymore' and potentially archived) and explicitly recommends combining it with the sibling endpoint 'above' to fetch 'full limit orders in history.' This establishes the relationship between the two query tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
limit_orders_controller_get_all_limit_ordersAInspect
Get all limit orders for analytics — This endpoint is for analytics purpose; if you want to analyze the limit orders data, this endpoint returns all the orders that have been made, including the ones that have been cancelled or fully filled. The results could be very large, so each time we return at most 1000 orders; you can use the resumeToken to fetch the next page. To get limit orders for filling, use the Get limit orders to match by YT address endpoint! Returns: { total: number, limit: number, results: { id: string, signature: st
| Name | Required | Description | Default |
|---|---|---|---|
| yt | No | Market address to filter orders by | |
| limit | No | Maximum number of results to return. The parameter is capped at 1000. | |
| maker | No | Maker address to filter orders by | |
| chainId | No | Chain id to filter by, leave blank to fetch all chains. | |
| resumeToken | No | Resume token for pagination | |
| timestamp_end | No | query parameter: timestamp_end (string) | |
| timestamp_start | No | query parameter: timestamp_start (string) |
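The 1000-order cap and resumeToken cursor described above imply a standard pagination loop: keep requesting pages until the response no longer carries a cursor. A sketch against a stubbed page fetcher (the exact response shape beyond `results` and `resumeToken` is an assumption):

```python
def fetch_all_orders(fetch_page, page_size=1000):
    """Drain a resumeToken-paginated endpoint.

    fetch_page(limit, resume_token) is a hypothetical wrapper around
    the analytics endpoint; it is assumed to return a dict holding
    'results' and, while more pages remain, a 'resumeToken' cursor.
    """
    orders, token = [], None
    while True:
        page = fetch_page(limit=page_size, resume_token=token)
        orders.extend(page["results"])
        token = page.get("resumeToken")
        if not token:  # no cursor means the last page was reached
            return orders
```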
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Strong disclosure without annotations: explains pagination (1000 order cap, resumeToken usage), warns about large result sets, and clarifies data completeness (includes cancelled/fully filled). Minor gap: no mention of authentication requirements or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded purpose is good, but contains redundancy ('for analytics purpose' vs 'analyze the limit orders data'). The trailing 'Returns: { total: number...' appears to be truncated inline JSON schema dump, which is messy and incomplete.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Rich behavioral context compensates for missing annotations and output schema. Covers pagination mechanics, data lifecycle (cancelled/filled inclusion), and sibling differentiation. Incomplete return value documentation (cut-off JSON) is the only significant gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage, establishing baseline. Description adds valuable context for 'resumeToken' (pagination flow) and 'limit' (1000 cap), but does not elaborate syntax/semantics for timestamp filters or address formats beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent clarity: specific verb 'Get', resource 'limit orders', scope 'for analytics' including cancelled/filled orders, and explicit differentiation from sibling 'limit_orders_controller_get_taker_limit_orders' via the warning to use that endpoint instead for filling orders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit when-to-use ('for analytics purpose'), when-not-to-use (contrast with filling orders), and named alternative endpoint '[Get limit orders to match by YT address]' (the taker endpoint). Clear guidance prevents misselection between analytics and trading workflows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
limit_orders_controller_get_limit_order_book_v2AInspect
Get order book v2 — Returns the consolidated order book for a market, aggregating both limit orders and AMM liquidity depth into a unified view. The book is split into two sides: - longYieldEntries: available liquidity for buying YT (long yield positions), ordered by ascending implied APY - shortYieldEntries: available liquidity for selling YT (short yield positions), ordered by descending implied APY Each entry shows the implied APY price level, combined limit order size, and AMM depth at that level. Returns: { longYieldEntries: { impliedApy: number, limitOrderSize: number, ammSize: n
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return. The parameter is capped at 200. | |
| market | Yes | Market address | |
| chainId | Yes | path parameter: chainId (number) | |
| includeAmm | No | Include AMM orders in the order book | |
| precisionDecimal | Yes | Min: 0, Max: 3, returned impliedApy will have precision up to 10^{-precisionDecimal}% |
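The `precisionDecimal` parameter (0 to 3) controls how finely implied-APY price levels are reported; entries falling on the same rounded level would appear as one aggregated row in the book. A sketch of that rounding, assuming APY is expressed in percent (the aggregation detail is an inference from the description, not server-confirmed):

```python
def bucket_implied_apy(apy_percent, precision_decimal):
    """Round an implied APY (in %) to 10^-precision_decimal granularity,
    mirroring the documented 0..3 range of precisionDecimal."""
    if not 0 <= precision_decimal <= 3:
        raise ValueError("precisionDecimal must be between 0 and 3")
    return round(apy_percent, precision_decimal)
```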
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses aggregation behavior (combines limit orders + AMM), sorting criteria (implied APY ordering), and return structure (the two entry types with their fields). Missing only operational details like rate limits or caching.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with action verb and version identifier. Markdown formatting effectively structures the two-sided book explanation. However, the return value documentation is truncated mid-schema ('ammSize: n'), slightly undermining structural completeness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists and this is a complex financial tool (5 params, yield trading domain), the description adequately covers domain concepts (YT positions, implied APY) and attempts to document return value structure despite truncation. Sufficient for agent to construct valid calls and interpret responses.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description does not explicitly detail individual parameters, though it implicitly clarifies that precisionDecimal affects the impliedApy decimal places in output. No additional syntax or format details added beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: declares it returns a 'consolidated order book' aggregating 'limit orders and AMM liquidity depth' — clearly distinguishes from siblings like get_all_limit_orders (individual orders) by emphasizing the unified market depth view. Specifies domain (YT tokens) and ordering logic (ascending/descending implied APY).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage context by explaining what each side represents (buying vs selling YT) and how entries are ordered, which helps interpret results. However, lacks explicit guidance on when to use this v2 versus other limit order getters, or when to set includeAmm=false.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
limit_orders_controller_get_maker_limit_orderAInspect
Get user limit orders in market — Returns the active and historical limit orders placed by a specific user in a market. Supports richer filtering than the analytics endpoint (e.g. by YT, status, market). A user can have at most 50 open orders per market, so pagination is typically unnecessary here. For full cross-user analytics with cursor-based pagination, use Get all limit orders instead. Returns: { total: number, limit: number, skip: number, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| yt | No | Order's YT address | |
| skip | No | Number of results to skip. The parameter is capped at 1000. | |
| type | No | LimitOrderType { 0 : TOKEN_FOR_PT, 1 : PT_FOR_TOKEN, 2 : TOKEN_FOR_YT, 3 : YT_FOR_TOKEN } | |
| limit | No | Maximum number of results to return. The parameter is capped at 100. | |
| maker | Yes | Maker's address | |
| chainId | Yes | ChainId | |
| isActive | No | Set isActive=true for the maker's active orders, isActive=false for inactive ones; leave isActive unset to fetch all of the maker's orders |
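The `type` parameter encodes order direction as an integer per the mapping in the table above. A small enum plus a query builder makes call sites self-documenting; the builder follows the documented rule that optional filters are simply omitted (the function and its name are illustrative, not part of the API):

```python
from enum import IntEnum

class LimitOrderType(IntEnum):
    """Integer codes for the 'type' query parameter, as documented."""
    TOKEN_FOR_PT = 0
    PT_FOR_TOKEN = 1
    TOKEN_FOR_YT = 2
    YT_FOR_TOKEN = 3

def maker_order_params(maker, chain_id, order_type=None, is_active=None):
    """Build the query dict for the maker endpoint; optional filters
    are left out entirely (an unset isActive fetches all orders)."""
    params = {"maker": maker, "chainId": chain_id}
    if order_type is not None:
        params["type"] = int(order_type)
    if is_active is not None:
        params["isActive"] = is_active
    return params
```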
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden and succeeds well: discloses return structure '{ total: number, limit: number, skip: number, ... }', explains practical pagination behavior (50 open order limit), and clarifies scope includes both active and historical orders. Missing only safety/auth details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured: front-loaded purpose, followed by differentiation, constraint explanation (50 orders), explicit alternative reference, and return format. Five sentences, zero waste, logical flow from what-it-does to when-to-use-something-else.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong for a complex DeFi/trading domain with 7 parameters and no output schema: compensates for missing structured return type by documenting response shape, explains pagination caps, and covers filtering scope. Would be 5 with explicit mention of required auth or rate limit notes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description adds semantic context by referencing filtering examples 'by YT, status, market' which maps to the yt/isActive parameters, but does not elaborate on parameter syntax beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Get user limit orders in market' provides clear verb+resource+scope, and it explicitly distinguishes from sibling 'Get all limit orders' (cross-user analytics) and implies distinction from 'get_taker_limit_orders' by specifying 'placed by a specific user' (maker side).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Strong guidance: explicitly names alternative tool '[Get all limit orders]' for cross-user analytics, notes pagination is 'typically unnecessary' due to 50-order limit, and contrasts itself with 'the analytics endpoint' (likely referring to get_all_limit_orders).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
limit_orders_controller_get_taker_limit_ordersAInspect
Get limit orders to match by YT address — Returns the best-matching active limit orders for a given YT address, sorted by implied APY for efficient taker fill selection. The response includes full order structs and maker signatures ready to pass directly to the Pendle limit order contract for on-chain settlement. Only active orders (unfilled, uncancelled, unexpired) are returned. For analytics and full order history (including filled/cancelled), use Get all limit orders instead. Returns: { total: number, limit: number, skip: number, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| yt | Yes | Order's YT address | |
| skip | No | Number of results to skip. The parameter is capped at 1000. | |
| type | Yes | LimitOrderType { 0 : TOKEN_FOR_PT, 1 : PT_FOR_TOKEN, 2 : TOKEN_FOR_YT, 3 : YT_FOR_TOKEN } | |
| limit | No | Maximum number of results to return. The parameter is capped at 100. | |
| sortBy | No | query parameter: sortBy ("Implied Rate") | |
| chainId | Yes | ChainId | |
| sortOrder | No | query parameter: sortOrder ("asc" | "desc") |
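Because the endpoint returns active orders already sorted by implied APY for taker fill selection, a taker typically walks the list greedily until the desired size is covered. A hedged sketch (the field names `impliedApy` and `size` are assumptions about the order struct, which this page does not fully document):

```python
def select_fills(orders, target_size):
    """Greedily pick best-priced orders (assumed pre-sorted by implied
    APY) until target_size is covered; returns (picked, unfilled)."""
    picked, remaining = [], target_size
    for order in orders:
        if remaining <= 0:
            break
        take = min(order["size"], remaining)  # partial fill of last order
        picked.append({"id": order["id"], "fill": take})
        remaining -= take
    return picked, remaining
```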
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden and succeeds by stating 'Only active orders (unfilled, uncancelled, unexpired) are returned', the sorting behavior 'sorted by implied APY', and that responses include 'maker signatures ready to pass directly to the Pendle limit order contract for on-chain settlement', revealing the output's consumable format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description comprises four distinct sentences covering purpose, response content, filtering constraints, and alternative routing, plus a brief return structure hint. Each sentence earns its place by conveying distinct actionable information, though the final 'Returns:' fragment is slightly informal compared to formal output schema definitions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% input schema coverage and lack of output schema, the description adequately compensates by detailing the return structure implicitly ('full order structs', pagination fields) and explaining the active-order filtering logic. It sufficiently covers the 7-parameter complexity for a domain-specific DeFi matching operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description references 'YT address' which aligns with the 'yt' parameter and mentions 'implied APY' suggesting the sortBy behavior, but does not significantly elaborate on parameter relationships, formats, or validation beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'Get', the resource 'limit orders', and the scope 'to match by YT address'. It explicitly distinguishes itself from the sibling tool limit_orders_controller_get_all_limit_orders by stating 'For analytics and full order history... use [Get all limit orders] instead', clearly defining its specific taker-matching purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit guidance on when NOT to use the tool ('For analytics and full order history... use... instead'), directly naming the alternative. It also implies the primary use case through 'efficient taker fill selection' and 'ready to pass directly to... contract', clarifying this is for execution rather than historical analysis.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
markets_controller_market_data_v2AInspect
Get latest/historical market data by address — Returns the latest or historical data snapshot for a given market. Pass a timestamp query param (Unix seconds) to retrieve the market state at a specific point in time. Omit it for the current snapshot. For time-series data (charts/analytics), prefer the historical-data endpoint. Returns: { timestamp: string, liquidity: { usd: number, acc: number }, tradingVolume: { usd: number, acc: number }, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | path parameter: address (string) | |
| chainId | Yes | path parameter: chainId (number) | |
| timestamp | No | query parameter: timestamp (string) |
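Since the timestamp query parameter expects Unix seconds, client code needs an explicit conversion from wall-clock time, and should omit the parameter entirely to get the latest snapshot. A sketch (the helper name is illustrative; parameter names follow the table above):

```python
from datetime import datetime, timezone

def snapshot_params(address, chain_id, at=None):
    """Build query params for a market-data snapshot. 'at' is an
    optional timezone-aware datetime converted to Unix seconds;
    omitting it requests the current snapshot."""
    params = {"address": address, "chainId": chain_id}
    if at is not None:
        params["timestamp"] = str(int(at.timestamp()))
    return params
```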
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully explains the snapshot behavior (current vs. historical), documents the return structure with field types ({ timestamp: string, liquidity: {...}, tradingVolume: {...} }), and clarifies timestamp format (Unix seconds). Could only improve by mentioning rate limits or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear front-loading (purpose in first clause), logical flow (usage → alternative → returns), and zero redundant sentences. Slightly dense but efficient given the information density covering parameter semantics, sibling differentiation, and return schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Exceptionally complete given constraints: no annotations and no structured output schema, yet the description fully compensates by inline-documenting the return object structure, explaining all three parameters' purpose, and providing sibling tool guidance necessary for correct tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (baseline 3). The description adds significant semantic value by specifying that timestamp uses 'Unix seconds' format and explaining the omission behavior for 'current snapshot'—context not present in the schema's mechanical 'query parameter: timestamp (string)' description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'latest/historical market data by address' using specific verbs (Get/Returns) and resource type (market data snapshot). It explicitly distinguishes from the sibling 'historical-data' endpoint by stating this is for snapshots while the sibling is for 'time-series data (charts/analytics)'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use the optional timestamp parameter ('Pass a timestamp...to retrieve the market state at a specific point in time. Omit it for the current snapshot') and provides clear alternative guidance ('For time-series data...prefer the [historical-data] endpoint'), directly referencing the sibling tool by functional name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
markets_controller_market_historical_data_v2BInspect
Get market time-series data by address — Returns the time-series data for a given market. Useful to draw charts or do data analysis. This endpoint supports field selection via the fields query parameter. Table below shows the available fields and their descriptions. | Field | Description | |-------|-------------| | timestamp | Timestamp in ISO format| | baseApy | APY including yield, swap fee and Pendle rewards without boosting| | impliedApy | Implied APY of market| | lastEpochVotes | Last epoch votes| | lpPrice | LP price in USD| | lpRewardApy | APY from LP reward tokens| | maxApy | APY whe
| Name | Required | Description | Default |
|---|---|---|---|
| fields | No | Comma-separated list of fields to include in the response. Use `all` to include all fields. Available fields could be found in the table above. Although you could use `all` to include all fields, it is not recommended because the bigger the payload is, the slower the response will be. | |
| address | Yes | path parameter: address (string) | |
| chainId | Yes | path parameter: chainId (number) | |
| time_frame | No | query parameter: time_frame ("hour" | "day" | "week") | |
| timestamp_end | No | query parameter: timestamp_end (string) | |
| timestamp_start | No | query parameter: timestamp_start (string) | |
| includeFeeBreakdown | No | Whether to fetch fee breakdown data. Default is false. If enabled, the response will include 3 extra fields (explicitSwapFee, implicitSwapFee, limitOrderFee) and the computing unit cost will be doubled. Fee breakdown is only available for daily and weekly timeframes. |
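The `fields` parameter takes a comma-separated list (with `all` discouraged for payload size), and the fee-breakdown flag is only valid for daily and weekly timeframes. A small builder that encodes both documented rules (the function itself is illustrative, not part of the API):

```python
def timeseries_params(address, chain_id, fields, time_frame="day",
                      include_fee_breakdown=False):
    """Build query params for market time-series data. Enforces the
    documented rule that fee breakdown applies only to day/week
    timeframes, and joins explicit fields instead of using 'all'."""
    if include_fee_breakdown and time_frame not in ("day", "week"):
        raise ValueError("fee breakdown requires a day or week timeframe")
    return {
        "address": address,
        "chainId": chain_id,
        "time_frame": time_frame,
        "fields": ",".join(fields),  # prefer explicit fields over 'all'
        "includeFeeBreakdown": include_fee_breakdown,
    }
```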
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It adds valuable performance context warning that using 'all' fields creates slower responses due to payload size. However, lacks disclosure on rate limits, caching, or explicit read-only nature despite being implied by 'Get'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Structure is logical (purpose → use case → features → field reference) and front-loaded. However, contains slight redundancy ('Get... Returns') and the markdown table is abruptly truncated mid-cell ('APY whe'), suggesting poor formatting or length management.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Attempts to compensate for missing output schema by embedding a field reference table directly in the description, which is appropriate. However, the table is incomplete in the provided text, and the description does not clarify return value structure beyond the field list.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description adds meaningful strategy guidance beyond the schema: explaining the field selection concept, documenting the available fields in an inline table (though truncated), and warning against using 'all' fields for performance reasons.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with specific verb+resource+identifier ('Get market time-series data by address') and clarifies the output is for 'charts or data analysis.' However, it does not explicitly differentiate from sibling 'markets_controller_market_data_v2' (likely current vs. historical data), though 'time-series' provides implicit distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context ('Useful to draw charts'), but lacks explicit when-to-use guidance versus alternatives like the non-historical market data endpoint. No prerequisites or exclusion criteria mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
markets_cross_chain_controller_get_all_marketsAInspect
Get all markets (cross-chain) — Returns the complete list of whitelisted Pendle markets across all supported chains with their metadata and current data. Filter by chainId, isActive, or ids (comma-separated market IDs in chainId-address format). This is the recommended starting point for discovering and monitoring Pendle markets across all chains. For chain-scoped data with additional fields, use Get markets instead. Returns: { markets: { name: string, address: string, expiry: string, pt: string, yt: string, sy: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| ids | No | Market ids to fetch metadata for (comma-separated), leave blank to fetch all markets. Up to 20 ids allowed. | |
| chainId | No | Filter to markets on a specific blockchain network | |
| isActive | No | Filter to active or inactive markets |
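The `ids` filter expects comma-separated market IDs in the chainId-address format described above, capped at 20 per call. A helper that composes and validates that value (hypothetical, built directly from the documented format):

```python
def market_ids_param(markets):
    """markets: iterable of (chain_id, address) pairs. Returns the
    comma-separated 'ids' value in the documented chainId-address
    format, enforcing the 20-id cap."""
    ids = [f"{chain_id}-{address}" for chain_id, address in markets]
    if len(ids) > 20:
        raise ValueError("at most 20 market ids are allowed")
    return ",".join(ids)
```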
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description compensates well by disclosing 'whitelisted' scope (not all markets), and crucially documents return structure '{ markets: { name: string, address... } }' since no output schema exists. Only gap is lack of rate limit or pagination details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well front-loaded with purpose, followed by filtering capabilities, usage recommendation, alternative reference, and return structure. Zero wasted words; every clause provides distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong completeness for a 3-parameter tool. Compensates for missing output schema by documenting return shape. Mentions all three filter parameters. Minor gap: doesn't clarify behavior when all filters omitted (returns all markets implied but not explicit).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3), but the description adds crucial syntax detail for the 'ids' parameter: 'chainId-address format' and 'comma-separated', which clarifies the composition pattern not explicitly stated in the schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent: specific verb 'Get', resource 'markets', scope 'cross-chain' and 'all supported chains', plus explicit distinguishing from chain-scoped alternative '[Get markets]' which maps to siblings like markets_controller_market_data_v2.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit usage guidance: states this is the 'recommended starting point for discovering and monitoring' and directly references the alternative tool for 'chain-scoped data with additional fields', providing clear when-to-use vs when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
markets_cross_chain_controller_get_points_marketsAInspect
Get points markets — Returns all active markets that have a points reward programme — markets where trading or providing liquidity earns points from the underlying protocol. The response includes the points configuration for each market (point name, reward rate, etc.) and the associated market ID. To fetch full market metadata for these markets, call Get all markets and filter by the returned IDs. Returns: { markets: { id: string, points: unknown[] }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| chainId | No | Filter to markets on a specific blockchain network | |
| isActive | No | Filter to active or inactive markets | |
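The chaining the description recommends (fetch points markets, then filter the full market list by the returned ids) can be sketched with in-memory sample data. The payload values below are invented; only the `{ markets: { id, points }[] }` shape comes from the documented return:

```python
def markets_with_points(points_resp, all_markets_resp):
    """Join the ids from Get points markets against the full market list."""
    point_ids = {m["id"] for m in points_resp["markets"]}
    return [m for m in all_markets_resp["markets"] if m["id"] in point_ids]

# Sample payloads shaped like the documented returns (values invented).
points_resp = {"markets": [{"id": "1-0xaaa", "points": []}]}
all_markets_resp = {"markets": [
    {"id": "1-0xaaa", "name": "Market A"},
    {"id": "1-0xbbb", "name": "Market B"},
]}
```

With these samples, `markets_with_points` keeps only "Market A".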
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and documents the return structure explicitly ('Returns: { markets: { id: string, points: unknown[] }[] }'). It clarifies what data is returned (points configuration, reward rates) but could explicitly state this is read-only/safe.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four dense sentences covering purpose, scope, usage guidelines, and return structure. Front-loaded with the action and resource. Minor verbosity with em-dash repetitions ('Get points markets — Returns...'), but every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a filtered retrieval tool: manually documents return JSON structure (compensating for lack of output schema), explains scope (points programmes only), and references sibling relationship. Only lacking auth or rate limit details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both chainId and isActive fully documented in schema). The description mentions 'active markets' which loosely maps to isActive parameter, but adds no syntax or format details beyond the schema. Baseline 3 appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'points markets' (markets with points reward programmes) and distinguishes this from sibling tools like 'get_all_markets' by specifying the points-specific scope. Uses specific verb+resource pattern ('Get points markets').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly directs users to the sibling tool 'Get all markets' (markets_cross_chain_controller_get_all_markets) for 'full market metadata,' establishing clear when-to-use guidance. This demonstrates proper tool chaining guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pendle_emission_controller_pendle_emissionBInspect
Get Pendle Emission — Returns the latest confirmed PENDLE emission across all eligible markets. Each market includes a breakdown of emission by TVL, fee, discretionary, and co-bribing components. Returns: { markets: { chainId: number, address: string, totalIncentive: number, tvlIncentive: number, feeIncentive: number, discretionaryIncentive: number, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
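A response shaped like the documented return can be aggregated locally, for example to total emission across markets. The sample values are invented, and whether `totalIncentive` equals the sum of the per-component fields is an assumption, not something the description states:

```python
def total_emission(resp):
    """Sum totalIncentive across all markets in an emission response."""
    return sum(m["totalIncentive"] for m in resp["markets"])

# Sample shaped like the documented return (numbers invented).
sample = {"markets": [
    {"chainId": 1, "address": "0xaaa", "totalIncentive": 120.0,
     "tvlIncentive": 70.0, "feeIncentive": 30.0, "discretionaryIncentive": 20.0},
    {"chainId": 42161, "address": "0xbbb", "totalIncentive": 80.0,
     "tvlIncentive": 50.0, "feeIncentive": 20.0, "discretionaryIncentive": 10.0},
]}
```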
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It adds valuable context by specifying 'latest confirmed' (temporal freshness) and provides the complete return structure inline (compensating for the missing output schema). However, it fails to explicitly confirm the read-only/safe nature of the operation, mention authentication requirements, or disclose rate limiting behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description efficiently combines purpose and return structure. While slightly dense with the inline JSON return specification, this is appropriate given the lack of a structured output schema. No redundant sentences; every clause adds specificity about the emission components or return format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero input parameters and no output schema, the description adequately compensates by documenting the return structure in text. However, it lacks operational constraints (rate limits, authentication requirements) and error handling guidance, leaving gaps in contextual completeness for an agent determining invocation readiness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool accepts zero parameters (empty properties object with 100% schema coverage). With no parameters to document, the baseline score of 4 applies. The description correctly avoids inventing parameter documentation where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states what the tool retrieves ('latest confirmed PENDLE emission') and specifies the resource scope ('across all eligible markets'). It distinguishes from siblings like markets_controller_market_data_v2 by targeting specific emission/reward data rather than general market data, and details specific breakdown components (TVL, fee, discretionary, co-bribing).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like dashboard_controller_get_merkle_claimed_rewards (which retrieves claimed rewards) or other market data tools. There are no 'when not to use' exclusions or prerequisites mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
prices_controller_ohlcv_v4AInspect
Get PT / YT / LP historical price by address — Historical price data for PT / YT tokens / LP tokens. We do not support historical prices for SY and non-Pendle tokens. The data is OHLCV data, returned in CSV format with open, high, low, close prices, and volume. In the case of LP, volume data will be 0. To get the correct volume, use our Get market time-series data by address endpoint. Returns at most 1440 data points. The cost for the endpoint is based on how many data points are returned. The calculation is: `ceil(number o
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | path parameter: address (string) | |
| chainId | Yes | path parameter: chainId (number) | |
| time_frame | No | Time interval for OHLCV data aggregation. Valid values: `hour`, `day`, `week`. | |
| timestamp_end | No | ISO Date string of the end time you want to query | |
| timestamp_start | No | ISO Date string of the start time you want to query |
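Since the endpoint caps responses at 1440 data points, it can help to estimate how many points a time window will produce before calling it. The per-frame durations and the ceiling rounding below are assumptions; only the 1440 cap and the `hour`/`day`/`week` values come from the definition above:

```python
from datetime import datetime
from math import ceil

# Assumed frame durations in seconds; only the frame names are documented.
FRAME_SECONDS = {"hour": 3600, "day": 86400, "week": 604800}

def estimate_points(start_iso, end_iso, time_frame="hour", cap=1440):
    """Rough upper bound on OHLCV rows returned for a query window."""
    start = datetime.fromisoformat(start_iso)
    end = datetime.fromisoformat(end_iso)
    span = (end - start).total_seconds()
    return min(cap, ceil(span / FRAME_SECONDS[time_frame]))
```

For example, one day at hourly resolution yields 24 points, while a multi-month hourly query hits the 1440 cap.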
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Rich behavioral disclosure despite no annotations: declares CSV return format, OHLCV structure, 1440 data point limit, LP-specific volume quirk (returns 0), and cost calculation model ('ceil'). No contradiction with annotations (none provided).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with minimal waste; initial clause slightly redundant ('Get... by address — Historical price data for...'). Structure is single-paragraph stream, but key constraints (format, limits, cost) are front-loaded before cutoff. Every sentence delivers distinct behavioral or scoping information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without output schema, description fully compensates by detailing CSV structure (open/high/low/close/volume), max return size, and cost implications. Domain-specific caveats (Pendle-only, LP volume exception) provide complete decision context for the 5-parameter tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description adds crucial semantic context that 'address' parameter refers specifically to PT/YT/LP token contracts and implies time parameters define OHLCV windows. Does not reach 5 due to lack of parameter interaction examples or format guidance beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb ('Get') and resource ('PT / YT / LP historical price'), clarifying scope to Pendle-specific token types. Explicitly excludes 'SY and non-Pendle tokens', distinguishing from broader price tools like prices_cross_chain_controller_get_all_asset_prices_by_addresses_cross_chains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when NOT to use (SY/non-Pendle tokens unsupported). Names specific alternative endpoint for LP volume data ('To get the correct volume, use our [Get market time-series data by address]...'), directly guiding selection against sibling markets_controller_market_historical_data_v2.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
prices_cross_chain_controller_get_all_asset_prices_by_addresses_cross_chainsAInspect
Get asset prices (cross-chain) — Returns USD prices for Pendle-supported tokens across all chains. Covers all token types in the Pendle app, including non-Pendle tokens (USDC, WETH, etc.) when they appear as underlying assets. Prices update approximately every minute. Filter by chainId, asset id, or type to narrow results. For real-time PT/YT pre-trade prices that reflect current pool depth, use Get swapping prices instead. Returns: { prices: object, total: number, skip: number, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| ids | No | Token ids to fetch data for (comma-separated), leave blank to fetch all tokens. Up to 20 ids allowed. | |
| skip | No | Number of results to skip. | |
| type | No | Asset types to filter by (comma-separated). Valid values: `PENDLE_LP`, `SY`, `PT`, `YT`. Leave blank to fetch all asset types. | |
| limit | No | Maximum number of results to return. Leave blank to fetch all results. | |
| chainId | No | Chain id to filter by, leave blank to fetch all chains. |
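The `skip`/`limit` pair reads as offset-based pagination ("number of results to skip"). A zero-based page index can be translated into those fields as follows; `page_params` is a hypothetical helper name:

```python
def page_params(page, page_size):
    """Map a zero-based page index to the documented skip/limit fields."""
    if page < 0 or page_size <= 0:
        raise ValueError("page must be >= 0 and page_size > 0")
    return {"skip": page * page_size, "limit": page_size}
```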
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully adds critical behavioral context: price staleness ('update approximately every minute'), scope limitations (non-Pendle tokens only when underlying), and distinguishes between aggregated prices versus real-time pool-depth prices. Lacks mention of rate limits or auth requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured with zero waste: front-loaded purpose ('Get asset prices'), followed by scope clarification, behavioral trait (update frequency), usage guidance (filtering), sibling differentiation, and return format. Each sentence delivers distinct value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description manually documents the return structure ('Returns: { prices: object, total: number, skip: number, ... }'). It comprehensively covers the tool's domain (cross-chain support), pagination behavior, and alternative tools. Minor gaps regarding error handling or rate limiting prevent a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'Filter by `chainId`, asset `id`, or `type` to narrow results', adding semantic context that these parameters serve as filters for reducing result sets, but does not elaborate on parameter syntax or relationships beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb 'Get', resource 'asset prices', and scope 'cross-chain' across 'all chains'. It distinguishes coverage by specifying it includes both Pendle tokens and non-Pendle underlying assets (USDC, WETH, etc.), differentiating it from more specialized sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly identifies when NOT to use this tool versus the sibling SDK endpoint: 'For real-time PT/YT pre-trade prices that reflect current pool depth, use [Get swapping prices]... instead'. This provides clear guidance on selecting alternatives based on latency requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sdk_controller_cancel_all_limit_ordersAInspect
Cancel all limit orders — Generate the transaction payload to cancel all active limit orders for a user in a single on-chain call. This works by incrementing the user's nonce on-chain, which invalidates all previously signed orders at once. More efficient than cancelling orders one by one when clearing all open positions. Returns: { method: string, contractCallParamsName: string[], contractCallParams: unknown[][], ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| chainId | Yes | path parameter: chainId (number) | |
| userAddress | Yes | User Address |
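The nonce-increment mechanism the description explains can be sketched locally: once the user's on-chain nonce rises, orders signed against an older nonce are treated as invalid. The strict `<` comparison is an assumption about how the contract checks nonces, not something the description specifies:

```python
def active_after_cancel_all(orders, new_nonce):
    """Return orders still valid after the user's nonce becomes new_nonce.

    Assumption: an order is invalid iff its signed nonce is below the
    current on-chain nonce, matching the "invalidates all previously
    signed orders at once" behavior described above.
    """
    return [o for o in orders if o["nonce"] >= new_nonce]

orders = [{"hash": "0xold", "nonce": 0}, {"hash": "0xnewer", "nonce": 1}]
```

After a cancel-all that bumps the nonce to 1, only the order signed with nonce 1 survives in this model.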
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Strong behavioral disclosure despite no annotations: explains the on-chain mechanism ('incrementing the user's nonce'), the atomic effect ('invalidates all previously signed orders at once'), and documents the return structure ({ method, contractCallParams... }) even without an output schema. Could enhance with auth/gas implications but covers core behavior well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with front-loaded purpose ('Cancel all limit orders —'). Information density is high with four distinct informative segments (action, mechanism, efficiency comparison, returns). Slightly verbose inline return type documentation, but every sentence provides value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a blockchain transaction tool: covers purpose, mechanism, efficiency rationale, and return payload structure. Absence of annotations is mitigated by descriptive depth. Could note explicit prerequisites (signing requirements) but sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds contextual mapping ('user' implies userAddress, 'on-chain' implies chainId requirement) but does not elaborate on parameter formats, valid chainId ranges, or address validation rules beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent clarity: specifies 'Generate the transaction payload to cancel all active limit orders,' identifies the resource (limit orders), and distinguishes from siblings by emphasizing 'single on-chain call' versus one-by-one cancellation (referencing sdk_controller_cancel_single_limit_order). The mechanism explanation (nonce incrementing) further differentiates the approach.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context: states it's 'More efficient than cancelling orders one by one when clearing all open positions,' which guides the agent toward using this for bulk cancellation. Implicitly references the sibling single-cancel tool but does not explicitly name it or state when NOT to use this tool (e.g., when preserving some orders).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sdk_controller_cancel_single_limit_orderAInspect
Cancel one single limit order by order hash — Generate the transaction payload to cancel a specific limit order on-chain. Pass the full signed order struct (from the original order creation) to identify which order to cancel. The order becomes invalid once the cancellation transaction is confirmed. Returns: { method: string, contractCallParamsName: string[], contractCallParams: unknown[][], ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| YT | Yes | YT address | |
| salt | Yes | BigInt string of salt | |
| maker | Yes | Maker address | |
| nonce | Yes | BigInt string of nonce | |
| token | Yes | Token used by user to make order | |
| expiry | Yes | BigInt string of expiry | |
| permit | Yes | Bytes string for permit | |
| chainId | Yes | path parameter: chainId (number) | |
| receiver | Yes | Receiver address | |
| orderType | Yes | LimitOrderType { 0 : TOKEN_FOR_PT, 1 : PT_FOR_TOKEN, 2 : TOKEN_FOR_YT, 3 : YT_FOR_TOKEN } | |
| userAddress | Yes | User Address | |
| failSafeRate | Yes | BigInt string of failSafeRate | |
| makingAmount | Yes | BigInt string of making amount | |
| lnImpliedRate | Yes | BigInt string of lnImpliedRate (natural logarithm of the implied rate) |
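The `orderType` mapping in the table above translates directly into an enum, which avoids passing raw integers around. The numeric values are copied verbatim from the documented `LimitOrderType` mapping:

```python
from enum import IntEnum

class LimitOrderType(IntEnum):
    """Values copied from the documented orderType mapping."""
    TOKEN_FOR_PT = 0
    PT_FOR_TOKEN = 1
    TOKEN_FOR_YT = 2
    YT_FOR_TOKEN = 3
```

A caller would then send `int(LimitOrderType.TOKEN_FOR_YT)` as the `orderType` field.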
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Strong disclosure that this generates a transaction payload rather than executing on-chain ('Generate the transaction payload...'), and clarifies the state change ('order becomes invalid once the cancellation transaction is confirmed'). Documents return structure inline. Missing: explicit mention of signing requirements or gas implications, though 'transaction payload' implies execution is separate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Compact and information-dense with zero waste. Structure flows logically: action → mechanism → input requirements → state effect → return value. Each sentence serves distinct purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 14-parameter blockchain operation with no output schema. Compensates by inline-documenting the return structure ('Returns: { method... }') and explaining the on-chain effect. Could improve with error conditions or edge cases (e.g., already-cancelled orders).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage, establishing baseline 3. The description adds valuable semantic context that parameters represent 'the full signed order struct (from the original order creation)', explaining the cohesion of the 14 fields and their origin from order creation—information the per-field schema descriptions don't convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent clarity with specific verbs ('Cancel', 'Generate the transaction payload') and resource ('limit order'). The phrase 'one single' effectively distinguishes it from the sibling tool 'sdk_controller_cancel_all_limit_orders', clarifying this targets an individual order versus bulk cancellation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit guidance by stating 'Pass the full signed order struct (from the original order creation)' which hints that the original order data is required. However, it lacks explicit when-to-use guidance comparing this to the 'cancel_all' alternative or prerequisites like transaction signing requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sdk_controller_convertAInspect
Universal convert function — The Convert API is the recommended endpoint for all Pendle transaction building. It supersedes the individual swap, add/remove liquidity, mint, redeem, transfer, and exit endpoints — handling 21 distinct operations through a unified interface. The action is automatically detected from your tokensIn and tokensOut addresses. See the table below for all supported operations. | Action | tokensIn | tokensOut | Note | |-------------------------------------------------------|-----------------------|-----------------|------------------------------------------------
| Name | Required | Description | Default |
|---|---|---|---|
| chainId | Yes | path parameter: chainId (number) | |
| receiver | No | Recipient address for transaction output | |
| slippage | Yes | Maximum slippage tolerance (0-1, where 0.01 equals 1%) | |
| tokensIn | Yes | Input token addresses, separated by commas with no spaces | |
| amountsIn | Yes | Input token amounts in wei, separated by commas with no spaces | |
| needScale | No | Aggregators needScale value, only set to true when amounts are updated onchain. When enabled, please make sure to buffer the amountIn by about 2% | |
| tokensOut | Yes | Output token addresses, separated by commas with no spaces | |
| aggregators | No | List of aggregator names to use for the swap. If not provided, all aggregators will be used. The list of supported aggregators can be found at: [getSupportedAggregators](#tag/sdk/get/v1/sdk/{chainId}/supported-aggregators) | |
| redeemRewards | No | Redeem rewards | |
| useLimitOrder | No | Whether to use limit orders when converting; defaults to true | |
| additionalData | No | Available fields: `impliedApy`, `effectiveApy`. Comma separated list of fields to return. For example: `field1,field2`. More fields will consume more computing units. | |
| enableAggregator | No | Enable swap aggregator to swap between tokens that cannot be natively converted from/to the underlying asset |
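The query-field formats in the table above (comma-joined with no spaces, wei amounts, slippage in [0, 1]) can be assembled with a small helper. `convert_query` is a hypothetical name and only builds the documented fields; the endpoint path itself is not shown here:

```python
def convert_query(tokens_in, amounts_in, tokens_out, slippage):
    """Assemble the documented Convert query fields.

    slippage is a fraction in [0, 1] (0.01 == 1%), per the schema.
    """
    if not 0 <= slippage <= 1:
        raise ValueError("slippage must be in [0, 1], e.g. 0.01 for 1%")
    return {
        "tokensIn": ",".join(tokens_in),                     # no spaces
        "amountsIn": ",".join(str(a) for a in amounts_in),   # wei amounts
        "tokensOut": ",".join(tokens_out),
        "slippage": slippage,
    }
```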
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It explains the auto-detection mechanism (action determined from tokensIn/tokensOut pairs) and mentions 21 supported operations, but lacks critical behavioral details such as whether this generates unsigned transactions, requires specific permissions, rate limits, or what failure modes exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The first portion is well-structured and front-loaded with key information, but the description ends abruptly with an incomplete markdown table header ('| Action | tokensIn | tokensOut | Note |...'), which appears truncated and disrupts the structural integrity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool handling 21 operations with 12 parameters, the description adequately explains the unification concept and references a table of operations (though truncated). However, it lacks comparison to the v3 sibling variant and omits output format details, which would be particularly important given this is a transaction-building endpoint with no output schema provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds significant value by explaining that the action type is automatically inferred from the combination of tokensIn and tokensOut addresses, which is crucial semantic context not present in the schema field descriptions alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly defines this as the 'Universal convert function' and the 'recommended endpoint for all Pendle transaction building,' clearly stating it handles 21 distinct operations including swap, mint, redeem, and liquidity management. It effectively distinguishes itself from specific sibling tools like sdk_controller_redeem_interests_and_rewards by stating it supersedes those individual endpoints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description notes this supersedes specific endpoints (swap, redeem, etc.), it fails to address the sibling sdk_controller_convert_v3, which appears to be an alternative version. It provides no guidance on when to use this version versus the v3 variant, nor explicit exclusion criteria for when specific endpoints are preferred over this unified interface.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sdk_controller_convert_v3AInspect
Universal convert function (recommended — POST body variant) — The Convert API is the recommended endpoint for all Pendle transaction building. It supersedes the individual swap, add/remove liquidity, mint, redeem, transfer, and exit endpoints — handling 21 distinct operations through a unified interface. The action is automatically detected from your tokensIn and tokensOut addresses. See the table below for all supported operations. | Action | tokensIn | tokensOut | Note | |-------------------------------------------------------|-----------------------|-----------------|--------------
| Name | Required | Description | Default |
|---|---|---|---|
| inputs | Yes | List of input tokens and their amounts | |
| chainId | Yes | path parameter: chainId (number) | |
| outputs | Yes | Output token addresses | |
| receiver | No | Recipient address for transaction output | |
| slippage | Yes | Maximum slippage tolerance (0-1, where 0.01 equals 1%) | |
| needScale | No | Aggregators needScale value, only set to true when amounts are updated onchain. When enabled, please make sure to buffer the amountIn by about 2% | |
| aggregators | No | List of aggregator names to use for the swap. If not provided, default aggregators will be used. The list of supported aggregators can be found at: [getSupportedAggregators](#tag/sdk/get/v1/sdk/{chainId}/supported-aggregators) | |
| okxSwapParams | No | okxSwapParams ({ fromTokenReferrerWalletAddress: string, toTokenReferrerWalletAddress: string, feePercent: number, positiveSlippagePercent: number }) | |
| redeemRewards | No | Redeem rewards | |
| useLimitOrder | No | Whether to use limit orders when converting; defaults to true | |
| additionalData | No | Available fields: `impliedApy`, `effectiveApy`. Comma separated list of fields to return. For example: `field1,field2`. More fields will consume more computing units. | |
| enableAggregator | No | Enable swap aggregator to swap between tokens that cannot be natively converted from/to the underlying asset |
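Since this is the POST body variant, the `inputs`/`outputs` fields travel in a JSON body rather than comma-joined query strings. The `{"token", "amount"}` key names inside each input entry are assumptions; the schema only says "list of input tokens and their amounts":

```python
import json

def convert_v3_body(inputs, outputs, slippage, receiver=None):
    """Build the Convert v3 POST body.

    inputs: list of (token_address, amount_wei) pairs. The per-entry
    key names are assumed, not documented.
    """
    body = {
        "inputs": [{"token": t, "amount": str(a)} for t, a in inputs],
        "outputs": outputs,
        "slippage": slippage,
    }
    if receiver is not None:
        body["receiver"] = receiver
    return json.dumps(body)
```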
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It adds valuable context that the action is 'automatically detected from your tokensIn and tokensOut addresses' and mentions 21 operations, but fails to clarify if this tool submits transactions, requires specific permissions, or has side effects like gas estimation vs. execution.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The prose portion is efficiently front-loaded with key differentiators (recommended, supersedes, POST body). However, the inclusion of a truncated markdown table ('| Action | tokensIn...') that cuts off mid-row creates visual clutter and suggests missing information, detracting from the structural quality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (12 parameters including nested objects like okxSwapParams), lack of output schema, and zero annotations, the description provides adequate high-level context but insufficient depth. The 21 operations are mentioned but not enumerated (table truncated), and critical details about error handling, auth, or return values are absent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions tokensIn/tokensOut which helps map to the 'inputs' and 'outputs' parameters, but adds minimal syntax guidance or format details beyond what the schema already provides (e.g., doesn't explain the nested okxSwapParams structure or aggregators format).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly defines this as a 'Universal convert function' for Pendle transaction building, specifying it handles 21 distinct operations (swap, add/remove liquidity, mint, redeem, etc.) through a unified interface. It effectively distinguishes itself from sibling tools like `sdk_controller_swap_pt_cross_chain_v2` and the non-v3 `sdk_controller_convert` by noting it supersedes individual endpoints and is the 'POST body variant'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly labels itself as the 'recommended' endpoint and states it supersedes individual swap, liquidity, mint, and redeem endpoints, providing clear guidance on when to prefer this over siblings. The 'POST body variant' notation helps distinguish from the query-parameter variant. Missing only explicit negative constraints (e.g., 'do not use when...').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sdk_controller_get_market_spot_swapping_priceBInspect
Get real-time PT/YT swap price of a market — Return the price of swapping 1 unit of underlying token to PT/YT, and 1 unit of PT/YT to the underlying token. One unit is defined as 10**decimal. The result is updated every block. Implied APY of the market is also included. Returns: { underlyingToken: string, underlyingTokenToPtRate: object, ptToUnderlyingTokenRate: object, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| market | Yes | path parameter: market (string) | |
| chainId | Yes | path parameter: chainId (number) |
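The description's unit convention can be made concrete: rates are quoted per one unit, defined as 10**decimal of the token, so raw on-chain amounts must be scaled before comparing against a quoted rate. A minimal sketch (helper names are illustrative):

```python
# Sketch of the "one unit" convention the description defines.

def one_unit(decimals: int) -> int:
    """One unit of a token, as a raw on-chain integer amount."""
    return 10 ** decimals

def to_units(raw_amount: int, decimals: int) -> float:
    """Convert a raw on-chain amount to human-readable units."""
    return raw_amount / one_unit(decimals)

# e.g. 1_500_000 raw USDC (6 decimals) is 1.5 units
assert to_units(1_500_000, 6) == 1.5
```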
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Strong disclosure given no annotations: specifies update frequency ('every block'), defines domain unit ('10**decimal'), and documents return structure including implied APY. Acts as substitute for missing output schema. Missing only auth/permission details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with four distinct sentences covering purpose, unit definition, update frequency, and returns. Front-loaded with purpose. Slightly dense single sentence with em-dash could be split for readability, but no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a 2-parameter read operation. Compensates for missing output schema by documenting return fields. Explains domain-specific concepts (unit decimals, PT/YT rates) that prevent misuse.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with type documentation present. Description implies context ('of a market') but adds no specific semantics about parameter format or valid values beyond the schema's 'path parameter' declarations. Baseline 3 appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action 'Get' and resource 'PT/YT swap price' with scope 'of a market'. Explains the calculation mechanism (swapping 1 unit each direction). Could improve by distinguishing from sibling price tools like markets_controller_market_data_v2 or prices_controller_ohlcv_v4.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternative market data endpoints. No prerequisites or conditions mentioned despite multiple sibling tools providing similar market information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sdk_controller_get_market_tokensAInspect
Get supported tokens for market — Returns the two sets of tokens relevant for a given market:
- SY input tokens: tokens accepted by the SY wrapper for minting/redeeming (e.g. USDC for a USDC-based market).
- Zap tokens: tokens that can be used as input when buying PT/YT or providing liquidity, routed via aggregators.
Call this before building a Convert or Swap request to know which input tokens are valid for a given market. Returns: { tokensMintSy: string[], tokensRedeemSy: string[], tokensIn: string[], ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| market | Yes | path parameter: market (string) | |
| chainId | Yes | path parameter: chainId (number) |
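The recommended workflow can be sketched as a client-side check, assuming a response shaped like the "Returns:" hint above; the helper and sample addresses are hypothetical, and the actual HTTP fetch is left out.

```python
# Sketch: validate a candidate input token against this tool's response
# before building a Convert or Swap request.

def is_valid_input(token: str, market_tokens: dict) -> bool:
    """market_tokens mirrors { tokensMintSy, tokensRedeemSy, tokensIn, ... }."""
    token = token.lower()
    valid = {t.lower() for t in market_tokens.get("tokensMintSy", [])}
    valid |= {t.lower() for t in market_tokens.get("tokensIn", [])}
    return token in valid
```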
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It effectively discloses return structure ({ tokensMintSy, tokensRedeemSy, tokensIn... }) and explains what each token set represents functionally (minting SY wrappers vs zap routing for PT/YT purchases). Only minor gap is lack of explicit idempotency or error behavior mention.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured: summary sentence, bullet points explaining token types with examples, usage guidance, and return documentation. Every sentence earns its place. Information-dense without redundancy despite explaining domain concepts (SY wrappers, PT/YT).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully compensates for missing output schema by documenting return object structure and field meanings inline. Explains necessary domain context (SY vs Zap tokens) and distinguishes from siblings. Complete for a 2-parameter read operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear path parameter descriptions. The description mentions 'for a given market' providing minimal semantic context, but does not need to compensate given the schema completeness. Baseline 3 is appropriate per scoring rules for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'supported tokens for market' and specifically distinguishes the two token categories (SY input tokens and Zap tokens). It differentiates from sibling tools like sdk_controller_convert by explaining this is a prerequisite query to determine valid inputs before building conversion or swap requests.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance: 'Call this before building a Convert or Swap request to know which input tokens are valid for a given market.' This establishes clear workflow ordering and implicitly identifies when NOT to use it (don't call this for executing swaps, use the convert/swap tools instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sdk_controller_get_pt_cross_chain_metadataAInspect
PT cross-chain metadata — Returns metadata for a PT that has been bridged to a spoke chain, including maturity, underlying asset details, and fixed-price AMM parameters. Use this to verify that a bridged PT is supported on the current chain and to fetch its pricing parameters before calling Swap PT cross-chain. Returns: { pairedTokensOut: string[], ammAddress: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| pt | Yes | path parameter: pt (string) | |
| chainId | Yes | path parameter: chainId (number) |
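A hedged sketch of the verification step the description recommends, assuming the "Returns:" shape shown above (the helper name is illustrative): gate a cross-chain PT swap on the metadata response before calling Swap PT cross-chain.

```python
# Sketch: check that a bridged PT is supported and the desired output
# token is paired, using this tool's { pairedTokensOut, ammAddress } response.

def can_swap_pt(metadata: dict, token_out: str) -> bool:
    """True if token_out is a valid pair and an AMM exists for this PT."""
    paired = {t.lower() for t in metadata.get("pairedTokensOut", [])}
    return bool(metadata.get("ammAddress")) and token_out.lower() in paired
```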
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations exist, the description carries full burden. It compensates well by documenting the complete return structure: 'Returns: { pairedTokensOut: string[], ammAddress: string }' (crucial given no output schema). However, it lacks disclosure on error cases (e.g., unsupported PT), rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences: first defines purpose with specific fields, second provides workflow guidance, third documents return structure. Front-loaded with resource identification. Minor deduction for the em-dash prefix 'PT cross-chain metadata —' which slightly echoes the tool name without adding new information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete given the tool's moderate complexity (2 simple params, no nested objects). The description adequately covers purpose, workflow integration, and return values (compensating for the missing output schema). The only significant gap is error handling documentation for invalid PT addresses or unsupported chains.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage ('path parameter: pt', 'path parameter: chainId'), establishing baseline 3. The description adds domain context (referencing 'PT' and 'current chain') but does not specify parameter formats (address vs symbol), validation rules, or example values that would elevate it above the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: states it 'Returns metadata for a PT that has been bridged to a spoke chain' with concrete field examples (maturity, underlying asset details, fixed-price AMM parameters). Clearly distinguishes from generic PT tools by specifying 'cross-chain' and 'spoke chain' context, and explicitly links to the swap sibling tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance: 'Use this to verify that a bridged PT is supported on the current chain and to fetch its pricing parameters before calling [Swap PT cross-chain]...' This clearly defines when to use it (verification/priming) and its relationship to the sibling swap tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sdk_controller_get_supported_aggregatorsAInspect
Get supported aggregators for a chain — Returns the list of DEX aggregators available on this chain for routing token swaps, along with the additional computing unit cost each one adds to SDK requests. Use this to decide which aggregators to include in Convert/Swap calls via the aggregators query param. You can reduce any aggregator's CU cost to 0 by providing your own API key in the corresponding request header. See Reducing Aggregator Costs for details. Returns: { aggregators: { name: st
| Name | Required | Description | Default |
|---|---|---|---|
| chainId | Yes | path parameter: chainId (number) |
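The path for this endpoint appears in the aggregators parameter docs elsewhere on this page; the response fields beyond `name` are cut off above, so the `cuCost` field used here is an assumption. A sketch of filtering aggregators for the Convert `aggregators` param:

```python
# Sketch: build the endpoint path and pick low-cost aggregators.
# The path template follows the getSupportedAggregators link in this page's
# parameter docs; the cuCost field name is an assumption.

def aggregators_path(chain_id: int) -> str:
    return f"/v1/sdk/{chain_id}/supported-aggregators"

def cheapest(aggregators: list, max_cu: int = 0) -> str:
    """Comma-separated names whose assumed cuCost fits the budget."""
    names = [a["name"] for a in aggregators if a.get("cuCost", 0) <= max_cu]
    return ",".join(names)
```

Supplying your own API key (per the description) would let you keep `max_cu=0` without excluding any aggregator.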
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full behavioral burden and succeeds well. It discloses computing unit costs ('additional computing unit cost each one adds'), authentication optimization (API key in header reduces CU to 0), and documents a partial return structure. Missing 5th point only because the return schema description is cut off and lacks pagination details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with zero waste: front-loaded action, cost disclosure, usage guidance, optimization tip, and return hint. The dash structure works well. Minor deduction for the truncated return documentation at the end.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero annotations and no output schema, the description adequately compensates by explaining the economic model (CU costs), authentication requirements, and return data purpose. It successfully contextualizes the tool within the broader SDK workflow (Convert/Swap configuration).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (chainId described as 'path parameter: chainId (number)'). The description mentions 'for a chain' which contextually maps to the parameter, but adds no format constraints, examples, or semantic details beyond the schema. Baseline 3 is appropriate when schema does heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Get supported aggregators for a chain' uses a concrete verb and resource. It clearly distinguishes from sibling tools like sdk_controller_convert by clarifying this retrieves configuration data (available DEX aggregators) rather than executing swaps.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use this to decide which aggregators to include in Convert/Swap calls via the `aggregators` query param.' This directly maps to sibling sdk_controller_convert/sdk_controller_swap_pt_cross_chain_v2 tools and specifies the interface contract.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sdk_controller_redeem_interests_and_rewardsAInspect
Redeem rewards and interests from positions — Generate a transaction payload to claim all accrued interest and incentive rewards across multiple positions in a single call. Specify which positions to claim from by passing arrays of:
- sys: SY token addresses (to claim SY interest)
- yts: YT token addresses (to claim YT interest and rewards)
- markets: LP market addresses (to claim LP rewards)
Useful for portfolio management bots or dashboards that batch-claim on behalf of users. Returns: { method: string, contractCallParamsName: string[], contractCallParams: unknown[][], ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| sys | No | Use comma separated values to search by multiple addresses | |
| yts | No | Use comma separated values to search by multiple addresses | |
| chainId | Yes | path parameter: chainId (number) | |
| markets | No | Use comma separated values to search by multiple addresses | |
| receiver | Yes | The address to receive the output of the action |
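The comma-separated address convention in the table can be sketched as a small parameter builder; the helper name and addresses are illustrative, not part of the API.

```python
# Sketch: assemble query parameters for a batch claim, joining each
# address list with commas as the parameter docs describe.

def redeem_params(receiver, sys=None, yts=None, markets=None):
    params = {"receiver": receiver}
    for key, values in (("sys", sys), ("yts", yts), ("markets", markets)):
        if values:
            params[key] = ",".join(values)
    return params
```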
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries full disclosure burden and successfully clarifies critical behavior: it generates a transaction payload (not executing on-chain), performs batch operations in a single call, and claims 'all accrued' rewards (scope). Could improve by mentioning signing requirements, gas considerations, or failure atomicity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear information hierarchy: opening summary, bullet-pointed parameter semantics, use-case context, and return type documentation. No wasted sentences; appropriate length for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates for missing output schema by documenting the return structure (method, contractCallParams, etc.). Covers prerequisites implicitly through the 'generate payload' framing, though could explicitly mention that returned payloads require signing/broadcasting to complete the claim.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), the description adds substantial semantic value by mapping each string parameter to its financial concept: sys maps to SY token interest claims, yts to YT interest+rewards, and markets to LP rewards—context entirely absent from the schema's mechanical 'comma separated addresses' descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states the tool generates a transaction payload to claim/redeem accrued interest and rewards across multiple positions, using specific verbs (redeem, claim, generate) and distinguishing clearly from sibling tools like sdk_controller_convert (swapping) or limit_orders_controller_* (order management).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context identifying target users ('portfolio management bots or dashboards that batch-claim on behalf of users'), establishing when this batching tool is appropriate. Lacks explicit alternatives or exclusion criteria (e.g., when to use single-position claiming instead), preventing a score of 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sdk_controller_swap_pt_cross_chain_v2AInspect
Swap PT using fixed price AMM for cross-chain operations — Swap a bridged PT on a spoke chain back to a regular token using the fixed-price PT AMM. This is specifically for cross-chain PT redemption flows where PT has been bridged to a non-primary chain. The fixed-price AMM prices PT based on time-to-maturity, independent of pool liquidity. Supports both:
- Exact PT input → token output
- Exact token output ← PT input (pass the required token amount; the system calculates PT needed)
Only valid on supported spoke chains. Returns: { method: string, contractCallParamsName: string[], contractC
| Name | Required | Description | Default |
|---|---|---|---|
| pt | Yes | PT token address | |
| chainId | Yes | path parameter: chainId (number) | |
| receiver | No | Recipient address for transaction output | |
| slippage | Yes | Maximum slippage tolerance (0-1, where 0.01 equals 1%) | |
| tokenOut | Yes | Output token address | |
| exactPtIn | Yes | Exact amount value PT in | |
| aggregators | No | List of aggregator names to use for the swap. If not provided, all aggregators will be used. A list of supported aggregators can be found at: [getSupportedAggregators](#tag/sdk/get/v1/sdk/{chainId}/supported-aggregators) | |
| enableAggregator | No | Enable swap aggregator to swap between tokens that cannot be natively converted from/to the underlying asset |
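A sketch assembling the query parameters from the table above: field names come from the table, the helper is hypothetical, amounts are raw integer strings, and the slippage check mirrors the documented 0-1 range.

```python
# Sketch: query parameters for the cross-chain PT swap.

def swap_pt_params(pt, token_out, exact_pt_in, slippage, receiver=None):
    if not 0 <= slippage <= 1:
        raise ValueError("slippage must be in [0, 1]; 0.01 == 1%")
    params = {
        "pt": pt,
        "tokenOut": token_out,
        "exactPtIn": str(exact_pt_in),  # raw amount, passed as a string
        "slippage": slippage,
    }
    if receiver:
        params["receiver"] = receiver
    return params
```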
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully explains the pricing model ('based on time-to-maturity, independent of pool liquidity') and domain constraints, but fails to explicitly clarify whether this generates a transaction payload or executes directly, and omits auth/prerequisites disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The em-dash construction creates slight redundancy ('Swap PT... — Swap a bridged PT'). The content is front-loaded with purpose, but the description is truncated mid-stream ('contractC'), cutting off the return value documentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex cross-chain DeFi operation with no annotations, the description covers the essential domain context (pricing mechanism, chain constraints, redemption flows). The truncation of the return documentation and lack of prerequisite/approval guidance prevent a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage, the description adds critical usage context explaining the dual-mode support for exact PT input vs exact token output, clarifying how the system calculates requirements in the latter case—a semantic layer not evident from the parameter schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description precisely identifies the operation ('Swap PT'), mechanism ('fixed price AMM'), and specific domain context ('cross-chain operations', 'spoke chain', 'bridged PT redemption flows'), clearly distinguishing it from generic swap tools like sdk_controller_convert.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly defines the specific use case ('cross-chain PT redemption flows where PT has been bridged to a non-primary chain') and constraints ('Only valid on supported spoke chains'), though it does not explicitly name sibling alternatives to avoid (e.g., when to use convert_v3 instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
statistics_controller_get_distinct_user_from_tokenBInspect
Get distinct users for a specific token — Returns a list of unique wallet addresses that have interacted with a specific token across Pendle markets. Use the optional chainId parameter to filter results to a specific chain, or omit it to get users across all chains. Common use cases include: - Token holder analysis - User adoption metrics - Market participation statistics Returns: { users: string[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token address to query. Can be any Pendle token (PT, YT, SY, LP). Address will be normalized to lowercase. | |
| chainId | No | Optional chain ID to filter results. If provided, returns only users who interacted with the token on the specified chain. If omitted, returns users across all chains where the token exists. |
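Since the parameter docs note the token address is normalized to lowercase server-side, a client can normalize up front and keep cache keys consistent. A minimal sketch; the helper name is illustrative.

```python
# Sketch: query parameters for the distinct-users lookup, normalizing the
# token address to lowercase as the parameter docs describe.

def distinct_users_params(token, chain_id=None):
    params = {"token": token.lower()}
    if chain_id is not None:
        params["chainId"] = chain_id
    return params
```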
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It specifies the return structure '{ users: string[] }' and cross-chain behavior, but fails to mention critical operational details like pagination limits, maximum result sets, or the definition of 'interacted' (trades vs. transfers vs. holds).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with clear front-loading of the core purpose. While the bullet list formatting (using dashes in plain text) is slightly irregular, all sentences earn their place by describing functionality, parameter behavior, use cases, and return format without excessive verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no output schema, the description compensates by specifying the return shape. However, for a 'get distinct users' endpoint that could potentially return large datasets, the omission of pagination behavior, result limits, or timeframe scope leaves a significant gap in contextual completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description explains chainId optionality ('filter results to a specific chain, or omit it to get users across all chains'), but this largely restates the schema description without adding syntax details, format constraints, or examples beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get distinct users for a specific token' and clarifies it returns 'unique wallet addresses that have interacted with a specific token across Pendle markets.' This specific verb+resource combination distinguishes it from sibling tools focused on limit orders, market data, or transactions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides common use cases (token holder analysis, user adoption metrics) and explains when to use the optional chainId parameter versus omitting it. However, it lacks explicit guidance on when not to use this tool or which sibling alternatives might be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
transactions_controller_get_transactionsBInspect
Get user transaction history — Returns the on-chain transaction history for a user address across all Pendle operations. Results are paginated. Use the resumeToken from the response to fetch the next page. Data may lag chain tip by a few minutes due to indexing. Returns: { total: number, results: { chainId: number, market: string, user: string, timestamp: string, action: "addLiquidityDualTokenAndPt" | "addLiquiditySinglePt" | "addLiquiditySingleToken" | "addLiquiditySingleTokenKeepYt" | "removeLiquidityDualTokenAndPt" | "removeLiquidityToPt" | "removeLiquiditySingleToken" | "mintPy" | "redee
| Name | Required | Description | Default |
|---|---|---|---|
| skip | No | Number of results to skip. The parameter is capped at 1000. | |
| user | Yes | query parameter: user (string) | |
| limit | No | Maximum number of results to return. The parameter is capped at 1000. | |
| market | No | Market address | |
| chainId | No | Chain ID |
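The skip/limit parameters in the table above imply classic offset pagination. The sketch below shows one way an agent might walk all pages for a user; `fetch_page` is a hypothetical stand-in for the actual MCP tool call, and its signature is an assumption, not part of this API.

```python
# Offset-pagination sketch for a skip/limit tool. `fetch_page` is a
# hypothetical stand-in for the real transactions call; the response
# shape {"total": ..., "results": [...]} follows the tool description.

def collect_transactions(fetch_page, user, page_size=100, max_skip=1000):
    """Walk pages with skip/limit until a short page or the skip cap."""
    results = []
    skip = 0
    while True:
        page = fetch_page(user=user, skip=skip, limit=page_size)
        results.extend(page["results"])
        # Stop on a short page, or once the documented skip cap is passed.
        if len(page["results"]) < page_size or skip + page_size > max_skip:
            break
        skip += page_size
    return results

# Toy stand-in for the real tool call, for illustration only.
def fake_fetch(user, skip, limit):
    data = [{"user": user, "n": i} for i in range(skip, min(skip + limit, 250))]
    return {"total": 250, "results": data}

txs = collect_transactions(fake_fetch, "0xabc", page_size=100)
```

Note the `max_skip` guard: because the schema caps `skip` at 1000, an offset loop can never page past the first ~1000 rows, which is one reason the sibling v5 endpoint's cursor token exists.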
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully notes the indexing lag ('Data may lag chain tip by a few minutes'), but the pagination mechanism description appears inconsistent with the actual schema parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured with a partial, cut-off JSON output schema embedded in the text ('Returns: { total: number...'). The mention of resumeToken alongside the actual skip/limit parameters creates confusion, and the return type documentation is fragmented.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 5-parameter read operation with no output schema, the description attempts to describe return values (though cut off) and covers data freshness concerns. However, the pagination documentation conflicts with the actual input parameters, leaving functional gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured fields already document all parameters adequately. The description mentions 'user address' which contextualizes the primary required parameter, but otherwise adds minimal semantic value beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'on-chain transaction history for a user address across all Pendle operations', using a specific verb and resource combination. It effectively distinguishes this user-specific history endpoint from the sibling general transactions controller.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it provides pagination instructions ('Use the resumeToken from the response'), this contradicts the input schema, which exposes skip/limit parameters instead. It also fails to differentiate when to use this tool versus the sibling transactions_controller_transactions_v5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
transactions_controller_transactions_v5 (A)
Get market transactions by address — Return transactions with: user action (long or short yield, add or remove liquidity), valuation, implied apy. Pagination: This endpoint supports cursor-based pagination using resumeToken. The response includes a resumeToken field that can be used in the next request to fetch the next page of results. This is more efficient than using skip for large datasets. Returns: { total: number, resumeToken: string, limit: number, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| skip | No | Use `resumeToken` instead. | |
| type | No | Transaction type to filter by. Valid values: `TRADES`, `LIQUIDITY`. | |
| limit | No | Maximum number of results to return. The parameter is capped at 1000. | |
| action | No | Specific transaction action to filter by. Valid values: `LONG_YIELD`, `SHORT_YIELD`, `ADD_LIQUIDITY`, `REMOVE_LIQUIDITY`. | |
| address | Yes | path parameter: address (string) | |
| chainId | Yes | path parameter: chainId (number) | |
| minValue | No | Minimum transaction value filter in USD | |
| txOrigin | No | Address of the transaction executor | |
| resumeToken | No | Resume token for pagination. Use this to continue a previous query. |
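The cursor-based pagination described above can be sketched as a simple loop that threads `resumeToken` from each response into the next request. `call_v5` is a hypothetical stand-in for the real MCP call; the response shape follows the tool's own `{ total, resumeToken, ... }` description, and the toy pages are invented for illustration.

```python
# Cursor-pagination sketch using resumeToken. `call_v5` is a hypothetical
# stand-in for the real transactions_controller_transactions_v5 call.

def collect_market_txs(call_v5, address, chain_id, limit=100):
    results, token = [], None
    while True:
        resp = call_v5(address=address, chainId=chain_id,
                       limit=limit, resumeToken=token)
        results.extend(resp["results"])
        token = resp.get("resumeToken")
        # An absent or empty token signals the last page.
        if not token:
            break
    return results

# Toy stand-in returning two pages, for illustration only.
def fake_v5(address, chainId, limit, resumeToken=None):
    if resumeToken is None:
        return {"total": 3, "resumeToken": "p2",
                "results": [{"action": "LONG_YIELD"},
                            {"action": "ADD_LIQUIDITY"}]}
    return {"total": 3, "resumeToken": None,
            "results": [{"action": "REMOVE_LIQUIDITY"}]}

txs = collect_market_txs(fake_v5, "0xabc", 1)
```

Unlike offset paging, the cursor never repeats or skips rows when new transactions land mid-walk, which is why the schema marks `skip` as "Use `resumeToken` instead."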
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully documents the pagination mechanism (cursor-based vs offset), hints at performance characteristics (efficiency for large datasets), and describes the response structure pattern ({total, resumeToken, limit, ...}). Could improve by mentioning rate limits or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear information hierarchy: purpose first, return payload details, pagination mechanics, then response schema. Slightly verbose in repeating resumeToken behavior already documented in schema, but the explicit JSON return structure documentation is justified given no output schema exists.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent compensation for missing output schema by detailing the response envelope structure and transaction field contents. With 9 parameters (2 required) and no annotations, the description adequately covers the pagination behavior critical for this data-heavy endpoint. No significant gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3), but description adds valuable semantic mapping: it translates the abstract 'action' parameter values into business terms ('long or short yield, add or remove liquidity') and reinforces the pagination strategy relationship between 'resumeToken' and 'skip' parameters. This provides context beyond raw schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get') and resource ('market transactions'), with detailed return payload description (user actions, valuation, implied APY). Deducted one point because it fails to distinguish from sibling tool 'transactions_controller_get_transactions', leaving ambiguity about which transaction endpoint to use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit pagination guidance stating cursor-based 'resumeToken' is 'more efficient than using skip for large datasets', which functions as a clear when-to-use recommendation. However, lacks guidance on when to choose this v5 endpoint versus the sibling 'get_transactions' endpoint or how to combine filters (type vs action).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
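Before publishing, it can help to sanity-check the manifest locally. This is a minimal sketch, assuming only the two fields shown above; the validation rule (at least one maintainer email matching your Glama account email) mirrors the stated verification requirement, and any stricter checks Glama performs are unknown.

```python
# Minimal local check of a /.well-known/glama.json payload against the
# structure shown above. Only $schema and maintainers are assumed fields.
import json

def valid_glama_manifest(raw, account_email):
    doc = json.loads(raw)
    maintainers = doc.get("maintainers", [])
    # Verification needs at least one maintainer email matching the account.
    return any(m.get("email") == account_email for m in maintainers)

manifest = '''{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}'''
```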
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.