
Server Details

XRPLOracle - 31 XRP Ledger tools: payments, DEX, AMM, hooks, NFTs, validators, DIDs.

Status: Healthy
Transport: Streamable HTTP
Repository: ToolOracle/xrploracle
GitHub Stars: 0

Tool Descriptions: B

Average 3.4/5 across 31 of 31 tools scored. Lowest: 2.3/5.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct XRPL feature or aspect, with clear and detailed descriptions. Even the many RLUSD tools are differentiated by specific functions (supply, integrity, holders, etc.), minimizing confusion.

Naming Consistency: 5/5

All tools follow the 'xrpl_' prefix with descriptive lowercase_underscore names. The pattern is uniform, whether noun-based or verb+noun, with no mixing of naming conventions.

Tool Count: 4/5

31 tools is on the higher end, but the server covers a very broad domain (XRPL oracle with many sub-areas). Each tool has a specific purpose and feels justified, though the count could be slightly trimmed.

Completeness: 5/5

The tool set covers a comprehensive range of XRPL read/analysis operations: accounts, AMM, DEX, escrow, NFTs, payments, gateways, compliance, RLUSD, RWA, server state, and more. There are no obvious gaps for an intelligence-oriented server.

Available Tools

31 tools
xrpl_account_intel: B

XRPL account intelligence: XRP balance, trust lines, open offers, escrows, flags

Parameters (JSON Schema)
- address (required): XRPL classic address (r...)
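With only one parameter, a client can sanity-check the argument before invoking the tool. A minimal sketch, assuming the payload shape of a generic MCP tool call; the helper names are illustrative, not part of this server:

```python
# Hypothetical client-side sketch: build and shape-check an
# xrpl_account_intel call before sending it over MCP.

# XRPL's base58 dictionary (no 0, O, I, or l).
XRPL_BASE58 = set("rpshnaf39wBUDNEGHJKLM4PQRST7VWXYZ2bcdeCg65jkm8oFqi1tuvAxyz")

def looks_like_classic_address(addr: str) -> bool:
    """Rough shape check for an XRPL classic address: starts with 'r',
    25-35 characters, all from the XRPL base58 alphabet. Not a checksum
    validation -- a real client would verify the base58check digest too."""
    return (
        addr.startswith("r")
        and 25 <= len(addr) <= 35
        and all(c in XRPL_BASE58 for c in addr)
    )

def build_account_intel_request(address: str) -> dict:
    """Assemble tool-call arguments per the published schema."""
    if not looks_like_classic_address(address):
        raise ValueError(f"not a classic address: {address}")
    return {"name": "xrpl_account_intel", "arguments": {"address": address}}
```

Rejecting malformed addresses client-side avoids burning a round trip on a call the server must fail anyway.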
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It lists returned data but does not state that the tool is read-only, indicate performance implications, mention API rate limits, or explain special requirements. The lack of explicit read-only declaration is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no fluff. Front-loaded with purpose and key data categories. Every part of the description conveys essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple tool with one input. Lists data types returned, but lacks details on output format, limitations (e.g., only classic addresses), or whether it aggregates multiple API calls. Could provide more context for an agent to set expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has one parameter 'address' with 100% description coverage. The tool description adds no extra semantics beyond the schema; it only describes output. Baseline of 3 is appropriate since schema is fully documented but no additional parameter guidance is given.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool provides intelligence on an XRPL account, specifying exact data components: XRP balance, trust lines, open offers, escrows, flags. It distinguishes itself from sibling tools like xrpl_trust_lines or xrpl_escrow_check by offering a composite overview.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description does not specify scenarios where this tool is preferred over specific tools like xrpl_trust_lines or xrpl_escrow_check. Implicitly, it may be used for a broad overview, but this is not stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xrpl_amm_pools: B

XRPL AMM liquidity pool intelligence: reserves, trading fee, LP token supply

Parameters (JSON Schema)
- asset (optional, default: RLUSD)
- asset2 (optional, default: XRP)
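Since the schema carries defaults but no descriptions, the agent's only cue is the default pair. A sketch of how a client might merge caller arguments with schema defaults; the schema literal and helper are assumptions for illustration:

```python
# Hypothetical sketch: apply the advertised RLUSD/XRP defaults for
# xrpl_amm_pools when the caller omits one or both assets.
AMM_POOLS_SCHEMA = {
    "asset": {"type": "string", "default": "RLUSD"},
    "asset2": {"type": "string", "default": "XRP"},
}

def with_defaults(args: dict, schema: dict) -> dict:
    """Fill any omitted parameter with its schema default,
    leaving caller-supplied values untouched."""
    merged = {name: spec["default"]
              for name, spec in schema.items() if "default" in spec}
    merged.update(args)
    return merged
```

This mirrors JSON Schema's `default` keyword: annotation only, so the client (not the validator) is responsible for filling it in.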
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so the description is the sole source for behavioral traits. It does not disclose whether the tool is read-only, requires any authentication, has side effects, or any other constraints. The statement is purely functional.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that directly states the tool's purpose. It is concise and free of redundancy. However, it may be too terse, sacrificing completeness for brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain return values. It mentions reserves, fee, and supply, providing adequate context for the core function. However, it omits details like scope (single pool vs all pools), pagination, or response structure, leaving gaps for a moderately complex tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, and the description does not explain the parameters (asset, asset2). Although the defaults (RLUSD, XRP) hint at currency pairs, no details on format, allowed values, or relationship are provided. The description adds minimal semantic value over the raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides liquidity pool intelligence for XRPL AMMs, specifically reserves, trading fee, and LP token supply. This distinguishes it from sibling tools like xrpl_dex_orderbook (orderbook DEX) and xrpl_token_check (token information).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool over alternatives. The description lacks context about preferred scenarios, prerequisites, or exclusions, leaving the agent to infer usage from the tool name and sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xrpl_beacon_pulse: A

Read the OracleNet beacon. Mesh health, settlement lanes (XRPL + Base), join instructions. The heartbeat of the autonomous agent economy.

Parameters (JSON Schema)
No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries full burden but only states 'Read' with no disclosure of side effects, permissions, rate limits, or return behavior. It implies read-only but lacks explicit safety or behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three concise sentences, front-loaded with the core action. Every clause adds distinct value with no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description provides a high-level overview of what is read but lacks specifics on return format or structure. It is minimally adequate but not fully complete for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are zero parameters and schema coverage is 100%, so baseline is 4. The description adds context by naming the data types (mesh health, settlement lanes, join instructions) without needing to explain parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Read' and the resource 'OracleNet beacon', listing specific data types (mesh health, settlement lanes, join instructions). It distinguishes itself from sibling tools by focusing on beacon/mesh status rather than individual account or token operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the many sibling tools. It does not mention context, prerequisites, or exclusions, leaving the agent without decision support.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xrpl_compliance_check: A

XRPL compliance check: freeze status, clawback capability, auth requirements, transfer restrictions. Native MiCA-relevant controls.

Parameters (JSON Schema)
- account (required): XRPL account to check compliance flags
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It lists the compliance aspects checked, suggesting a read-only operation, but does not explicitly state non-destructiveness, auth requirements, rate limits, or return behavior. Adequate but could be more explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences with no superfluous words. It is front-loaded with the core purpose and then lists specific checks, making it easy to scan. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with one input parameter and no output schema, the description covers the input and the nature of the check. However, it does not mention the output format or any expected response structure, which would help an agent interpret results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for the single parameter. The tool's description enriches the parameter's context by listing what compliance means, but does not add format or constraints beyond the schema. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool's purpose with specific verbs ('check') and resources ('compliance flags') and lists concrete aspects (freeze status, clawback capability, auth requirements, transfer restrictions), distinguishing it from siblings focused on general account info, tokens, or trust lines.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for compliance checks, but lacks explicit guidance on when to prefer this tool over siblings like xrpl_account_intel or xrpl_token_check. No exclusions or alternative recommendations are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xrpl_dex_orderbook: C

XRPL native DEX order book: bids/asks for any currency pair

Parameters (JSON Schema)
- limit (optional)
- base_issuer (optional)
- quote_issuer (optional)
- base_currency (optional, default: XRP)
- quote_currency (optional, default: RLUSD)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description only states that the tool returns bids and asks, but with no annotations to rely on, it should disclose more behavioral traits (e.g., read-only, authentication needs, rate limits, or output format). This is insufficient for safe and correct invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, which is concise and front-loaded, but it sacrifices necessary details. It is not overly verbose, but the brevity comes at the cost of completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, no output schema, no annotations), the description is far from complete. It does not explain the output structure, parameter interactions, or any edge cases. A more detailed description is needed for an agent to use it reliably.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 5 parameters with 0% description coverage, so the description should compensate. It only vaguely clarifies that the tool works for any currency pair, but fails to explain the meaning of 'limit', 'base_issuer', or 'quote_issuer'. The parameter names alone are insufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves the XRPL native DEX order book (bids/asks) for any currency pair. It names a specific resource and the data it returns, effectively distinguishing it from sibling tools like xrpl_amm_pools or xrpl_token_check.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. The description does not indicate when to use this tool versus alternatives, nor does it mention prerequisites or conditions. There is no guidance on when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xrpl_escrow_check: A

Check active XRPL escrows for an address. Shows locked amounts, conditions, and deadlines.

Parameters (JSON Schema)
- owner (required): XRPL address to check escrows for
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes a read-only query but does not explicitly confirm non-destructiveness, or mention any authentication, rate limits, or error conditions. It is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the purpose, and contains no filler. Every word adds value, making it highly concise and efficient for an agent to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description is functional but lacks details about the response format, possible errors, or typical usage scenarios. It is sufficient for basic understanding but not complete for complex decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a description for the only parameter (owner). The tool description does not add additional meaning beyond the schema for this parameter, meeting the baseline. The description's mention of 'locked amounts, conditions, and deadlines' refers to output, not parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks active escrows for an address and lists the key information shown (locked amounts, conditions, deadlines). This distinguishes it from siblings like xrpl_escrow_create and xrpl_escrow_monitor, which perform different actions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool should be used when checking escrows for an address, but it does not explicitly state when to use it versus siblings or provide exclusions. Without explicit guidance, the agent must infer from context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xrpl_escrow_create: A

Create an XRPL native escrow for agent-to-agent deals. No smart contract needed — XRPL has built-in escrow. Lock XRP, set conditions, auto-release on fulfillment.

Parameters (JSON Schema)
- hours (optional): Hours until escrow can be finished
- deal_type (optional, default: compliance_evidence): Type of deal
- amount_xrp (optional): Amount to escrow in XRP
- destination (optional, default: rJffixdE2JGWGf12Rh9D9kjDgd6jVxVpzD): Recipient XRPL address
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description must fully convey behavioral traits. It mentions creation, lock, conditions, and auto-release, but fails to disclose failure modes, reversibility, fees, or permissions needed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, additive benefit in second sentence. No fluff or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists and description does not explain return values or post-creation steps. Conditions are mentioned but not detailed. Adequate for basic understanding but incomplete for advanced usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with descriptions for all four parameters. The description adds minimal extra meaning beyond 'set conditions' which is vague. No parameter details are elaborated beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool creates an XRPL native escrow for agent-to-agent deals, using specific verbs and resource. It distinguishes itself from sibling tools like xrpl_escrow_check and xrpl_escrow_monitor by focusing on creation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage for locking XRP with conditions, but lacks explicit guidance on when not to use it or alternatives. Sibling tools exist but no comparison is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xrpl_escrow_monitor: C

XRPL escrow intelligence: locked XRP amounts, conditions, expiry, total value

Parameters (JSON Schema)
- address (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description must fully disclose behavior. It implies a read-only monitoring operation but does not explicitly state non-destructiveness, authentication needs, or whether data is real-time or historical. Lacks detail on scope (e.g., single account vs. network).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no wasted words. Concisely conveys the tool's purpose and key data points. Front-loaded with 'XRPL escrow intelligence' to immediately orient the agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given one parameter, no output schema, and no annotations, the description leaves significant gaps. Does not specify return format, pagination, limits, or whether it aggregates across addresses. Incomplete for an agent to use effectively without external knowledge.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Single parameter 'address' is a string with no schema description (0% coverage). The tool description does not explain what address means (e.g., account address, escrow ID). Fails to add meaning beyond the bare schema definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it deals with XRPL escrows, mentioning locked amounts, conditions, expiry, and total value. It distinguishes from siblings like xrpl_escrow_check and xrpl_escrow_create by focusing on monitoring multiple aspects instead of single operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool over siblings like xrpl_escrow_check. The description does not clarify if this is for monitoring a specific account's escrows or a general overview, nor does it contrast with other escrow tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xrpl_gateway_balances: A

XRPL gateway/issuer analysis: total obligations, hot wallet balances, compliance capabilities (freeze, clawback, require auth).

Parameters (JSON Schema)
- account (required): Gateway/issuer XRPL address
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the burden of disclosing behavior. It states the kind of data returned (obligations, balances, capabilities) but does not explain if it is read-only, requires authentication, or has any side effects. The description is moderately informative but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence of 15 words efficiently conveys the tool's purpose and primary outputs. No unnecessary information is included.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description lists three output categories but omits details like return format, error handling, or usage limits. It is adequate but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage (one parameter described as 'Gateway/issuer XRPL address'). The description does not add additional meaning beyond the schema, achieving the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool analyzes a gateway/issuer, specifying three key outputs: total obligations, hot wallet balances, and compliance capabilities. It distinguishes itself from sibling tools like xrpl_compliance_check by covering broader gateway metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus its siblings, nor any prerequisites or exclusions. It does not mention alternatives or contextual cues for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xrpl_iso20022: C

ISO 20022 mappable XML structure for RLUSD evidence. Banking-grade data format for regulated institutions. MiCA/DORA-ready.

Parameters (JSON Schema)
No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must fully disclose behavior. It mentions 'banking-grade data format' but does not state whether the tool is read-only, requires authentication, or what side effects occur.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no waste, but the first sentence is jargon-heavy. Could be more direct, but overall efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and zero parameters, the description should clearly explain what the tool returns. It mentions 'XML structure' but lacks specifics on contents or use case, leaving gaps despite low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, so the description has no parameter burden. The baseline for zero parameters is 4, and the description adds some context about the output format, which is acceptable.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses vague phrasing like 'ISO 20022 mappable XML structure' without specifying an action. It does not clarify whether the tool generates, validates, or retrieves this structure. Among siblings like xrpl_rlusd and xrpl_compliance_check, its unique role is unclear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Many siblings relate to RLUSD, but no distinctions or use-case context are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xrpl_ledger_data (Grade: C)

XRPL on-chain object scanner: explore DIDs, AMMs, MPTs, escrows, offers, checks, NFTs as native ledger objects.

Parameters (JSON Schema)

Name | Required | Description | Default
type | No | Object type filter: account, amm, check, did, escrow, nft_offer, offer, payment_channel, mpt_issuance | -
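The `type` values above mirror rippled's native `ledger_data` filter. As a sketch of the underlying JSON-RPC request this tool's schema appears to mirror (that it wraps this method is an assumption), a raw call could be constructed like this:

```python
import json

# rippled "ledger_data" request with a type filter. Wrapping this
# exact method is an assumption based on the tool's schema.
def ledger_data_request(object_type, limit=10):
    # Valid filters per the schema: account, amm, check, did,
    # escrow, nft_offer, offer, payment_channel, mpt_issuance.
    return {
        "method": "ledger_data",
        "params": [{
            "ledger_index": "validated",
            "type": object_type,
            "limit": limit,
        }],
    }

payload = ledger_data_request("escrow")
print(json.dumps(payload, indent=2))
```

Note that since the schema marks `type` optional, a description should ideally say what an unfiltered call returns; the sketch above only covers the filtered case.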
Behavior: 2/5

No annotations are present, so the description must carry the full burden. It implies a read-only operation ('scanner') but does not explicitly state safety, permissions, or side effects. The lack of explicit behavioral disclosure is a gap.

Conciseness: 5/5

The description is a single, well-structured sentence that front-loads the core purpose. Every word is relevant and efficient.

Completeness: 2/5

Despite having a single parameter, the tool lacks an output schema and annotations. The description does not mention the return format or provide additional context for the various object types, leaving incomplete guidance.

Parameters: 3/5

Schema coverage is 100%, with the 'type' parameter listing valid values. The description paraphrases some of these values but adds no new meaning beyond the schema, resulting in a baseline score.

Purpose: 4/5

The description clearly states the tool is an 'on-chain object scanner' and enumerates specific object types (DIDs, AMMs, MPTs, escrows, offers, checks, NFTs). While it distinguishes itself from some siblings by covering multiple objects, it does not explicitly differentiate from tools like xrpl_amm_pools or xrpl_nft_intel.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus siblings. The description only states what it does, not the appropriate context or alternatives.

xrpl_nft_intel (Grade: A)

XRPL NFT intelligence (XLS-20): collection stats, holdings, mutable URIs, transfer fees. Native ledger NFTs — no smart contract risk.

Parameters (JSON Schema)

Name | Required | Description | Default
account | No | XRPL account to check NFTs for (optional) | -
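When the optional `account` is supplied, a natural implementation would be rippled's standard `account_nfts` method (XLS-20); whether xrpl_nft_intel actually wraps it is an assumption:

```python
import json

# Sketch of rippled's "account_nfts" JSON-RPC request, which this
# tool could wrap when an account is given (an assumption).
def account_nfts_request(account):
    return {
        "method": "account_nfts",
        "params": [{"account": account, "ledger_index": "validated"}],
    }

# Hypothetical classic address for illustration only.
payload = account_nfts_request("rExampleHolder")
print(json.dumps(payload))
```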
Behavior: 3/5

With no annotations, the description carries the full burden. It adds that the tool covers 'native ledger NFTs — no smart contract risk,' which hints at a safety profile. However, it does not disclose whether the tool is read-only, potential rate limits, or required authentication. The word 'intelligence' suggests read access, but it is not explicit.

Conciseness: 5/5

The description is a single, information-dense sentence that clearly conveys the tool's purpose and key features without any superfluous words. It front-loads the standard and data types.

Completeness: 4/5

Given that the tool has one optional parameter and no output schema, the description is fairly complete. It explains the scope (NFT intel, specific data points) and a key differentiator (no smart contract risk). However, it does not describe the output format or any limitations, so it falls slightly short of a comprehensive picture.

Parameters: 3/5

The input schema has 100% description coverage for the single 'account' parameter. The description adds no meaning beyond what the schema provides, so it meets the baseline for well-documented schemas.

Purpose: 5/5

The description clearly states it is for XRPL NFT intelligence under the XLS-20 standard, listing specific capabilities like collection stats, holdings, mutable URIs, and transfer fees. It explicitly differentiates from smart contract NFTs by noting 'native ledger NFTs — no smart contract risk.' This distinguishes it from sibling tools like xrpl_nft_offers.

Usage Guidelines: 3/5

The description implies usage for NFT-related queries by listing the data types, but it does not explicitly state when to use this tool versus alternatives like xrpl_nft_offers. No direct comparison or when-not-to-use guidance is provided, leaving the agent to infer from the listed features.

xrpl_nft_offers (Grade: A)

XRPL NFT buy/sell offers for a specific NFToken. Shows all active marketplace offers.

Parameters (JSON Schema)

Name | Required | Description | Default
nft_id | Yes | NFTokenID to check offers for | -
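rippled splits NFT offers across two methods, `nft_sell_offers` and `nft_buy_offers`, both keyed on `nft_id`. A tool that "shows all active marketplace offers" presumably queries both sides; this is a sketch under that assumption:

```python
# Build both rippled offer queries for one NFTokenID. That the tool
# combines both methods is an assumption from its description.
def nft_offer_requests(nft_id):
    return [
        {"method": method, "params": [{"nft_id": nft_id}]}
        for method in ("nft_sell_offers", "nft_buy_offers")
    ]

# Hypothetical 64-hex-character NFTokenID for illustration only.
requests_pair = nft_offer_requests("00080000" + "A" * 56)
```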
Behavior: 2/5

With no annotations provided, the description should carry the burden of behavioral disclosure. It mentions 'active marketplace offers' but lacks details on data freshness, pagination, limits, or error handling, offering minimal behavioral insight.

Conciseness: 5/5

The description is extremely concise: two sentences, no wasted words, front-loaded with the core purpose.

Completeness: 3/5

Given the simple input (one required parameter) and no output schema, the description is adequate but incomplete. It does not hint at the structure of the returned offers (e.g., price, owner), leaving some ambiguity for the agent.

Parameters: 3/5

Schema coverage is 100%, with a clear description for nft_id. The tool description adds no extra meaning beyond the schema, warranting the baseline score of 3.

Purpose: 5/5

The description clearly states the tool shows buy/sell offers for a specific NFToken, using a specific verb ('shows') and resource ('offers'). It is distinct from its siblings, as no other tool focuses on NFT offers.

Usage Guidelines: 3/5

The description implies usage when checking offers for an NFT, but does not provide explicit when-not-to-use scenarios or differentiate from related tools like xrpl_nft_intel, which might also provide NFT details.

xrpl_overview (Grade: A)

XRPL network overview: XRP price, ledger stats, reserve requirements, base fee

Parameters (JSON Schema)

No parameters
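Most of the listed data points map onto standard rippled calls. As a sketch (the server's actual implementation, and where it sources the XRP price, are assumptions), an overview could be assembled from two requests:

```python
# "server_info" carries ledger stats and reserve requirements;
# "fee" carries the current base fee. The XRP price would need an
# external feed, which this sketch deliberately omits.
overview_batch = [
    {"method": "server_info", "params": [{}]},
    {"method": "fee", "params": [{}]},
]

for request in overview_batch:
    print(request["method"])
```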

Behavior: 3/5

With no annotations, the description carries the full burden. It lists output categories but omits behavioral traits like data freshness, caching, or prerequisites. For a parameterless read tool, this is adequate but not thorough.

Conciseness: 5/5

The description is a single, front-loaded sentence. Every word adds value, with no waste.

Completeness: 4/5

For a simple overview tool with no parameters, schema, or annotations, the description is reasonably complete. It lists what the tool returns, though it lacks context on prerequisites or performance characteristics.

Parameters: 4/5

There are zero parameters, so per the guidelines the baseline is 4. No additional parameter information is needed.

Purpose: 5/5

The description clearly states the tool provides an overview of the XRPL network and explicitly lists four key components (XRP price, ledger stats, reserve requirements, base fee). It distinguishes itself from sibling tools that focus on specific aspects.

Usage Guidelines: 3/5

The description implies use when a network overview is needed, but provides no explicit guidance on when to use this tool versus siblings, nor when not to use it.

xrpl_path_find (Grade: B)

XRPL cross-currency path finding: discover optimal payment routes using DEX + AMM liquidity. Atomic cross-currency settlement.

Parameters (JSON Schema)

Name | Required | Description | Default
amount | No | Amount to deliver | -
issuer | No | Currency issuer (if not XRP) | -
currency | No | Destination currency (default: USD) | -
source_account | Yes | Sender XRPL address | -
destination_account | Yes | Receiver XRPL address | -
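These parameters map cleanly onto rippled's one-shot `ripple_path_find` method. As a hedged sketch (the exact wrapping is an assumption; the addresses below are hypothetical), the request could be built like this:

```python
# Sketch of a rippled "ripple_path_find" request. XRP amounts are
# strings denominated in drops; IOU amounts are objects with
# currency, issuer, and value.
def path_find_request(source_account, destination_account,
                      currency="USD", value="10", issuer=None):
    if currency == "XRP":
        amount = value  # drops, as a string
    else:
        # issuer is required for IOUs, matching the schema note
        amount = {"currency": currency, "issuer": issuer, "value": value}
    return {
        "method": "ripple_path_find",
        "params": [{
            "source_account": source_account,
            "destination_account": destination_account,
            "destination_amount": amount,
        }],
    }

# Hypothetical addresses for illustration only.
req = path_find_request("rSenderExample", "rReceiverExample",
                        issuer="rIssuerExample")
```

The response would carry `alternatives`, each listing a `paths_computed` set; that structure is exactly what the tool's description never mentions, which is the Completeness gap noted below.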
Behavior: 2/5

No annotations are provided, so the description must disclose behavioral traits. It mentions 'atomic cross-currency settlement,' which could imply a transactional operation, but it is unclear whether this tool performs settlement or just path finding (likely read-only). There is no mention of side effects, permissions, or whether it is a query or an execution.

Conciseness: 5/5

Two concise sentences front-load the core purpose and key detail (using DEX + AMM liquidity). Every sentence adds value without redundancy or filler.

Completeness: 2/5

The tool has no output schema, so the description should explain return values (paths, offers, etc.). It does not mention what the tool returns or how to interpret results, leaving a significant gap for an agent to use it correctly.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all parameters. The description adds no semantic context beyond what the schema provides, meeting the baseline of 3.

Purpose: 5/5

The description clearly states the tool's purpose: cross-currency path finding using DEX + AMM liquidity. It distinguishes itself from sibling tools like xrpl_dex_orderbook (order book data) and xrpl_amm_pools (AMM pool info) by focusing on optimal payment route discovery.

Usage Guidelines: 3/5

The description implies use for cross-currency settlement via DEX and AMM, but does not explicitly state when to use this tool versus alternatives. It lacks guidance on when not to use it and mentions no specific prerequisites.

xrpl_payment_intel (Grade: C)

XRPL cross-border payment intelligence: routing, ODL, settlement speed, cost

Parameters (JSON Schema)

Name | Required | Description | Default
amount_xrp | No | - | -
to_currency | No | - | EUR
from_currency | No | - | USD
Behavior: 2/5

With no annotations, the description bears the full burden of behavioral disclosure. It only lists topics (routing, ODL, etc.) but does not explain what actions the tool performs (e.g., fetches data, computes statistics), its side effects, or performance implications.

Conciseness: 4/5

The description is a single, concise sentence. While not verbose, it could be restructured with key terms for better scannability, but it remains efficient.

Completeness: 2/5

Given no annotations, no output schema, and minimal parameter info, the description is incomplete. It omits return value details, parameter effects, and prerequisites, leaving significant gaps for an AI agent.

Parameters: 1/5

Schema description coverage is 0%, yet the description does not elaborate on the three parameters (amount_xrp, to_currency, from_currency). It fails to add meaning beyond the schema's defaults and types.

Purpose: 5/5

The description clearly states the tool provides XRPL cross-border payment intelligence covering routing, ODL, settlement speed, and cost. It differentiates itself from siblings like xrpl_path_find and xrpl_settlement_status by focusing on broader payment intelligence.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives. The description lacks explicit when-to-use or when-not-to-use statements, leaving the agent to infer context from the name alone.

xrpl_quantum_join (Grade: C)

Join OracleNet via your XRPL wallet. One call: instant Trust Passport, 1,229 compliance tools, cross-chain settlement (XRPL + Base). The gateway to the autonomous agent economy.

Parameters (JSON Schema)

Name | Required | Description | Default
agent_name | No | Your agent name | XRPL Agent
xrpl_address | Yes | Your XRPL address (starts with r) | XRPL Agent
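For orientation, this is the shape of the MCP `tools/call` JSON-RPC request an agent would send to invoke this tool (the address value is hypothetical; `agent_name` shows the schema default):

```python
import json

# MCP "tools/call" request for xrpl_quantum_join. Only xrpl_address
# is required; classic XRPL addresses start with "r".
call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "xrpl_quantum_join",
        "arguments": {
            "xrpl_address": "rExampleAgentAddress",  # hypothetical
            "agent_name": "XRPL Agent",              # schema default
        },
    },
}
print(json.dumps(call))
```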
Behavior: 2/5

With no annotations, the description must disclose behavioral traits. It states the tool performs multiple actions (join, get passport, compliance, settlement) but does not detail state changes, authorization needs, fees, or reversibility. The marketing language obscures practical behavior.

Conciseness: 4/5

The description is concise (two sentences) and front-loaded with the action. However, it includes promotional phrases ('1,229 compliance tools', 'gateway to the autonomous agent economy') that add little value for tool invocation. Slightly over-marketed but still efficient.

Completeness: 3/5

Given the simple inputs (two parameters, no output schema), the description provides a high-level overview but omits crucial details like the return format, error handling, and what 'Trust Passport' or 'compliance tools' mean. Adequate, but it leaves questions for an agent.

Parameters: 3/5

Schema coverage is 100%, with both parameters described. The description adds no extra meaning beyond the schema entries. The baseline score of 3 is appropriate since the schema already documents the parameters adequately.

Purpose: 4/5

The description clearly states the action ('Join OracleNet') and resource ('OracleNet via your XRPL wallet'). It explains the outcome: instant Trust Passport, compliance tools, cross-chain settlement. However, it does not explicitly distinguish itself from sibling tools, though its unique purpose makes confusion unlikely.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives. The description frames it as a gateway but provides no conditions, prerequisites, or exclusions. An agent lacks context on when to invoke this over other tools like xrpl_compliance_check or xrpl_settlement_status.

xrpl_rlusd (Grade: A)

RLUSD stablecoin compliance and risk intelligence: peg health, supply, NYDFS compliance

Parameters (JSON Schema)

No parameters

Behavior: 3/5

No annotations are provided, so the description carries the full burden. It implies a read-only query ('intelligence') but does not explicitly state whether it has side effects, rate limits, or dependencies. Given zero parameters, the lack of explicit safety claims is a minor gap.

Conciseness: 5/5

The description is a single, front-loaded sentence with no wasted words. It efficiently conveys the tool's purpose.

Completeness: 4/5

For a parameterless tool with no output schema or annotations, the description provides key topics (peg health, supply, NYDFS compliance) but does not specify the format or scope of the return value. It is adequate for an overview tool but could be more precise.

Parameters: 4/5

The input schema has no parameters, so schema coverage is effectively 100%. The baseline is 4, and the description adds no parameter details, which is acceptable. No improvement needed.

Purpose: 4/5

The description clearly states the tool is about RLUSD stablecoin compliance and risk intelligence, specifying peg health, supply, and NYDFS compliance. It uses specific nouns but lacks an explicit verb. It distinguishes itself from more specific sibling tools like xrpl_rlusd_supply and xrpl_rlusd_integrity, though some overlap remains.

Usage Guidelines: 2/5

No guidance on when to use this tool versus siblings such as xrpl_rlusd_holder or xrpl_rlusd_anchor. The description implies it is for overview information, but does not specify exclusions or alternatives.

xrpl_rlusd_anchor (Grade: A)

RLUSD on-chain attestation anchoring: 86+ XRPL transactions anchoring SHA-256 hashes of attestation data. Unfalsifiable audit trail.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

With no annotations provided, the description must disclose behavioral traits. It implies a read-only retrieval of attestation records, but does not explicitly state whether the tool performs writes or requires specific permissions. The phrase 'unfalsifiable audit trail' suggests immutability, but the exact behavior (query vs. submission) is ambiguous. This is adequate but not thorough.

Conciseness: 5/5

The description is extremely concise, with two short sentences that convey the core function, scope, and a supporting detail (86+ transactions). Every word serves a purpose, and the key action ('anchoring') is front-loaded.

Completeness: 4/5

Given that the tool has no parameters and no output schema, the description covers the essential aspects: what it does (anchoring), what data it uses (SHA-256 hashes), and its scale. It could be slightly more explicit about the return format (e.g., a list of transaction hashes), but it is nearly complete for a simple query tool.

Parameters: 4/5

The input schema has zero parameters, so no additional semantics are needed. The description does not reference parameters, which is acceptable since none exist. A baseline of 4 is appropriate, as the description adds no confusion about parameters.

Purpose: 5/5

The description clearly states the tool's purpose: anchoring SHA-256 hashes of attestation data via XRPL transactions, forming an unfalsifiable audit trail. It distinguishes itself from sibling tools like xrpl_rlusd by focusing specifically on anchoring, and provides a concrete detail (86+ transactions) that adds specificity.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, typical use cases, or when to choose another tool from the same server. Without this, the agent might misuse the tool or select an inappropriate sibling.

xrpl_rlusd_cci (Grade: A)

RLUSD Compliance Confidence Index (CCI): proprietary composite score from peg stability (40%), reserve ratio (35%), market depth (25%). Grade A-F. MiCA-relevant.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

No annotations are provided, so the description carries the full burden. It discloses the composite formula but does not state whether the tool is read-only, nor any authentication needs, rate limits, or update frequency. Behavioral transparency is minimal.

Conciseness: 5/5

The description is a single, clear sentence that front-loads the key information. No wasted words; every part contributes to understanding the tool's purpose and output.

Completeness: 3/5

Given zero parameters and no output schema, the description provides the essential purpose and composition. However, it lacks detail on the output format, typical values, or how to interpret the grade, leaving some contextual gaps for the agent.

Parameters: 4/5

The tool has zero parameters and schema coverage is 100%, so the baseline is 4. The description adds value by explaining the composition of the index and the grading scale, providing context beyond the empty schema.

Purpose: 5/5

The description clearly states the tool provides a Compliance Confidence Index for RLUSD, specifying the components (peg stability 40%, reserve ratio 35%, market depth 25%) and the output (Grade A-F). It effectively distinguishes itself from sibling tools by focusing on a proprietary composite score.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives, and no exclusions. The mention of 'MiCA-relevant' hints at a regulatory context but does not provide explicit usage conditions or alternatives.

xrpl_rlusd_enterprise (Grade A)

RLUSD enterprise-grade data: freshness scoring, confidence levels, multi-source attribution, compliance flags, schema versioning. Built for regulated institutions.

Parameters (JSON Schema)
No parameters

Behavior: 3/5

No annotations are provided, so the description must disclose behavioral traits. It describes data characteristics (multi-source attribution, compliance flags) but does not explicitly state safety (e.g., read-only) or side effects. Since the tool takes no input, it is likely safe, but the description lacks explicit behavioral transparency.

Conciseness: 5/5

The description is a single, succinct sentence that front-loads the key purpose ('RLUSD enterprise-grade data') and lists essential features. Every word adds meaning; no extraneous text.

Completeness: 3/5

Given no parameters and no output schema, the description adequately describes what the tool provides (data attributes). However, it does not explain the return format (e.g., single object, array, or structure), leaving some contextual gaps for an AI agent.

Parameters: 4/5

The input schema has zero parameters, so schema coverage is 100%. Per guidelines, with 0 parameters, the baseline is 4. The description does not need to add parameter info, and it correctly omits any.

Purpose: 4/5

The description clearly states it provides 'RLUSD enterprise-grade data' with specific attributes like freshness scoring and confidence levels. It differentiates from siblings like xrpl_rlusd by targeting regulated institutions, but lacks a verb (e.g., retrieve) to indicate the action.

Usage Guidelines: 3/5

The description implies use for high-quality, attested data via 'enterprise-grade' and 'regulated institutions', but does not explicitly state when to use this tool versus alternatives like xrpl_rlusd or xrpl_rlusd_anchor. No exclusions or alternative comparisons are given.

xrpl_rlusd_holders (Grade A)

RLUSD holder distribution analysis: whale concentration, top 10/50 holders, concentration risk score. Scanned directly from XRPL trust lines.

Parameters (JSON Schema)
No parameters
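The metrics named in the description can be derived from trust-line balances alone. A minimal sketch, assuming the holder balances have already been fetched; the tool's actual risk-score formula is undocumented, so a Herfindahl-Hirschman-style index stands in for it here:

```python
def concentration_metrics(balances: list[float]) -> dict:
    """Holder-concentration metrics from a non-empty list of holder balances."""
    total = sum(balances)
    shares = [b / total for b in sorted(balances, reverse=True)]
    return {
        "top10_share": sum(shares[:10]),
        "top50_share": sum(shares[:50]),
        # HHI-style concentration: 1.0 means a single holder owns everything.
        "hhi": sum(s * s for s in shares),
    }

metrics = concentration_metrics([500.0, 300.0, 100.0, 50.0, 50.0])
print(metrics)
```

With only five holders, the top-10 share is trivially 1.0; the HHI is what separates one dominant whale from many mid-sized holders.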

Behavior: 2/5

No annotations are provided, so the description carries the full burden. It mentions 'scanned directly from XRPL trust lines' implying a read-only operation, but does not explicitly state whether it is safe, idempotent, or has any rate limits. This lack of behavioral context is a significant gap.

Conciseness: 5/5

The description is a single sentence that conveys all essential information: the tool's purpose, the specific metrics computed, and the data source. There is no wasted text.

Completeness: 3/5

For a zero-parameter tool with no output schema, the description is adequate but not exhaustive. It specifies the analysis outputs (whale concentration, top holders, risk score) and source, but lacks any mention of output format, update frequency, or behavioral constraints.

Parameters: 4/5

The tool has zero parameters, and the schema coverage is 100% (trivially). The description does not add parameter-level detail, but the baseline for no parameters is 4.

Purpose: 5/5

The description clearly states it analyzes RLUSD holder distribution, specifically whale concentration, top 10/50 holders, and concentration risk score. It also notes the data source (XRPL trust lines), which is specific and distinguishes it from sibling tools like xrpl_rlusd_supply or xrpl_rlusd_anchor.

Usage Guidelines: 3/5

The description implies its use case for holder distribution analysis, but does not explicitly state when to use this tool versus alternatives (e.g., xrpl_rlusd for overall supply). No guidance on exclusions or prerequisites.

xrpl_rlusd_integrity (Grade A)

RLUSD multi-source integrity check: cross-references DeFiLlama price, XRPL on-chain supply, XRPL DEX liquidity. Auto-flags peg deviations and supply mismatches.

Parameters (JSON Schema)
No parameters
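The auto-flagging behavior described above amounts to comparing independent sources against tolerances. A sketch under stated assumptions: the threshold values and flag names are invented for illustration, and the DEX-liquidity check is omitted:

```python
def integrity_flags(price_usd: float, onchain_supply: float, reported_supply: float,
                    peg_tolerance: float = 0.005, supply_tolerance: float = 0.001) -> list[str]:
    """Cross-source sanity checks for a USD stablecoin; tolerances are assumptions."""
    flags = []
    if abs(price_usd - 1.0) > peg_tolerance:  # e.g. DeFiLlama price vs. the $1 peg
        flags.append("PEG_DEVIATION")
    if abs(onchain_supply - reported_supply) > supply_tolerance * reported_supply:
        flags.append("SUPPLY_MISMATCH")
    return flags

print(integrity_flags(0.992, 10_000_000, 10_000_000))  # flags the 0.8% peg gap
```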

Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses the sources used and that it auto-flags peg deviations and supply mismatches, but it does not describe what the output looks like, whether it is read-only, or any side effects. This is adequate but not comprehensive.

Conciseness: 5/5

The description is a single sentence that efficiently conveys the purpose, sources, and behavior with no wasted words. It is front-loaded with the main action.

Completeness: 4/5

Given zero parameters, no output schema, and the fact that the tool is fairly simple, the description covers the essential aspects. It could benefit from mentioning the output format, but the completeness is high for the complexity level.

Parameters: 4/5

The tool has zero parameters, and the schema description coverage is 100% (since no parameters). According to the rubric, 0 parameters gives a baseline of 4. The description does not need to add parameter details.

Purpose: 5/5

The description clearly states it performs a multi-source integrity check for RLUSD, cross-referencing price, supply, and liquidity from specific sources, and auto-flags deviations. This differentiates it from sibling tools like xrpl_rlusd (likely basic info) and xrpl_rlusd_supply (supply-specific).

Usage Guidelines: 4/5

The description implies usage for integrity checks but does not explicitly state when to use this tool versus alternatives. The context is clear, but there is no explicit exclusion or mention of alternatives.

xrpl_rlusd_service (Grade A)

RLUSD CPA-attested transparency reports: reserve ratios, outstanding units, reserve values from Standard Custody (NYDFS-regulated, AICPA standard). Parsed and structured.

Parameters (JSON Schema)
No parameters

Behavior: 3/5

With no annotations, the description carries the full burden. It mentions the source and certifications (NYDFS, AICPA) and states data is 'parsed and structured'. However, it does not disclose whether data is live or cached, authentication needs, or error handling. Adequate but not comprehensive.

Conciseness: 4/5

The description is a single sentence that is clear and front-loaded with the main purpose. It includes specific details but is somewhat dense; it could be slightly more concise.

Completeness: 4/5

Given no output schema, the description reasonably covers return fields (reserve ratios, outstanding units, reserve values). It lacks details on format or structure but is adequate for a parameterless tool.

Parameters: 4/5

No parameters exist, so the baseline is 4. The description adds no parameter information (unnecessary) but provides context about the data returned, which is sufficient.

Purpose: 5/5

The description clearly states the tool provides RLUSD CPA-attested transparency reports, specifying the data points (reserve ratios, outstanding units, reserve values) and source (Standard Custody). This distinguishes it from sibling tools focusing on other RLUSD aspects.

Usage Guidelines: 3/5

The description lacks explicit guidance on when to use this tool versus alternatives like 'xrpl_rlusd_supply' or 'xrpl_rlusd_holders'. It implies use for transparency reports but does not provide context or exclusions.

xrpl_rlusd_supply (Grade A)

RLUSD live supply intelligence: on-chain circulating supply from XRPL gateway_balances, mint/burn tracking, supply changes 24h/7d. The deepest RLUSD supply data available.

Parameters (JSON Schema)
No parameters
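The '24h/7d supply changes' feature is, at bottom, a windowed delta over supply snapshots. A minimal sketch, assuming time-stamped samples are available; how the tool actually samples gateway_balances is not documented:

```python
def supply_change_pct(samples: list[tuple[float, float]], window_secs: float, now: float) -> float:
    """Percent supply change over a trailing window.
    `samples` is a time-ordered list of (unix_timestamp, circulating_supply)."""
    in_window = [supply for ts, supply in samples if ts >= now - window_secs]
    if len(in_window) < 2:
        return 0.0  # not enough data points inside the window
    return (in_window[-1] - in_window[0]) / in_window[0] * 100.0

snapshots = [(0, 100.0), (50_000, 104.0), (90_000, 110.0)]
day = 86_400
print(round(supply_change_pct(snapshots, day, now=100_000), 2))  # change since ~24h ago
```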

Behavior: 2/5

No annotations are provided, so the description must fully disclose behavioral traits. It mentions 'live supply intelligence' but does not explicitly state it is a read-only operation, nor does it address any potential side effects, rate limits, or authentication requirements.

Conciseness: 5/5

The description is two sentences, front-loaded with the key purpose and the data provided. Every word contributes value without redundancy.

Completeness: 3/5

Given no output schema, the description should explain return values comprehensively. It covers circulating supply, mint/burn tracking, and changes over periods, but lacks detail on the exact structure or fields returned (e.g., whether it is a single number or a time series). Adequate but not fully detailed.

Parameters: 4/5

The input schema has 0 parameters, so the description cannot add parameter meaning. The baseline for 0 parameters is 4. The description adds context about what data is returned, which is appropriate.

Purpose: 5/5

The description clearly states it provides RLUSD live supply intelligence including on-chain circulating supply, mint/burn tracking, and supply changes over 24h/7d. This distinguishes it from sibling tools like xrpl_rlusd_holders or xrpl_rlusd_anchor.

Usage Guidelines: 2/5

No guidance on when to use this tool versus other RLUSD-related siblings. It does not provide any when-to-use or when-not-to-use context, leaving the agent to infer based solely on the name.

xrpl_rlusd_webhook (Grade A)

RLUSD enterprise webhook system: HMAC-SHA256 signed alerts for depeg, supply mismatch, reserve warnings, new attestations. Exponential backoff + dead letter queue.

Parameters (JSON Schema)
No parameters

Behavior: 3/5

No annotations are provided, so the description must carry the full burden. It discloses important behavioral traits: signed alerts, exponential backoff, dead letter queue. However, it does not explain how the webhook is configured or triggered, which limits transparency.

Conciseness: 5/5

The description is two sentences, concise and front-loaded with the tool's purpose. Every sentence adds value without redundancy.

Completeness: 3/5

The description covers the tool's core function and key behaviors, but given no annotations, output schema, or parameters, it could be more complete. For example, it does not mention how to set up the webhook or what events trigger alerts.

Parameters: 4/5

The input schema has zero parameters, so the baseline is 4. The description adds value by explaining the tool's purpose and behavior, but since there are no parameters, no additional parameter semantics are needed.

Purpose: 5/5

The description clearly states it is a webhook system for RLUSD, listing specific alert types (depeg, supply mismatch, reserve warnings, new attestations) and technical details (HMAC-SHA256, exponential backoff, dead letter queue). This distinguishes it from sibling tools like xrpl_rlusd (general) and xrpl_rlusd_supply (supply-specific).

Usage Guidelines: 3/5

The description implies usage (for receiving alerts about RLUSD events) but does not explicitly state when to use this tool versus alternatives (e.g., other monitoring tools). No when-not-to-use guidance or alternative tool names are provided.

xrpl_rwa_scoring (Grade B)

XRPL RWA ecosystem scoring: infrastructure (95), DeFi primitives (85), compliance (90), RWA readiness (88). Overall Grade A. The definitive XRPL assessment.

Parameters (JSON Schema)
No parameters

Behavior: 2/5

With no annotations, the description fails to disclose behavioral traits such as read-only nature, data freshness, or side effects. The claim of being 'definitive' is not backed by behavioral context, leaving uncertainty about tool behavior.

Conciseness: 4/5

The description is concise with two sentences, front-loading key scores. However, the second sentence ('The definitive XRPL assessment.') is somewhat redundant and could be trimmed without losing essential information.

Completeness: 3/5

The description explains the tool's purpose and provides specific scores, but it does not specify the output format, scoring methodology, or whether the result is dynamic. Given no output schema, more context on return values would improve completeness.

Parameters: 4/5

The input schema has zero parameters, and schema coverage is 100%. Per guidelines, the baseline is 4 for no parameters. The description does not need to add parameter info, but it adds meaning to the output by listing scores.

Purpose: 4/5

The description clearly identifies the tool as providing scoring for the XRPL RWA ecosystem, listing specific scores and an overall grade. Its focus on RWA readiness sets it apart from sibling tools like xrpl_overview and xrpl_compliance_check, but the differentiation is implicit rather than explicit.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives. The description does not mention contexts, prerequisites, or exclusions, leaving the agent to infer usage from the tool name alone.

xrpl_server_state (Grade A)

XRPL network deep state: validators, amendments, fees, load, uptime. Shows all native features (DEX, AMM, NFT, DID, Credentials, MPT, Escrow, Compliance).

Parameters (JSON Schema)
No parameters

Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses the kind of data returned (validators, fees, etc.) but does not explicitly state read-only behavior, safety, or any side effects. It adequately describes the tool's output content.

Conciseness: 5/5

The description is two sentences: the first summarizes the core purpose, the second adds key details. Every word adds value, no redundancy, and it is front-loaded with the most important information.

Completeness: 4/5

Given no parameters, no output schema, and no annotations, the description is reasonably complete. It lists what the tool retrieves. It could optionally clarify the output format (e.g., JSON), but the current description is sufficient for a parameterless tool.

Parameters: 4/5

There are zero parameters, and schema coverage is 100% (trivially). The description does not need to add parameter information. Following the guideline, the baseline is 4 for zero parameters, and no additional detail is required.

Purpose: 5/5

The description clearly states the tool returns XRPL network deep state, covering validators, amendments, fees, load, uptime, and all native features (DEX, AMM, NFT, DID, Credentials, MPT, Escrow, Compliance). It distinguishes itself from sibling tools that focus on specific subsystems.

Usage Guidelines: 4/5

The description provides clear context on what the tool covers (broad network state). While it doesn't explicitly exclude alternatives, the extensive list of features implies it is the 'overview' tool, differentiating it from siblings targeting specific features. No explicit when-to-use or when-not-to-use guidance.

xrpl_settlement_status (Grade C)

XRPL settlement lane: fees, speed, RLUSD supply, liquidity. Why XRPL is the fastest settlement for AI agent commerce.

Parameters (JSON Schema)
No parameters

Behavior: 1/5

No annotations are provided, and the description fails to disclose any behavioral traits such as data freshness, side effects, permissions, or output size. The promotional sentence adds no transparency and may mislead about the tool's nature.

Conciseness: 3/5

The description is two sentences: the first packs in key terms, but the second is promotional and unnecessary. The description could be more concise and factual without the marketing fluff.

Completeness: 2/5

With no output schema and no parameters, the description is the sole source of context. It fails to specify return format, data source, update frequency, or how the tool compares to similar siblings. Incomplete for an agent to use reliably.

Parameters: 3/5

No parameters exist, so the schema is trivial. The description adds some semantic value by listing data categories (fees, speed, supply, liquidity), but it does not explain the output format or how these are presented. The baseline for 0 parameters is 4, but the vagueness reduces the score.

Purpose: 3/5

The description mentions 'XRPL settlement lane' and lists topics (fees, speed, RLUSD supply, liquidity), but uses promotional language ('Why XRPL is the fastest...') rather than a specific verb and resource. It lacks clarity on exactly what the tool retrieves or computes, and among many sibling tools, its distinct function is fuzzy.

Usage Guidelines: 2/5

No guidance on when to use this tool versus siblings like xrpl_rlusd_supply or xrpl_rlusd. The description does not mention prerequisites, alternatives, or context, leaving the agent without decision help.

xrpl_token_check (Grade C)

XRPL issued currency risk check: issuer trust, supply, freeze capability, compliance

Parameters (JSON Schema)
issuer (optional): no description, no default
currency (optional): no description, no default
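One of the checks named above, freeze capability, is directly readable from an issuer's AccountRoot Flags field. The flag bit values below come from the XRPL ledger-format documentation; the helper itself is an illustrative sketch, not this tool's implementation:

```python
# AccountRoot flag bits, per the XRPL ledger-format documentation.
LSF_NO_FREEZE = 0x00200000       # issuer has permanently given up the ability to freeze
LSF_GLOBAL_FREEZE = 0x00400000   # all trust lines to this issuer are currently frozen
LSF_DEFAULT_RIPPLE = 0x00800000  # rippling enabled by default (typical for issuers)

def freeze_profile(account_flags: int) -> dict:
    """Interpret an issuer's Flags value (as returned by an account_info lookup)."""
    return {
        "can_freeze": not account_flags & LSF_NO_FREEZE,
        "globally_frozen": bool(account_flags & LSF_GLOBAL_FREEZE),
        "default_ripple": bool(account_flags & LSF_DEFAULT_RIPPLE),
    }
```

An issuer with LSF_NO_FREEZE set can never freeze holders, which is a meaningful input to any 'freeze capability' risk score.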
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of disclosure. It does not mention whether the tool is read-only, whether it requires any special permissions, if it has rate limits, or what side effects (none expected) occur. The term 'risk check' implies analysis, but mutability is unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single 11-word sentence that conveys core purpose without fluff. It could be slightly more structured, but it effectively front-loads the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and zero schema descriptions, the description is too brief. It does not explain the format or meaning of the risk check results, potential error cases (e.g., unknown issuer), or how to interpret the listed aspects (e.g., what a 'pass' means for compliance).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It lists 'issuer trust, supply, freeze capability, compliance' as aspects of the check but never explicitly explains that 'issuer' and 'currency' parameters correspond to the currency's issuer address and currency code. The mapping is left to inference.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
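The inferred mapping can be made concrete with a hedged sketch. The parameter names come from the listed schema; the values below are illustrative placeholders, not documented by the server.

```python
# Hypothetical arguments for xrpl_token_check, spelling out the mapping the
# description leaves implicit (values are placeholders, not real data):
#   issuer   -> XRPL address of the account that issues the currency
#   currency -> the currency code being checked
args = {
    "issuer": "rEXAMPLEIssuerAddressPlaceholder",  # placeholder r-address
    "currency": "USD",                             # currency code
}

print(args)
```

An agent that cannot make this inference has no schema descriptions to fall back on, which is why the 0% coverage is costly here.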

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states a specific verb ('check') and resource ('XRPL issued currency risk'), and lists covered aspects (issuer trust, supply, freeze capability, compliance). However, it does not differentiate from the sibling 'xrpl_compliance_check', which may overlap in purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'xrpl_compliance_check' or 'xrpl_rwa_scoring'. There are no prerequisites, when-not-to-use instructions, or context about required account setup.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xrpl_trust_lines (Grade: A)

XRPL trust line analysis: all issued currencies an account holds. Shows balances, issuers, compliance status.

Parameters (JSON Schema)
Name    | Required | Description             | Default
account | Yes      | XRPL account to analyze |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral traits. It indicates read-only analysis (it shows balances) but does not say whether it returns all trust lines or only non-zero ones, or whether there are rate limits or authentication requirements. Transparency is partial.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences, front-loaded with the tool's purpose and key outputs. No redundant words or phrases.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description explains that it shows balances, issuers, and compliance status, which covers the main return values. However, it could be more specific about the scope (all trust lines or only active ones) and any filtering. Given the single parameter, it is fairly complete but not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The only parameter, 'account', is described in the input schema as 'XRPL account to analyze,' which is clear. The tool description adds 'all issued currencies an account holds,' but does not provide additional meaning beyond the schema. Schema coverage is 100%, so the baseline is 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
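One way a schema like this could go beyond intent and also encode valid value ranges is a format constraint on the address. As a sketch: the description string is the one from the listed schema, but the regex for classic XRPL r-addresses is an assumption, not taken from the server.

```python
import re

# Sketch: the 'account' parameter enriched with a shape constraint.
# The regex below (base58 alphabet without 0, O, I, l, starting with 'r')
# is an assumed approximation of classic XRPL addresses, not the server's.
input_schema = {
    "type": "object",
    "properties": {
        "account": {
            "type": "string",
            "description": "XRPL account to analyze",
            "pattern": "^r[1-9A-HJ-NP-Za-km-z]{24,34}$",  # classic r-address shape
        },
    },
    "required": ["account"],
}

# A well-formed ledger address passes the shape check; garbage does not.
pattern = input_schema["properties"]["account"]["pattern"]
assert re.match(pattern, "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh")
assert not re.match(pattern, "not-an-address")
```

A constraint like this lets an agent reject malformed input before the call instead of learning the valid range from an error.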

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is for 'XRPL trust line analysis' and specifies the output: 'shows balances, issuers, compliance status.' This distinguishes it from sibling tools like xrpl_token_check or xrpl_gateway_balances, which focus on other aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when trust line data is needed, but does not explicitly state when this tool is preferred over siblings such as xrpl_token_check or xrpl_gateway_balances. No exclusions or alternatives are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
