
DPX — Institutional Cross-Border Settlement

Server Details

AI-native stablecoin settlement rail replacing SWIFT for institutional cross-border payments. 14 tools covering settlement quotes, execution, ESG scoring, oracle status, fee verification, competitor comparison, rail health, investment context, and MPP-gated macro intelligence. Settles via Base mainnet USDC at ~1.385% all-in.

Status
Healthy
Transport
Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (A)

Average 4.4/5 across 14 of 14 tools scored.

Server Coherence (A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose. For example, 'get_quote' and 'settle' are sequential and non-overlapping; 'get_analytics', 'get_oracle_status', 'get_rail_status', and 'get_reliability' each target a different aspect of protocol health. No two tools could be easily confused.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with underscores. Most use the 'get_' prefix (10 of 14); the rest use verbs like 'compare', 'verify', and 'settle'. The naming is predictable and intuitive.

Tool Count: 5/5

14 tools is appropriate for the domain of institutional cross-border settlement. The set covers manifest, quoting, settlement, fee verification, status lookup, analytics, oracle, rails, reliability, ESG, competitor comparison, and investment context without being excessive.

Completeness: 5/5

The tool surface thoroughly covers the settlement lifecycle: capability discovery (get_manifest), pricing (get_quote), integrity check (verify_fees), execution (settle), and audit (get_settlement_status). Additionally, it includes monitoring tools (get_analytics, get_oracle_status, get_rail_status, get_reliability) and supporting tools (get_esg_score, compare_to_competitors, get_intelligence, get_investment_context). No obvious gaps for the stated purpose.
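The lifecycle enumerated above (discovery, pricing, integrity check, execution, audit) can be sketched as an ordered client flow. The tool names come from the server; the `DpxClient` class, its `call` signature, and the `dpx_example` settlement ID are illustrative stand-ins, not part of the DPX API.

```python
# Sketch of the settlement lifecycle: discovery -> pricing -> integrity
# check -> execution -> audit. DpxClient and its call() method are
# hypothetical stubs; only the tool names come from the server.
class DpxClient:
    def __init__(self):
        self.calls = []

    def call(self, tool: str, **params) -> dict:
        # A real MCP client would dispatch to the DPX server here.
        self.calls.append(tool)
        return {"tool": tool, "params": params}

def settle_lifecycle(client: DpxClient, amount_usd: float) -> list:
    client.call("get_manifest")                        # capability discovery
    client.call("get_quote", amountUsd=amount_usd)     # pricing
    client.call("verify_fees", amountUsd=amount_usd)   # integrity check
    client.call("settle", amountUsd=amount_usd)        # execution
    client.call("get_settlement_status", settlementId="dpx_example")  # audit
    return client.calls
```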

Available Tools

11 tools
compare_to_competitors (A)
Read-only

Compare DPX settlement cost against Stripe cross-border (5.4% + $0.30), Wise (0.40–1.50%), Ripple ODL (0.20–0.50%), Lightspark, SWIFT (2.00–5.00%), PayPal, and bank wire. Returns dollar savings vs each at the current DPX all-in rate (1.385% typical). Also returns GENIUS Act and MiCA compliance status for each competitor.
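The savings arithmetic implied by this comparison can be sketched in a few lines. The rates are the figures quoted above; the function name and competitor keys are illustrative, not part of the tool's interface.

```python
# Illustrative savings arithmetic. Rates are the figures quoted in the
# tool description; DPX_ALL_IN is the 1.385% typical all-in rate.
DPX_ALL_IN = 0.01385

COMPETITORS = {
    "stripe": {"pct": 0.054, "fixed": 0.30},      # 5.4% + $0.30
    "wise_high": {"pct": 0.0150, "fixed": 0.0},   # Wise upper bound (1.50%)
    "swift_high": {"pct": 0.0500, "fixed": 0.0},  # SWIFT upper bound (5.00%)
}

def savings_vs(amount_usd: float, competitor: str) -> float:
    """Dollar savings of settling via DPX instead of the named competitor."""
    c = COMPETITORS[competitor]
    return round(amount_usd * c["pct"] + c["fixed"] - amount_usd * DPX_ALL_IN, 2)

# A $100,000 settlement vs Stripe cross-border:
# 100000 * 0.054 + 0.30 - 100000 * 0.01385 = 5400.30 - 1385.00 = 4015.30
```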

Parameters (JSON Schema)
- hasFx (optional): Cross-currency? Adds 0.40% FX fee.
- esgScore (optional): ESG score 0–100
- amountUsd (required): Settlement amount in USD

Output Schema

- dpx (optional)
- note (optional): Context note on comparison methodology
- amountUsd (optional): Settlement amount compared
- comparison (optional): Per-competitor comparison keyed by competitor ID

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and destructiveHint=false, so the description adds context by specifying the output: dollar savings and compliance status. However, it does not disclose potential data freshness, rate limits, or external dependencies, which would enhance transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with the main action (compare against competitors), followed by what is returned. No superfluous words; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of comparing against 7+ competitors and returning savings and compliance data, the description covers the outputs explicitly; the inputs are not named individually, but the schema documents them. An output schema exists, and the description adequately explains the return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptive parameter fields (e.g., hasFx with 'Cross-currency? Adds 0.40% FX fee.'). The description does not add meaning beyond the schema, so a baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool compares DPX settlement costs against a specific list of competitors, with a detailed verb 'compare' and resource 'DPX settlement cost against competitors'. It distinguishes itself from sibling tools like get_fee_schedule and get_quote by focusing on competitive comparison rather than just retrieving fees.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for cost comparison but does not explicitly state when to use this tool versus alternatives like get_fee_schedule. No exclusion criteria or when-not-to-use guidance is provided, leaving the agent to infer context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_analytics (A)
Read-only

Get live DPX performance analytics. Returns current stability score, ESG composite scores, live fee breakdown, oracle health across all data sources, and a settlement readiness assessment. Use for dashboards, reporting, and AI-driven monitoring of protocol health.

Parameters (JSON Schema)

No parameters

Output Schema

- fees (optional)
- esgScore (optional): Protocol ESG composite score 0–100
- timestamp (optional): ISO 8601 analytics timestamp
- oracleHealth (optional): Health status per oracle data source
- stabilityScore (optional): Current oracle stability score 0–100
- settlementReady (optional): True if conditions are suitable for settlement

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. Description adds value by clarifying that data is 'live' and 'current'. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences: the first states the function, the second lists the outputs, and the third describes usage. No unnecessary words. Front-loaded with purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no parameters and an output schema (present), the description covers purpose, returned data, and use cases completely. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, so no parameter documentation needed. Baseline 4 applies as schema coverage is 100% and description adds no redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Get live DPX performance analytics' with a specific verb and resource. It lists the exact data points returned (stability score, ESG composites, fee breakdown, oracle health, settlement readiness). It distinguishes from siblings like get_esg_score or get_fee_schedule which are more specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states use cases: 'dashboards, reporting, and AI-driven monitoring of protocol health.' While it doesn't explicitly contrast with siblings, the context implies this is the comprehensive analytics tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_esg_score (A)
Read-only · Idempotent

Get the live counterparty risk score (ESG-denominated) for a wallet address or the protocol default. Returns Environmental, Social, and Governance risk scores (0–100 each), composite weighted average, and the compliance-adjusted settlement fee percentage this score produces. Updated hourly from 6 institutional data sources: WorldBank, IMF, OECD, UN SDG API, ClimateMonitor, and SEC EDGAR. Required by EU SFDR Principal Adverse Impact reporting and CSRD financed emissions disclosure for institutional clients.
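The composite is described above as a weighted average of the three pillar scores, but the weights are not published here. A minimal sketch, assuming equal weights purely as a placeholder:

```python
# Equal weights are a placeholder assumption; the listing does not
# publish the actual weighting of the three pillars.
def composite_esg(environmental: float, social: float, governance: float,
                  weights=(1 / 3, 1 / 3, 1 / 3)) -> float:
    """Weighted average of the three pillar scores (each 0-100)."""
    return sum(w * s for w, s in zip(weights, (environmental, social, governance)))
```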

Parameters (JSON Schema)
- address (optional): Wallet address (0x...) to score. Omit for protocol default.

Output Schema

- tier (optional): ESG tier label
- feePct (optional): ESG fee percentage applied at settlement
- social (optional): Social score 0–100
- address (optional): Scored wallet address or "default"
- sources (optional): Data sources used
- esgScore (optional): Composite ESG score 0–100
- updatedAt (optional): ISO 8601 last update timestamp
- governance (optional): Governance score 0–100
- environmental (optional): Environmental score 0–100

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnly, idempotent, non-destructive. Description adds detailed behavioral traits: returns specific risk scores (0–100 range), composite weighted average, settlement fee percentage, update frequency (hourly), and data sources from 6 institutional sources.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is 4 sentences, front-loaded with main purpose, every sentence adds unique value (return details, update frequency, data sources, regulatory relevance). No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With output schema present, return values need no elaboration. Description covers purpose, parameter usage, data freshness, regulatory context, and data source authority. Complete for a read-only information tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters with description for 'address'. The tool description adds value by explaining that omitting the address results in the protocol default, which improves clarity beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Get' and resource 'counterparty risk score (ESG-denominated)', clearly states it scores wallet addresses or protocol default, and distinguishes from sibling tools like compare_to_competitors and get_analytics by focusing on ESG risk.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description specifies use cases (EU SFDR, CSRD reporting, institutional clients) and how to use (omit address for default). It lacks explicit when-not-to-use or alternatives among siblings, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_fee_schedule (A)
Read-only · Idempotent

Get the complete DPX fee schedule: all components (core/FX/ESG/license), volume discount tiers (Standard / Growth / Institutional / Sovereign), ESG fee table by score, scenario examples, and competitive benchmarks vs Stripe, Wise, SWIFT, and bank wire.

Parameters (JSON Schema)

No parameters

Output Schema

- fees (optional): Fee component definitions
- tiers (optional): Volume discount tiers
- examples (optional): Fee calculation examples
- benchmarks (optional): Competitor fee benchmarks

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the description adds valuable context about the rich output contents (components, tiers, benchmarks), enhancing transparency beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, well-structured sentence efficiently conveys the full scope of the tool's output without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description is complete for a parameterless read-only tool, covering all key aspects of the fee schedule.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Tool has zero parameters and schema coverage is 100%. Description does not need to explain parameters; it adds value by detailing output content.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states it retrieves the complete DPX fee schedule and lists all included components, clearly distinguishing it from sibling tools like compare_to_competitors or verify_fees.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the purpose is clear, the description lacks explicit guidance on when to use this tool versus alternatives. No mention of exclusions or context for choosing among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_manifest (A)
Read-only · Idempotent

Get the DPX protocol manifest. Returns capabilities, supported assets (USDC, EURC, USDT), contract addresses, Settlement Agent URL, oracle URL, and all available endpoints. Call this first to understand what DPX can do.

Parameters (JSON Schema)

No parameters

Output Schema

- agent (optional): Settlement Agent manifest: name, version, status
- oracle (optional): Oracle manifest: name, version, assets, endpoints

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds value by specifying the exact content returned (capabilities, supported assets, contract addresses, etc.), which goes beyond the annotation hints and provides concrete behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, all front-loaded with purpose: the first states the action and result, the second lists the returned content, and the third suggests usage order. Every word is necessary and nothing is extraneous.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no parameters, rich annotations, output schema present), the description is fully complete. It tells the agent what the tool does, what it returns, and suggests the usage order. No additional information is needed for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so schema description coverage is 100%. The description is not required to add parameter information, and it does not. Per guidelines, 0 parameters yields a baseline of 4, which is appropriate here.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get the DPX protocol manifest' and explains what it returns (capabilities, assets, endpoints). It also distinguishes itself by saying 'Call this first to understand what DPX can do,' setting it apart from sibling tools that likely perform more specific operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Call this first to understand what DPX can do,' which provides clear usage guidance. However, it does not explicitly state when not to use this tool or mention alternatives, though the context of being an initial discovery tool makes that less critical.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_oracle_status (A)
Read-only · Idempotent

Get full output from the latest DPX Stability Oracle run. Includes all 9 signal layers: climate, commodities, macro, FX, basket peg, yield curve, infrastructure, war/geopolitical risk, and USD structural health. Includes AI intelligence briefing.

Parameters (JSON Schema)

No parameters

Output Schema

- tier (optional): Oracle tier classification
- score (optional): Composite oracle score 0–100
- alerts (optional): Active oracle alerts
- status (optional): STABLE | CAUTION | UNSTABLE
- signals (optional): Individual signal scores for all 9 oracle layers
- briefing (optional): AI intelligence briefing text
- timestamp (optional): ISO 8601 oracle run timestamp
- chaosRegime (optional): True if extreme market conditions detected

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare the tool read-only, idempotent, and non-destructive. The description adds behavioral context by listing the 9 signal layers and AI briefing, going beyond annotations. It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero wasted words: the first states the main action, and the remaining two elaborate on the content. Front-loaded and concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters, strong annotations, and an output schema that is present, the description is complete enough. It lacks mention of caching or staleness, but openWorldHint suggests dynamic data. Overall, sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, and schema coverage is 100%. The description adds no parameter info because none is needed. Baseline for zero parameters is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action: 'Get full output from the latest DPX Stability Oracle run.' It specifies the 9 signal layers and AI briefing, distinguishing it from sibling tools like 'get_quote' or 'get_fee_schedule' which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when the latest full oracle output is needed, but it does not explicitly state when to use this tool versus alternatives, nor provide any exclusions or prerequisites. Given the number of siblings, explicit guidance would be beneficial.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_quote (A)
Read-only

Get a binding fee quote for a DPX settlement. Returns core fee (0.85%), FX fee (0.40% cross-currency), live ESG fee (0–0.50%), license fee (0.01%), total all-in rate, net amount, oracle status, AI reasoning, and a quoteId valid for 300 seconds. Always call this before settle.

Parameters (JSON Schema)
- hasFx (optional): True if source and destination currencies differ (adds 0.40% FX fee)
- esgScore (optional): Counterparty risk score 0–100 (ESG-denominated). Required for EU SFDR/CSRD compliance. Score 75 = 0.125% compliance-adjusted fee. Score 100 = 0% fee. Obtain from the get_esg_score tool.
- amountUsd (required): Settlement amount in USD
- monthlyVolumeUsd (optional): Monthly volume for discount tier. $1M+ = Institutional (20% off). $10M+ = Sovereign (30% off).
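Taken together, the parameter notes imply a simple fee model. A hedged sketch: the linear ESG mapping below is interpolated from the two documented points (score 75 → 0.125%, score 100 → 0%), and applying the volume discount to the total fee is an assumption; this is not the official DPX formula.

```python
# Assumed fee model: core 0.85% + FX 0.40% (if cross-currency) + linear
# ESG fee + license 0.01%, with the volume discount applied to the total.
def esg_fee_pct(esg_score: float) -> float:
    """Linear 0-0.50% fee: 0.50% at score 0, 0% at score 100 (assumed)."""
    return 0.005 * (100 - esg_score) / 100

def all_in_rate(esg_score: float, has_fx: bool,
                monthly_volume_usd: float = 0.0) -> float:
    rate = 0.0085 + (0.004 if has_fx else 0.0) + esg_fee_pct(esg_score) + 0.0001
    if monthly_volume_usd >= 10_000_000:    # Sovereign: 30% off
        rate *= 0.70
    elif monthly_volume_usd >= 1_000_000:   # Institutional: 20% off
        rate *= 0.80
    return rate

# Cross-currency, ESG score 75, Standard tier:
# 0.85% + 0.40% + 0.125% + 0.01% = 1.385%, the "typical" all-in rate.
```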

Output Schema

- fees (optional)
- tier (optional): Volume tier: Standard | Growth | Institutional | Sovereign
- quoteId (optional): Binding quote ID, valid 300 seconds
- amountUsd (optional): Input settlement amount in USD
- expiresAt (optional): ISO 8601 expiry timestamp
- reasoning (optional): AI reasoning for fee calculation
- oracleScore (optional): Oracle confidence 0–100
- netAmountUsd (optional): Net amount after all fees
- oracleStatus (optional): STABLE | CAUTION | UNSTABLE

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses key behavioral traits not captured in annotations: the quote is binding and valid for 300 seconds. It also enumerates fee components and states it must be called before settlement. Annotations indicate readOnly=true, which is consistent, and no contradictions exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three well-structured sentences: it front-loads the core purpose, then lists the return fields, then adds a key behavioral note. Each sentence serves a purpose with no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and rich input schema descriptions, the description covers the essential behavioral note (300s validity) and precondition (call before settle). It could be improved by noting that the quote is a simulation until settlement is confirmed, but overall it is fairly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with each parameter having detailed descriptions in the schema itself (e.g., esgScore usage, monthlyVolumeUsd tiers). The tool description adds no additional parameter semantics beyond the schema, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get a binding fee quote for a DPX settlement.' It specifies the resource and action, and lists return components. However, it does not explicitly differentiate from siblings like 'get_fee_schedule' or 'verify_fees', which is a minor gap.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes an implicit usage directive: 'Always call this before settle.' It also references the 'get_esg_score' tool in the parameter descriptions. However, no explicit when-not-to-use or alternative tools are mentioned, leaving room for ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
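Given the 300-second validity window and the ISO 8601 expiresAt field in get_quote's output schema, a client can check quote freshness before calling settle. A minimal sketch; the function name is illustrative:

```python
# Client-side freshness check for a quote's expiresAt timestamp.
# The 300 s window is enforced server-side; this only avoids sending
# a settle call with an already-expired quoteId.
from datetime import datetime, timezone
from typing import Optional

def quote_is_live(expires_at_iso: str, now: Optional[datetime] = None) -> bool:
    """True while the quote's expiresAt timestamp is still in the future."""
    expires = datetime.fromisoformat(expires_at_iso.replace("Z", "+00:00"))
    return (now or datetime.now(timezone.utc)) < expires
```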

get_reliability (A)
Read-only · Idempotent

Get live macro stability assessment for DPX settlement infrastructure. Returns institutional risk score (0–100), status (STABLE/CAUTION/UNSTABLE), peg deviation in basis points, AI reasoning, and PROCEED/CAUTION/HOLD recommendation. Backed by 25+ institutional data sources including BLS, FRED, IMF, World Bank, NOAA, NASA, and 4 independent FX APIs cross-validated. If UNSTABLE or peg deviation ≥ 50 bps, hold large settlements.

Parameters (JSON Schema)

No parameters

Output Schema

- status (optional): Current stability status
- outlook (optional): Short-term stability outlook
- reasoning (optional): AI reasoning for current status
- timestamp (optional): ISO 8601 assessment timestamp
- pegDeviation (optional): USDC peg deviation in basis points
- recommendation (optional): PROCEED | CAUTION | HOLD
- stabilityScore (optional): Oracle stability score 0–100

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds value by elaborating on the output data and data sources, and provides an actionable recommendation, without contradicting the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four concise sentences: the first states the purpose, the second details the outputs, the third lists data sources, and the fourth gives actionable guidance. No redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no parameters, the description fully covers the tool's functionality, outputs, and usage context. An output schema exists, so return values are documented elsewhere. Annotations complement behavioral aspects well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters with 100% schema coverage. The description correctly omits parameter details as none exist, so no additional parameter semantics are needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving a live macro stability assessment for DPX settlement infrastructure. It specifies the exact return fields (risk score, status, peg deviation, AI reasoning, recommendation) and data sources, distinguishing it from sibling tools like get_quote or get_settlement_status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when to use the tool's output: 'If UNSTABLE or peg deviation ≥ 50 bps, hold large settlements.' However, it does not mention when not to use this tool or suggest alternatives, though sibling tools are diverse.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
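The hold rule stated in get_reliability's description ("If UNSTABLE or peg deviation ≥ 50 bps, hold large settlements") maps directly onto its output fields. A minimal sketch, with field names mirroring the output schema:

```python
# Field names mirror the get_reliability output schema; the function
# itself is an illustrative client-side guard, not part of DPX.
def should_hold(assessment: dict) -> bool:
    """Hold large settlements if UNSTABLE or peg deviation >= 50 bps."""
    return (assessment.get("status") == "UNSTABLE"
            or assessment.get("pegDeviation", 0) >= 50)
```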

get_settlement_statusA
Read-only · Idempotent

Look up a previous DPX settlement by settlement ID. Returns the full audit record: status, tx hash, amounts, fees, oracle conditions at time of settlement, ESG score, Claude AI reasoning, and timestamp.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| settlementId | Yes | Settlement ID from the settle tool (format: dpx_...) | |

Output Schema (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| httpStatus | No | HTTP status from Settlement Agent |
| settlement | No | |
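As a sketch, a lookup through this tool could be issued as a standard MCP `tools/call` request. The JSON-RPC envelope below follows the MCP convention; the `dpx_` ID pattern comes from the parameter description, and the concrete ID shown is made up.

```python
# Hypothetical MCP tools/call payload for get_settlement_status.
# The envelope is standard JSON-RPC 2.0 as used by MCP; the example
# settlement ID is illustrative only.
import json
import re

def build_status_request(settlement_id: str, request_id: int = 1) -> str:
    """Build a tools/call request, rejecting IDs that are not dpx_... shaped."""
    if not re.fullmatch(r"dpx_[A-Za-z0-9]+", settlement_id):
        raise ValueError("expected a settlement ID of the form dpx_...")
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_settlement_status",
            "arguments": {"settlementId": settlement_id},
        },
    })

payload = build_status_request("dpx_abc123")
print(json.loads(payload)["params"]["name"])  # get_settlement_status
```

Validating the ID shape client-side is optional; the server will also reject malformed IDs, but failing fast saves a round trip.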
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, idempotentHint=true, destructiveHint=false, covering safety. The description adds value by listing the specific fields returned (status, tx hash, amounts, fees, oracle conditions, ESG score, AI reasoning, timestamp), providing a clearer understanding of the output beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that states the purpose and lists the return contents. No unnecessary words, perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has a simple input schema, output schema exists, and annotations cover safety, the description provides sufficient context for an agent to understand what the tool does and what it returns. It could be slightly more explicit about when not to use it, but for a lookup tool, it is complete enough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description of settlementId. The tool description adds context that the ID comes from the settle tool and format 'dpx_...', which goes beyond the schema description. This helps the agent understand where to obtain the ID.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool looks up a DPX settlement by ID and returns a full audit record. It distinguishes itself from sibling tools like get_esg_score or get_fee_schedule by specifying it returns a comprehensive record including status, tx hash, amounts, fees, oracle conditions, ESG score, AI reasoning, and timestamp.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly mentions the input requirement (settlement ID from the settle tool) and implies it's for looking up previous settlements. However, it does not explicitly state when to use this tool over alternatives like get_esg_score or get_fee_schedule, though the context suggests this is the comprehensive audit record tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

settle (A)
Destructive

Execute a DPX cross-border settlement. The Settlement Agent checks oracle conditions, reasons with Claude AI, and executes on-chain (or returns sandbox result if sandbox=true). Returns settlement ID, status (executed/held/sandbox/failed), tx hash, net amount, fees, oracle conditions, and AI reasoning. Default: sandbox=true — set sandbox=false only for live execution.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| amount | Yes | Amount in source currency units | |
| purpose | No | Payment purpose: intercompany, vendor-payment, payroll, treasury | |
| quoteId | No | Pre-fetched quoteId from get_quote (optional — agent fetches live if omitted) | |
| sandbox | No | Sandbox mode — real calculations, no on-chain execution. | true |
| esgScore | No | ESG score override 0–100 (testing only) | |
| referenceId | No | External reference ID (invoice number, TMS ID, etc.) | |
| sourceCurrency | Yes | Source currency: USD, EUR, GBP, USDC, EURC | |
| recipientAddress | Yes | On-chain recipient wallet address (0x...) | |
| destinationCurrency | Yes | Destination currency: USD, EUR, GBP, USDC, EURC | |

Output Schema (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| result | No | |
| summary | No | Human-readable settlement outcome summary |
| httpStatus | No | HTTP status from Settlement Agent |
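The sandbox-first default can be made explicit in the calling code. The sketch below assembles arguments for the settle tool; the helper name and the validation are illustrative, while the currency list, the 0x... address requirement, and the sandbox default come from the parameter descriptions.

```python
# Sketch of sandbox-first argument assembly for the settle tool.
# build_settle_args is a hypothetical helper; allowed currencies and
# the 0x-prefixed recipient format follow the parameter table.

ALLOWED_CURRENCIES = {"USD", "EUR", "GBP", "USDC", "EURC"}

def build_settle_args(amount: float, source: str, dest: str,
                      recipient: str, live: bool = False) -> dict:
    """Assemble settle arguments, defaulting to sandbox execution."""
    if source not in ALLOWED_CURRENCIES or dest not in ALLOWED_CURRENCIES:
        raise ValueError(f"currencies must be one of {sorted(ALLOWED_CURRENCIES)}")
    if not recipient.startswith("0x"):
        raise ValueError("recipientAddress must be a 0x... wallet address")
    return {
        "amount": amount,
        "sourceCurrency": source,
        "destinationCurrency": dest,
        "recipientAddress": recipient,
        # Live execution only when the caller explicitly opts in,
        # mirroring the tool's sandbox=true default.
        "sandbox": not live,
    }

args = build_settle_args(250_000, "USD", "EURC",
                         "0xAbC0000000000000000000000000000000000000")
print(args["sandbox"])  # True
```

Requiring an explicit `live=True` keeps accidental on-chain execution behind a deliberate flag flip, which matches the description's warning to set sandbox=false only for live runs.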
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true and readOnlyHint=false, aligning with the description's mention of on-chain execution. The description adds behavioral details: uses AI reasoning, checks oracle conditions, can be sandboxed. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single paragraph, front-loaded with main action, then process, return values, and usage instruction. Every sentence adds value; no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (oracle, AI, on-chain), the description covers the key aspects: flow, return fields, sandbox warning, optional parameter behavior. Output fields are listed inline, offsetting the need for output-schema visibility.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, so baseline is 3. The description adds value by explaining the sandbox default behavior and quoteId optionality, which are not fully captured in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool executes a DPX cross-border settlement, detailing the process (check oracle conditions, reason with Claude AI, execute on-chain or sandbox). It lists returned fields, distinguishing it from siblings like get_quote or get_settlement_status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance: sandbox=true by default, only set false for live execution. Mentions quoteId is optional and agent can fetch live. While it doesn't explicitly compare to siblings, the context implies settlement execution.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_fees (A)
Read-only · Idempotent

Verify that the off-chain fee quote matches what the on-chain DPXSettlementRouter contract will charge. Returns feesMatch (true/false). Call after get_quote and before settle to confirm fee integrity.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| hasFx | No | Cross-currency settlement? | |
| esgScore | No | ESG score 0–100 | |
| amountUsd | Yes | Settlement amount in USD | |

Output Schema (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| delta | No | Absolute difference in basis points |
| feesMatch | No | True if off-chain quote matches on-chain contract |
| onChainFee | No | |
| offChainFee | No | |
| recommendation | No | PROCEED \| INVESTIGATE |
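Interpreting a verify_fees result before calling settle can be reduced to a small predicate. This is a sketch under the assumption that the result dict carries the output fields shown above (feesMatch, delta in basis points, recommendation); the 1 bps tolerance is a made-up policy value, not part of the tool.

```python
# Minimal interpretation of a verify_fees result. Field names follow
# the output schema above; the max_delta_bps tolerance is an assumed
# caller-side policy, not something the tool defines.

def fee_check_ok(result: dict, max_delta_bps: float = 1.0) -> bool:
    """Proceed to settle only when fees match and the recommendation agrees."""
    return (
        result.get("feesMatch") is True
        and result.get("recommendation") == "PROCEED"
        and result.get("delta", 0.0) <= max_delta_bps
    )

print(fee_check_ok({"feesMatch": True, "delta": 0.2,
                    "recommendation": "PROCEED"}))      # True
print(fee_check_ok({"feesMatch": False, "delta": 7.5,
                    "recommendation": "INVESTIGATE"}))  # False
```

Checking both the boolean and the recommendation is defensive: if the two ever disagree, the safe reading is to investigate rather than settle.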
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, not destructive. The description adds that it returns a boolean and acts as an integrity check, which is useful context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short sentences with no wasted words. All information is front-loaded and relevant.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and well-documented parameters, the description provides sufficient context for an agent to use the tool correctly, including its role in a workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema descriptions cover all parameters (100%). The description adds high-level business context ('off-chain fee quote', 'on-chain DPXSettlementRouter contract') that the schema lacks.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the verb 'verify', the resource (fee quote vs on-chain charge), and the return value (feesMatch boolean). It distinguishes from siblings like get_quote and settle.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides clear ordering: 'Call after get_quote and before settle'. This tells when to use it. It doesn't explicitly say when not to, but the sequence is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
