DPX — Institutional Cross-Border Settlement
Server Details
AI-native stablecoin settlement rail replacing SWIFT for institutional cross-border payments. 14 tools covering settlement quotes, execution, ESG scoring, oracle status, fee verification, competitor comparison, rail health, investment context, and MPP-gated macro intelligence. Settles via Base mainnet USDC at ~1.385% all-in.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 14 of 14 tools scored.
Each tool has a clearly distinct purpose. For example, 'get_quote' and 'settle' are sequential and non-overlapping; 'get_analytics', 'get_oracle_status', 'get_rail_status', and 'get_reliability' each target a different aspect of protocol health. No two tools could be easily confused.
All tool names follow a consistent verb_noun pattern with underscores. The majority use 'get_' prefix (10 of 14), and the remaining use verbs like 'compare', 'verify', and 'settle'. The naming is predictable and intuitive.
14 tools is appropriate for the domain of institutional cross-border settlement. The set covers manifest, quoting, settlement, fee verification, status lookup, analytics, oracle, rails, reliability, ESG, competitor comparison, and investment context without being excessive.
The tool surface thoroughly covers the settlement lifecycle: capability discovery (get_manifest), pricing (get_quote), integrity check (verify_fees), execution (settle), and audit (get_settlement_status). Additionally, it includes monitoring tools (get_analytics, get_oracle_status, get_rail_status, get_reliability) and supporting tools (get_esg_score, compare_to_competitors, get_intelligence, get_investment_context). No obvious gaps for the stated purpose.
Available Tools
11 tools

compare_to_competitors (A, Read-only)
Compare DPX settlement cost against Stripe cross-border (5.4% + $0.30), Wise (0.40–1.50%), Ripple ODL (0.20–0.50%), Lightspark, SWIFT (2.00–5.00%), PayPal, and bank wire. Returns dollar savings vs each at the current DPX all-in rate (1.385% typical). Also returns GENIUS Act and MiCA compliance status for each competitor.
| Name | Required | Description | Default |
|---|---|---|---|
| hasFx | No | Cross-currency? Adds 0.40% FX fee. | |
| esgScore | No | ESG score 0–100 | |
| amountUsd | Yes | Settlement amount in USD | |
Output Schema
| Name | Required | Description |
|---|---|---|
| dpx | No | |
| note | No | Context note on comparison methodology |
| amountUsd | No | Settlement amount compared |
| comparison | No | Per-competitor comparison keyed by competitor ID |
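The savings arithmetic the description implies can be sketched client-side. This is a hypothetical reconstruction, not the server's implementation: `savings_vs`, the competitor table, and the choice of the upper bound for ranged pricing are all assumptions; only the rates themselves come from the description above.

```python
# Hypothetical sketch of the implied savings math:
# savings = competitor cost - DPX cost at the typical 1.385% all-in rate.
# Rates are from the tool description; ranged rates use their upper bound.

DPX_ALL_IN = 0.01385  # ~1.385% typical all-in rate

COMPETITORS = {
    "stripe": {"pct": 0.0540, "fixed_usd": 0.30},     # 5.4% + $0.30
    "wise": {"pct": 0.0150, "fixed_usd": 0.0},        # top of 0.40-1.50%
    "ripple_odl": {"pct": 0.0050, "fixed_usd": 0.0},  # top of 0.20-0.50%
    "swift": {"pct": 0.0500, "fixed_usd": 0.0},       # top of 2.00-5.00%
}

def savings_vs(amount_usd: float) -> dict[str, float]:
    """Dollar savings of DPX vs each competitor on one settlement."""
    dpx_cost = amount_usd * DPX_ALL_IN
    return {
        name: round(amount_usd * c["pct"] + c["fixed_usd"] - dpx_cost, 2)
        for name, c in COMPETITORS.items()
    }

# On a $100k settlement, DPX saves about $4,015 vs Stripe at these rates
assert savings_vs(100_000)["stripe"] == 4015.30
```

A negative result (e.g. vs Ripple ODL at its lower fee band) would simply mean the competitor is cheaper at that rate.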
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false, so the description adds context by specifying the output: dollar savings and compliance status. However, it does not disclose potential data freshness, rate limits, or external dependencies, which would enhance transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with the main action (compare against competitors), followed by what is returned. No superfluous words; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of comparing against 7+ competitors and returning savings and compliance data, the description covers the outputs explicitly; the inputs are not mentioned directly, but the input schema documents them. An output schema exists, and the description adequately explains the return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptive parameter fields (e.g., hasFx with 'Cross-currency? Adds 0.40% FX fee.'). The description does not add meaning beyond the schema, so a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares DPX settlement costs against a specific list of competitors, with a detailed verb 'compare' and resource 'DPX settlement cost against competitors'. It distinguishes itself from sibling tools like get_fee_schedule and get_quote by focusing on competitive comparison rather than just retrieving fees.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for cost comparison but does not explicitly state when to use this tool versus alternatives like get_fee_schedule. No exclusion criteria or when-not-to-use guidance is provided, leaving the agent to infer context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_analytics (A, Read-only)
Get live DPX performance analytics. Returns current stability score, ESG composite scores, live fee breakdown, oracle health across all data sources, and a settlement readiness assessment. Use for dashboards, reporting, and AI-driven monitoring of protocol health.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| fees | No | |
| esgScore | No | Protocol ESG composite score 0–100 |
| timestamp | No | ISO 8601 analytics timestamp |
| oracleHealth | No | Health status per oracle data source |
| stabilityScore | No | Current oracle stability score 0–100 |
| settlementReady | No | True if conditions are suitable for settlement |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. Description adds value by clarifying that data is 'live' and 'current'. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first describes function and output, second describes usage. No unnecessary words. Front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no parameters and an output schema (present), the description covers purpose, returned data, and use cases completely. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so no parameter documentation needed. Baseline 4 applies as schema coverage is 100% and description adds no redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Get live DPX performance analytics' with a specific verb and resource. It lists the exact data points returned (stability score, ESG composites, fee breakdown, oracle health, settlement readiness). It distinguishes from siblings like get_esg_score or get_fee_schedule which are more specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states use cases: 'dashboards, reporting, and AI-driven monitoring of protocol health.' While it doesn't explicitly contrast with siblings, the context implies this is the comprehensive analytics tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_esg_score (A, Read-only, Idempotent)
Get the live counterparty risk score (ESG-denominated) for a wallet address or the protocol default. Returns Environmental, Social, and Governance risk scores (0–100 each), composite weighted average, and the compliance-adjusted settlement fee percentage this score produces. Updated hourly from 6 institutional data sources: WorldBank, IMF, OECD, UN SDG API, ClimateMonitor, and SEC EDGAR. Required by EU SFDR Principal Adverse Impact reporting and CSRD financed emissions disclosure for institutional clients.
| Name | Required | Description | Default |
|---|---|---|---|
| address | No | Wallet address (0x...) to score. Omit for protocol default. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| tier | No | ESG tier label |
| feePct | No | ESG fee percentage applied at settlement |
| social | No | Social score 0–100 |
| address | No | Scored wallet address or "default" |
| sources | No | Data sources used |
| esgScore | No | Composite ESG score 0–100 |
| updatedAt | No | ISO 8601 last update timestamp |
| governance | No | Governance score 0–100 |
| environmental | No | Environmental score 0–100 |
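The two anchor points in the description (score 75 → 0.125% fee, score 100 → 0% fee) together with the 0–0.50% range are consistent with a simple linear mapping. A minimal sketch under that assumption; the live protocol may compute the fee differently:

```python
def esg_fee_pct(esg_score: float) -> float:
    """Compliance-adjusted ESG fee (%) implied by the documented anchors:
    score 100 -> 0%, score 75 -> 0.125%, full range 0-0.50%.
    Assumes a linear mapping (an inference, not a documented formula)."""
    if not 0 <= esg_score <= 100:
        raise ValueError("esg_score must be in 0-100")
    return 0.50 * (1 - esg_score / 100)

assert esg_fee_pct(100) == 0.0
assert esg_fee_pct(75) == 0.125
assert esg_fee_pct(0) == 0.50
```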
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly, idempotent, non-destructive. Description adds detailed behavioral traits: returns specific risk scores (0–100 range), composite weighted average, settlement fee percentage, update frequency (hourly), and data sources from 6 institutional sources.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is 4 sentences, front-loaded with main purpose, every sentence adds unique value (return details, update frequency, data sources, regulatory relevance). No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With output schema present, return values need no elaboration. Description covers purpose, parameter usage, data freshness, regulatory context, and data source authority. Complete for a read-only information tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with description for 'address'. The tool description adds value by explaining that omitting the address results in the protocol default, which improves clarity beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' and resource 'counterparty risk score (ESG-denominated)', clearly states it scores wallet addresses or protocol default, and distinguishes from sibling tools like compare_to_competitors and get_analytics by focusing on ESG risk.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies use cases (EU SFDR, CSRD reporting, institutional clients) and how to use (omit address for default). It lacks explicit when-not-to-use or alternatives among siblings, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_fee_schedule (A, Read-only, Idempotent)
Get the complete DPX fee schedule: all components (core/FX/ESG/license), volume discount tiers (Standard / Growth / Institutional / Sovereign), ESG fee table by score, scenario examples, and competitive benchmarks vs Stripe, Wise, SWIFT, and bank wire.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| fees | No | Fee component definitions |
| tiers | No | Volume discount tiers |
| examples | No | Fee calculation examples |
| benchmarks | No | Competitor fee benchmarks |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the description adds valuable context about the rich output contents (components, tiers, benchmarks), enhancing transparency beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, well-structured sentence efficiently conveys the full scope of the tool's output without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description is complete for a parameterless read-only tool, covering all key aspects of the fee schedule.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has zero parameters and schema coverage is 100%. Description does not need to explain parameters; it adds value by detailing output content.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states it retrieves the complete DPX fee schedule and lists all included components, clearly distinguishing it from sibling tools like compare_to_competitors or verify_fees.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the purpose is clear, the description lacks explicit guidance on when to use this tool versus alternatives. No mention of exclusions or context for choosing among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_manifest (A, Read-only, Idempotent)
Get the DPX protocol manifest. Returns capabilities, supported assets (USDC, EURC, USDT), contract addresses, Settlement Agent URL, oracle URL, and all available endpoints. Call this first to understand what DPX can do.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| agent | No | Settlement Agent manifest: name, version, status |
| oracle | No | Oracle manifest: name, version, assets, endpoints |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds value by specifying the exact content returned (capabilities, supported assets, contract addresses, etc.), which goes beyond the annotation hints and provides concrete behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, both front-loaded with purpose. The first sentence states the action and result, the second provides additional context. Every word is necessary and nothing is extraneous.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, rich annotations, output schema present), the description is fully complete. It tells the agent what the tool does, what it returns, and suggests the usage order. No additional information is needed for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema description coverage is 100%. The description is not required to add parameter information, and it does not. Per guidelines, 0 parameters yields a baseline of 4, which is appropriate here.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get the DPX protocol manifest' and explains what it returns (capabilities, assets, endpoints). It also distinguishes itself by saying 'Call this first to understand what DPX can do,' setting it apart from sibling tools that likely perform more specific operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Call this first to understand what DPX can do,' which provides clear usage guidance. However, it does not explicitly state when not to use this tool or mention alternatives, though the context of being an initial discovery tool makes that less critical.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_oracle_status (A, Read-only, Idempotent)
Get full output from the latest DPX Stability Oracle run. Includes all 9 signal layers: climate, commodities, macro, FX, basket peg, yield curve, infrastructure, war/geopolitical risk, and USD structural health. Includes AI intelligence briefing.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| tier | No | Oracle tier classification |
| score | No | Composite oracle score 0–100 |
| alerts | No | Active oracle alerts |
| status | No | STABLE \| CAUTION \| UNSTABLE |
| signals | No | Individual signal scores for all 9 oracle layers |
| briefing | No | AI intelligence briefing text |
| timestamp | No | ISO 8601 oracle run timestamp |
| chaosRegime | No | True if extreme market conditions detected |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool read-only, idempotent, and non-destructive. The description adds behavioral context by listing the 9 signal layers and AI briefing, going beyond annotations. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero wasted words. The first sentence states the main action, and the second elaborates on the content. Perfectly front-loaded and concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, strong annotations, and a present output schema, the description is complete enough. It lacks mention of caching or staleness, but openWorldHint suggests dynamic data. Overall, sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, and schema coverage is 100%. The description adds no parameter info because none is needed. Baseline for zero parameters is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action: 'Get full output from the latest DPX Stability Oracle run.' It specifies the 9 signal layers and AI briefing, distinguishing it from sibling tools like 'get_quote' or 'get_fee_schedule' which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when the latest full oracle output is needed, but it does not explicitly state when to use this tool versus alternatives, nor provide any exclusions or prerequisites. Given the number of siblings, explicit guidance would be beneficial.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_quote (A, Read-only)
Get a binding fee quote for a DPX settlement. Returns core fee (0.85%), FX fee (0.40% cross-currency), live ESG fee (0–0.50%), license fee (0.01%), total all-in rate, net amount, oracle status, AI reasoning, and a quoteId valid for 300 seconds. Always call this before settle.
| Name | Required | Description | Default |
|---|---|---|---|
| hasFx | No | True if source and destination currencies differ (adds 0.40% FX fee) | |
| esgScore | No | Counterparty risk score 0–100 (ESG-denominated). Required for EU SFDR/CSRD compliance. Score 75 = 0.125% compliance-adjusted fee. Score 100 = 0% fee. Obtain from get_esg_score tool. | |
| amountUsd | Yes | Settlement amount in USD | |
| monthlyVolumeUsd | No | Monthly volume for discount tier. $1M+ = Institutional (20% off). $10M+ = Sovereign (30% off). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| fees | No | |
| tier | No | Volume tier: Standard \| Growth \| Institutional \| Sovereign |
| quoteId | No | Binding quote ID, valid 300 seconds |
| amountUsd | No | Input settlement amount in USD |
| expiresAt | No | ISO 8601 expiry timestamp |
| reasoning | No | AI reasoning for fee calculation |
| oracleScore | No | Oracle confidence 0–100 |
| netAmountUsd | No | Net amount after all fees |
| oracleStatus | No | STABLE \| CAUTION \| UNSTABLE |
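The documented components compose to the ~1.385% typical rate (0.85 core + 0.40 FX + 0.125 ESG at score 75 + 0.01 license). A hedged sketch of that arithmetic; the linear ESG mapping and the assumption that the volume discount scales the full total are inferences, not documented behavior:

```python
def quote_all_in_pct(has_fx: bool, esg_score: float,
                     monthly_volume_usd: float = 0.0) -> float:
    """All-in fee (%) implied by the documented components.
    Assumptions: ESG fee is linear between the documented anchor points,
    and the volume discount applies to the whole total."""
    core, license_fee = 0.85, 0.01
    fx = 0.40 if has_fx else 0.0
    esg = 0.50 * (1 - esg_score / 100)  # score 75 -> 0.125%, 100 -> 0%
    total = core + fx + esg + license_fee
    if monthly_volume_usd >= 10_000_000:
        total *= 0.70   # Sovereign: 30% off
    elif monthly_volume_usd >= 1_000_000:
        total *= 0.80   # Institutional: 20% off
    return round(total, 4)

# Cross-currency at ESG score 75, no discount -> the ~1.385% typical rate
assert quote_all_in_pct(True, 75) == 1.385
```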
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses key behavioral traits not captured in annotations: the quote is binding and valid for 300 seconds. It also enumerates fee components and states it must be called before settlement. Annotations indicate readOnly=true, which is consistent, and no contradictions exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the core purpose, then lists the return fields and a key behavioral note. Every sentence serves a purpose, with no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and rich input schema descriptions, the description covers the essential behavioral note (300s validity) and precondition (call before settle). It could be improved by noting that the quote is a simulation until settlement is confirmed, but overall it is fairly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with each parameter having detailed descriptions in the schema itself (e.g., esgScore usage, monthlyVolumeUsd tiers). The tool description adds no additional parameter semantics beyond the schema, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get a binding fee quote for a DPX settlement.' It specifies the resource and action, and lists return components. However, it does not explicitly differentiate from siblings like 'get_fee_schedule' or 'verify_fees', which is a minor gap.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes an implicit usage directive: 'Always call this before settle.' It also references the 'get_esg_score' tool in the parameter descriptions. However, no explicit when-not-to-use or alternative tools are mentioned, leaving room for ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_reliability (A, Read-only, Idempotent)
Get live macro stability assessment for DPX settlement infrastructure. Returns institutional risk score (0–100), status (STABLE/CAUTION/UNSTABLE), peg deviation in basis points, AI reasoning, and PROCEED/CAUTION/HOLD recommendation. Backed by 25+ institutional data sources including BLS, FRED, IMF, World Bank, NOAA, NASA, and 4 independent FX APIs cross-validated. If UNSTABLE or peg deviation ≥ 50 bps, hold large settlements.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | No | Current stability status |
| outlook | No | Short-term stability outlook |
| reasoning | No | AI reasoning for current status |
| timestamp | No | ISO 8601 assessment timestamp |
| pegDeviation | No | USDC peg deviation in basis points |
| recommendation | No | PROCEED \| CAUTION \| HOLD |
| stabilityScore | No | Oracle stability score 0–100 |
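The hold rule in the description is mechanical enough to sketch as a client-side gate. A minimal sketch; `should_hold` and the idea of flagging only "large" settlements via a boolean are illustrative choices beyond the stated rule:

```python
def should_hold(status: str, peg_deviation_bps: float,
                is_large: bool = True) -> bool:
    """Gate from the tool description: hold large settlements if the
    status is UNSTABLE or the peg deviation is >= 50 bps."""
    return is_large and (status == "UNSTABLE" or peg_deviation_bps >= 50)

assert should_hold("UNSTABLE", 5)
assert should_hold("STABLE", 62)
assert not should_hold("STABLE", 12)
```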
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds value by elaborating on the output data and data sources, and provides an actionable recommendation, without contradicting the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences: first states purpose, second details outputs and data sources, third provides actionable guidance. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no parameters, the description fully covers the tool's functionality, outputs, and usage context. An output schema exists, so return values are documented elsewhere. Annotations complement behavioral aspects well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters with 100% schema coverage. The description correctly omits parameter details as none exist, so no additional parameter semantics are needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving a live macro stability assessment for DPX settlement infrastructure. It specifies the exact return fields (risk score, status, peg deviation, AI reasoning, recommendation) and data sources, distinguishing it from sibling tools like get_quote or get_settlement_status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use the tool's output: 'If UNSTABLE or peg deviation ≥ 50 bps, hold large settlements.' However, it does not mention when not to use this tool or suggest alternatives, though sibling tools are diverse.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_settlement_status (A, Read-only, Idempotent)
Look up a previous DPX settlement by settlement ID. Returns the full audit record: status, tx hash, amounts, fees, oracle conditions at time of settlement, ESG score, Claude AI reasoning, and timestamp.
| Name | Required | Description | Default |
|---|---|---|---|
| settlementId | Yes | Settlement ID from the settle tool (format: dpx_...) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| httpStatus | No | HTTP status from Settlement Agent |
| settlement | No | Full settlement audit record |
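Since the settlement ID is documented only as "dpx_...", a caller can validate the prefix before issuing the lookup. A minimal sketch, assuming an MCP-style tools/call argument shape; the exact characters allowed after the prefix are an assumption.

```python
import re

# Assumed ID shape: the docs only specify the "dpx_" prefix for IDs
# minted by the settle tool.
DPX_ID_RE = re.compile(r"^dpx_[A-Za-z0-9_-]+$")

def build_status_call(settlement_id: str) -> dict:
    """Build arguments for get_settlement_status, rejecting IDs that
    do not match the documented dpx_ prefix."""
    if not DPX_ID_RE.match(settlement_id):
        raise ValueError(f"not a DPX settlement ID: {settlement_id!r}")
    return {"name": "get_settlement_status",
            "arguments": {"settlementId": settlement_id}}
```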
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, idempotentHint=true, destructiveHint=false, covering safety. The description adds value by listing the specific fields returned (status, tx hash, amounts, fees, oracle conditions, ESG score, AI reasoning, timestamp), providing a clearer understanding of the output beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that states the purpose and lists the return contents. No unnecessary words, perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has a simple input schema, output schema exists, and annotations cover safety, the description provides sufficient context for an agent to understand what the tool does and what it returns. It could be slightly more explicit about when not to use it, but for a lookup tool, it is complete enough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description of settlementId. The tool description adds context that the ID comes from the settle tool and format 'dpx_...', which goes beyond the schema description. This helps the agent understand where to obtain the ID.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool looks up a DPX settlement by ID and returns a full audit record. It distinguishes itself from sibling tools like get_esg_score or get_fee_schedule by specifying it returns a comprehensive record including status, tx hash, amounts, fees, oracle conditions, ESG score, AI reasoning, and timestamp.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly mentions the input requirement (settlement ID from the settle tool) and implies it's for looking up previous settlements. However, it does not explicitly state when to use this tool over alternatives like get_esg_score or get_fee_schedule, though the context suggests this is the comprehensive audit record tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
settle (Destructive)
Execute a DPX cross-border settlement. The Settlement Agent checks oracle conditions, reasons with Claude AI, and executes on-chain (or returns sandbox result if sandbox=true). Returns settlement ID, status (executed/held/sandbox/failed), tx hash, net amount, fees, oracle conditions, and AI reasoning. Default: sandbox=true — set sandbox=false only for live execution.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Amount in source currency units | |
| purpose | No | Payment purpose: intercompany, vendor-payment, payroll, treasury | |
| quoteId | No | Pre-fetched quoteId from get_quote (optional — agent fetches live if omitted) | |
| sandbox | No | Sandbox mode — real calculations, no on-chain execution. Default: true. | |
| esgScore | No | ESG score override 0–100 (testing only) | |
| referenceId | No | External reference ID (invoice number, TMS ID, etc.) | |
| sourceCurrency | Yes | Source currency: USD, EUR, GBP, USDC, EURC | |
| recipientAddress | Yes | On-chain recipient wallet address (0x...) | |
| destinationCurrency | Yes | Destination currency: USD, EUR, GBP, USDC, EURC | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | No | |
| summary | No | Human-readable settlement outcome summary |
| httpStatus | No | HTTP status from Settlement Agent |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, aligning with the description's mention of on-chain execution. The description adds behavioral details: uses AI reasoning, checks oracle conditions, can be sandboxed. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single paragraph, front-loaded with main action, then process, return values, and usage instruction. Every sentence adds value; no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (oracle checks, AI reasoning, on-chain execution), the description covers the key aspects: flow, return fields, the sandbox warning, and optional-parameter behavior. The returned fields are listed, which offsets the need to inspect the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, so baseline is 3. The description adds value by explaining the sandbox default behavior and quoteId optionality, which are not fully captured in the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool executes a DPX cross-border settlement, detailing the process (check oracle conditions, reason with Claude AI, execute on-chain or sandbox). It lists returned fields, distinguishing it from siblings like get_quote or get_settlement_status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance: sandbox=true by default, only set false for live execution. Mentions quoteId is optional and agent can fetch live. While it doesn't explicitly compare to siblings, the context implies settlement execution.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_fees (Read-only, Idempotent)
Verify that the off-chain fee quote matches what the on-chain DPXSettlementRouter contract will charge. Returns feesMatch (true/false). Call after get_quote and before settle to confirm fee integrity.
| Name | Required | Description | Default |
|---|---|---|---|
| hasFx | No | Cross-currency settlement? | |
| esgScore | No | ESG score 0–100 | |
| amountUsd | Yes | Settlement amount in USD | |
Output Schema
| Name | Required | Description |
|---|---|---|
| delta | No | Absolute difference in basis points |
| feesMatch | No | True if off-chain quote matches on-chain contract |
| onChainFee | No | |
| offChainFee | No | |
| recommendation | No | PROCEED \| INVESTIGATE |
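The output schema above suggests a simple comparison shape, which can be mirrored client-side to sanity-check a verify_fees result. A hedged sketch: the tolerance parameter is an assumption (the server may use exact equality), and this does not replicate any on-chain read.

```python
def fee_check(on_chain_bps: float, off_chain_bps: float,
              tolerance_bps: float = 0.0) -> dict:
    """Mirror the verify_fees output shape: compare the off-chain quote
    against the on-chain fee and recommend PROCEED or INVESTIGATE."""
    delta = abs(on_chain_bps - off_chain_bps)
    fees_match = delta <= tolerance_bps
    return {"feesMatch": fees_match, "delta": delta,
            "recommendation": "PROCEED" if fees_match else "INVESTIGATE"}
```

At the advertised ~1.385% all-in rate (138.5 bps), a matching on-chain fee yields PROCEED; any drift yields INVESTIGATE.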
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, not destructive. The description adds that it returns a boolean and acts as an integrity check, which is useful context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. All information is front-loaded and relevant.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and well-documented parameters, the description provides sufficient context for an agent to use the tool correctly, including its role in a workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema descriptions cover all parameters (100%). The description adds high-level business context ('off-chain fee quote', 'on-chain DPXSettlementRouter contract') that the schema lacks.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the verb 'verify', the resource (fee quote vs on-chain charge), and the return value (feesMatch boolean). It distinguishes from siblings like get_quote and settle.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides clear ordering: 'Call after get_quote and before settle'. This tells when to use it. It doesn't explicitly say when not to, but the sequence is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
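Before publishing, an owner can check that the manifest body carries the expected maintainer email, since verification fails if it does not match the Glama account. A minimal sketch; the function name and the idea of pre-checking locally are illustrative, not part of Glama's tooling.

```python
import json

def manifest_claims_email(raw: str, account_email: str) -> bool:
    """Return True if a /.well-known/glama.json body lists the given
    email among its maintainers, per the claim instructions above."""
    doc = json.loads(raw)
    return any(m.get("email") == account_email
               for m in doc.get("maintainers", []))
```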
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!