Glama

Server Details

ISO20022Oracle - 12 ISO 20022 tools: pacs/pain/camt parsing, MX validation, MT migration.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: ToolOracle/iso20022oracle
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade B

Average 3.6/5 across 12 of 12 tools scored. Lowest: 2.9/5.

Server Coherence: Grade A
Disambiguation: 5/5

Each tool has a clearly distinct purpose, ranging from address checks to compliance and message generation. No two tools have overlapping functionality.

Naming Consistency: 4/5

All names use lowercase with underscores, but there is a mix of verb-first and noun-first patterns. While readable, the inconsistency prevents a perfect score.

Tool Count: 5/5

With 12 tools covering core aspects of ISO 20022 and stablecoin compliance, the set is well-scoped without being overwhelming or too sparse.

Completeness: 4/5

The tools cover a broad range of functionalities from validation to compliance checks. A minor gap is the lack of generation for message types beyond pacs.008.

Available Tools

12 tools
check_structured_address (Grade A)

Check if postal address data meets SWIFT/SEPA/CHAPS structured address requirements (Nov 2026 deadline). Provide address as object with StrtNm, BldgNb, PstCd, TwnNm, Ctry fields.

Parameters (JSON Schema)
Name | Required | Description
address | Yes | Address as object {StrtNm, BldgNb, PstCd, TwnNm, CtrySubDvsn, Ctry} or free-text string
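The field names above map to ISO 20022's PostalAddress (PstlAdr) element. A minimal sketch of such a check, assuming the tool simply verifies that core structured fields are present and non-empty (the required set below is an assumption, not the tool's actual rule set):

```python
# Assumed minimum structured-address fields; the real SWIFT/SEPA rules
# are richer (field lengths, country-specific requirements, etc.).
REQUIRED_FIELDS = ["StrtNm", "PstCd", "TwnNm", "Ctry"]

def check_structured_address(address: dict) -> dict:
    """Return a compliance verdict for a structured postal address."""
    missing = [f for f in REQUIRED_FIELDS if not address.get(f)]
    return {"compliant": not missing, "missing_fields": missing}

verdict = check_structured_address(
    {"StrtNm": "Main St", "BldgNb": "1", "PstCd": "10001",
     "TwnNm": "New York", "Ctry": "US"}
)
```

A free-text string input, as the schema also allows, would first need parsing into these fields before a check like this could run.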
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It mentions only the input format and purpose, lacking details on output (success/failure, error messages), side effects, or permissions. For a compliance check tool, more transparency is needed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose. No wasted words. Efficient and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Good for a simple check tool. It specifies the input format and deadline. It could be more complete with output behavior or conditions for free-text input. There is no output schema, but one is not required for a boolean/status result.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with a description for 'address'. The description adds value by listing the expected fields (StrtNm, BldgNb, PstCd, TwnNm, Ctry), though it is slightly redundant with the schema. It helps clarify structured-object usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb 'check', the resource 'postal address data', and the context 'SWIFT/SEPA/CHAPS structured address requirements (Nov 2026 deadline)'. Distinguishes it from siblings like travel_rule_check or validate_message.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage for checking structured-address compliance, but no explicit guidance on when to use it versus alternatives (e.g., validate_message) or exclusions. The context is clear but could be improved.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dti_lookup (Grade A)

Look up Digital Token Identifier (ISO 24165/DTI) for stablecoins and crypto assets. Returns DTI code, ISIN mapping, networks, MiCA status, and how to reference in ISO 20022 messages.

Parameters (JSON Schema)
Name | Required | Description
symbol | No | Token symbol (USDC, USDT, EURC, RLUSD, DAI, XRP, XLM, PYUSD)
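A hedged sketch of what such a lookup could look like, assuming the tool resolves a token symbol against a snapshot of the ISO 24165 (DTI) registry. The table below is illustrative sample data, not real registry entries:

```python
# Illustrative registry snapshot; real entries carry DTI codes, ISIN
# mappings, and network identifiers from the ISO 24165 registry.
DTI_TABLE = {
    "USDC": {"networks": ["Ethereum", "Solana"], "mica_status": "authorised"},
    "EURC": {"networks": ["Ethereum"], "mica_status": "authorised"},
}

def dti_lookup(symbol: str) -> dict:
    entry = DTI_TABLE.get(symbol.upper())
    if entry is None:
        return {"found": False, "symbol": symbol.upper()}
    return {"found": True, "symbol": symbol.upper(), **entry}
```

Note the `found` flag: the tool description itself does not say how unknown symbols are reported, which is exactly the completeness gap flagged below.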
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states 'Look up' and 'Returns', which implies a non-destructive read operation, but does not explicitly confirm safety, rate limits, or authentication needs. The description is adequate but lacks explicit behavioral disclosure.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no fluff. Every sentence adds value: the first states the core action and scope, the second lists the return fields. No wasted words.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description covers the main aspects: what it does and what it returns. However, it does not mention error handling, behavior for unknown tokens, or whether the response format is structured. Still, it is complete enough for an agent to use.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with the single parameter 'symbol' already described in the schema (listing token symbols). The description does not add further meaning or context beyond what the schema provides, so it meets but does not exceed the baseline.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool looks up Digital Token Identifiers (ISO 24165/DTI) for stablecoins and crypto assets, listing specific return values (DTI code, ISIN mapping, etc.). This separates it from sibling tools like stablecoin_iso_profile or mica_cross_check.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description implies use when needing DTI information, but does not mention when not to use it or suggest other tools for related tasks.

generate_pacs008 (Grade C)

Generate a pacs.008 (FI to FI Customer Credit Transfer) ISO 20022 XML message skeleton. Optionally include stablecoin settlement reference with DTI.

Parameters (JSON Schema)
Name | Required | Description
amount | No
currency | No
debtor_bic | No
stablecoin | No | Optional: USDC, RLUSD, EURC etc. for on-chain settlement ref
debtor_name | No
creditor_bic | No
creditor_name | No
settlement_method | No | CLRG (default), INGA (on-chain), INDA
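To make "skeleton" concrete: a pacs.008 message is an XML document under the FIToFICstmrCdtTrf root. The sketch below, using only the standard library, shows a drastically simplified version of what such a generator might emit; the element subset and function signature are assumptions, and a real message has many more mandatory elements (message ID, creation timestamp, agent BICs, etc.):

```python
import xml.etree.ElementTree as ET

def generate_pacs008(amount, currency, debtor_name, creditor_name,
                     settlement_method="CLRG"):
    """Build a minimal, illustrative pacs.008 XML skeleton."""
    ns = "urn:iso:std:iso:20022:tech:xsd:pacs.008.001.08"
    doc = ET.Element("Document", xmlns=ns)
    tx = ET.SubElement(doc, "FIToFICstmrCdtTrf")
    # Group header carries the settlement method (CLRG/INGA/INDA).
    sttlm = ET.SubElement(ET.SubElement(tx, "GrpHdr"), "SttlmInf")
    ET.SubElement(sttlm, "SttlmMtd").text = settlement_method
    # One credit-transfer transaction with amount, debtor, creditor.
    cdt = ET.SubElement(tx, "CdtTrfTxInf")
    amt = ET.SubElement(cdt, "IntrBkSttlmAmt", Ccy=currency)
    amt.text = f"{amount:.2f}"
    ET.SubElement(ET.SubElement(cdt, "Dbtr"), "Nm").text = debtor_name
    ET.SubElement(ET.SubElement(cdt, "Cdtr"), "Nm").text = creditor_name
    return ET.tostring(doc, encoding="unicode")

skeleton = generate_pacs008(100.0, "EUR", "Alice GmbH", "Bob SA")
```

A skeleton like this would still need schema validation (e.g., against the pacs.008 XSD) before use, which is presumably where a sibling validation tool comes in.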
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that a 'skeleton' is generated and that the stablecoin settlement reference is optional. However, it does not reveal potential side effects, required permissions, or that the output may need further processing. The description is minimally adequate but lacks depth.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no redundant information, and it front-loads the key action. It is concise but could be slightly more informative without losing brevity.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 8 parameters and no output schema, the description is too sparse. It does not explain the purpose of the pacs.008 message, what 'skeleton' entails, or the significance of DTI. The limited information risks incorrect agent invocation.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 25%, meaning most parameters lack descriptions in the schema. The description does not compensate: it mentions parameters like amount and currency but does not explain their roles, formats, or constraints beyond 'optionally include stablecoin settlement reference'. This leaves ambiguity for agents.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it generates a pacs.008 ISO 20022 XML message skeleton and optionally includes a stablecoin settlement reference with DTI. This is specific and uses a verb-resource structure. However, it does not explicitly differentiate itself from sibling tools, though the uniqueness of the task is implied.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description does not mention prerequisites, context, or exclusion criteria. Sibling tools like 'validate_message' or 'stablecoin_iso_profile' are not referenced or contrasted.

health_check (Grade B)

ISO20022Oracle health and status check.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must cover behavioral traits. It states only 'health and status check' without disclosing whether the tool is read-only, what it returns, or any side effects.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single concise sentence, front-loaded, though it could include more context. It earns its place for a simple health check.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a zero-parameter tool, but missing information about output or behavior. Minimally complete.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist; schema coverage is 100%. The description adds no parameter information, but none is required. The baseline for zero parameters is 4.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that it checks the health and status of ISO20022Oracle, distinguishing it from sibling tools that perform specific operations like dti_lookup or generate_pacs008.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. It does not mention context or prerequisites.

message_catalog (Grade A)

List all ISO 20022 message types relevant to stablecoin/crypto payments with required fields and stablecoin-specific extensions.

Parameters (JSON Schema)
Name | Required | Description
domain | No | Filter by domain: payments, cash_management, payments_initiation
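A sketch of how the optional `domain` filter could behave. The catalog entries below are a plausible subset (pacs/pain/camt families mentioned elsewhere on this page); the real tool's list is larger and includes stablecoin-specific extensions:

```python
# Illustrative catalog; domain values match the filter options in the
# parameter description above.
CATALOG = [
    {"type": "pacs.008", "domain": "payments"},
    {"type": "pacs.009", "domain": "payments"},
    {"type": "camt.053", "domain": "cash_management"},
    {"type": "pain.001", "domain": "payments_initiation"},
]

def message_catalog(domain=None):
    """Return all message types, or only those in the given domain."""
    if domain is None:
        return CATALOG
    return [m for m in CATALOG if m["domain"] == domain]
```

The unfiltered default (return everything) is an assumption; the tool description does not say what happens when `domain` is omitted, which the Behavior note below also flags.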
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description must disclose all behavioral traits. It indicates that the tool lists message types with fields and extensions, but it does not mention whether the operation is read-only, requires authentication, or has any rate limits. The description is adequate but lacks details on default behavior (e.g., what happens without the domain filter).

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that concisely conveys the purpose and additional details. It is not overly verbose, though it could be slightly more structured with separate clauses. Overall, it is efficient and front-loaded.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with one optional parameter and no output schema, the description covers the main functionality. However, it lacks information about the output format or structure of the returned data, which would be important for an agent. The description is adequate but could be more complete by hinting at the response shape.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the single parameter 'domain', already explaining its filter role and allowed values. The tool description adds context about stablecoin-specific extensions, but the parameter semantics are largely captured by the schema. Thus, the description provides minimal additional value beyond the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'list' and the resource 'ISO 20022 message types', with a specific scope ('relevant to stablecoin/crypto payments') and additional details like required fields and extensions. It distinguishes itself from siblings like 'validate_message' and 'generate_pacs008', which are for validation and generation, not listing.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for browsing message types in the stablecoin domain but does not explicitly state when to use this tool over alternatives like 'stablecoin_iso_profile' or 'validate_message'. No direct guidance on prerequisites or when not to use it is provided.

message_convert_check (Grade A)

Check MT→MX migration status for a SWIFT legacy message type. Returns ISO 20022 equivalent, complexity, deadlines, and stablecoin advantages.

Parameters (JSON Schema)
Name | Required | Description
mt_type | Yes | Legacy SWIFT message type (MT103, MT202, MT940, MT950, etc.)
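A sketch of the MT→MX mapping such a tool likely consults. The MT103→pacs.008, MT202→pacs.009, and MT940/MT950→camt.053 pairings are the standard CBPR+ mappings; the complexity labels are illustrative assumptions:

```python
# Standard MT → ISO 20022 (MX) equivalents; complexity is illustrative.
MT_TO_MX = {
    "MT103": {"mx": "pacs.008", "complexity": "high"},
    "MT202": {"mx": "pacs.009", "complexity": "medium"},
    "MT940": {"mx": "camt.053", "complexity": "medium"},
    "MT950": {"mx": "camt.053", "complexity": "low"},
}

def message_convert_check(mt_type: str) -> dict:
    """Look up the MX equivalent for a legacy MT message type."""
    entry = MT_TO_MX.get(mt_type.upper())
    if entry is None:
        return {"known": False, "mt_type": mt_type}
    return {"known": True, "mt_type": mt_type.upper(), **entry}
```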
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the behavioral output (ISO equivalent, complexity, deadlines, stablecoin advantages) and implies a read-only check. However, it does not mention potential errors or prerequisites, though for a simple check this is minor.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description efficiently communicates the action, input, and output in two short sentences. No unnecessary words; front-loaded with the purpose.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema, no annotations), the description is largely complete. It explains the tool's function and return items. It omits details about error handling or validation, but these are not critical for a low-complexity tool.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description for mt_type is clear and provides example values. The tool description adds context that it checks migration status for that type, but the parameter itself is straightforward. With 100% schema coverage, the baseline of 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks MT→MX migration status for a specific SWIFT legacy message type and lists the returned information (ISO 20022 equivalent, complexity, deadlines, stablecoin advantages). It distinguishes itself from sibling tools like generate_pacs008 or message_catalog.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when checking migration status for an MT type but does not explicitly state when to use it over alternatives, nor does it mention when not to use it or provide exclusion criteria.

mica_cross_check (Grade A)

Cross-check: Can this stablecoin be legally used in EU ISO 20022 payment flows after MiCA enforcement? Returns PASS/WARN/BLOCK verdict with reasoning.

Parameters (JSON Schema)
Name | Required | Description
symbol | Yes | Stablecoin symbol to check
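A hedged sketch of how a PASS/WARN/BLOCK verdict could be derived, assuming the tool keys off MiCA authorisation status. The status table is an illustrative placeholder, not a legal determination about any token:

```python
# Illustrative authorisation statuses; a real implementation would
# consult an up-to-date MiCA register, not a hard-coded table.
STATUS = {
    "EURC": "authorised",
    "USDC": "authorised",
    "USDT": "not_authorised",
}

def mica_cross_check(symbol: str) -> dict:
    """Return a PASS/WARN/BLOCK verdict with a one-line reason."""
    status = STATUS.get(symbol.upper())
    if status == "authorised":
        return {"verdict": "PASS", "reason": "issuer authorised under MiCA"}
    if status == "not_authorised":
        return {"verdict": "BLOCK", "reason": "no MiCA authorisation"}
    return {"verdict": "WARN", "reason": "unknown symbol; manual review"}
```

The WARN fallback for unknown symbols is an assumption; the tool description does not specify how unrecognised inputs are handled, a gap the Completeness note below also raises.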
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavioral traits. It only describes the output format and does not reveal whether the tool is a read-only check, makes external calls, or has rate limits. The behavior (e.g., querying a database or API) is not described, leaving significant gaps for an AI agent.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the purpose and then the output. Every word is functional, with no extraneous content. It is highly efficient and easy to parse.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the verdict types but lacks detail on the reasoning format, error handling, or behavior for unknown symbols. Since there is no output schema, the description should be more explicit to fully prepare an agent for handling responses.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage (the 'symbol' parameter is described as 'Stablecoin symbol to check'). The description adds no additional parameter-level details, such as allowed formats or example values. Baseline expectations are met but not exceeded.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: cross-checking a stablecoin's legal usability in EU ISO 20022 payment flows after MiCA enforcement. It specifies the output format (PASS/WARN/BLOCK with reasoning), and the verb 'cross-check' and resource 'stablecoin' are specific. It effectively distinguishes the tool from siblings like 'stablecoin_iso_profile' and 'sanctions_screen_iso'.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use the tool: when evaluating a stablecoin for EU payment-flow compliance. However, it lacks explicit when-not-to-use guidance or alternatives. With several related sibling tools, such exclusions would add clarity, but the context is clear enough.

sanctions_screen_iso (Grade C)

Screen originator/beneficiary data from ISO 20022 messages against sanctions lists (EU, OFAC, UN). Cross-references with FeedOracle AMLOracle.

Parameters (JSON Schema)
Name | Required | Description
bic | No
name | No
country | No | ISO 3166-1 alpha-2
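A minimal sketch of a name/country screen, assuming a simple match against a consolidated list. The list entries below are fictitious; real screening against EU/OFAC/UN lists uses fuzzy name matching and far richer data:

```python
# Fictitious sanctions data for illustration only.
SANCTIONED_NAMES = {"blocked trading ltd"}
SANCTIONED_COUNTRIES = {"KP", "IR"}

def sanctions_screen_iso(name=None, bic=None, country=None) -> dict:
    """Screen originator/beneficiary fields; return hits, if any."""
    hits = []
    if name and name.lower() in SANCTIONED_NAMES:
        hits.append(("name", name))
    if country and country.upper() in SANCTIONED_COUNTRIES:
        hits.append(("country", country))
    return {"clear": not hits, "hits": hits}
```

Exact-match screening like this would miss transliterations and aliases; the hypothetical `clear`/`hits` result shape is one way to make "screened" interpretable, which is the gap the Completeness note below calls out.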
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It only mentions cross-referencing and fails to state whether the tool is read-only, destructive, requires authentication, or what happens to the data. This is insufficient for safe agent invocation.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no redundancy, purpose front-loaded. Efficient use of space, though a bullet or structured format could improve readability.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Missing output schema, no annotations, and minimal parameter descriptions. The tool's return type, error conditions, and behavioral constraints are absent. For a compliance tool, agents need to know what 'screened' means and how to interpret results.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 3 parameters with only 33% description coverage (country has a format hint). The description adds no meaning to 'bic' or 'name' beyond their labels and does not clarify acceptable formats or required combinations. With low schema coverage, the description should compensate but does not.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Screen... data against sanctions lists'), specifies the data source ('ISO 20022 messages'), and lists the applied lists (EU, OFAC, UN). It also mentions a cross-reference function ('Cross-references with FeedOracle AMLOracle'), distinguishing it from generic screening tools.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus siblings like 'travel_rule_check' or 'check_structured_address'. The description implies it is for ISO 20022 messages but does not state prerequisites, exclusions, or alternatives.

stablecoin_iso_profile (Grade A)

Get complete ISO 20022 integration profile for a stablecoin: DTI, message references, settlement networks, MiCA status, XRPL details, live peg data.

Parameters (JSON Schema)
Name | Required | Description
symbol | Yes | Token symbol (USDC, RLUSD, EURC, USDT, DAI, PYUSD)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavior. It only lists output components without clarifying the read-only nature, error handling, or required permissions. The verb 'Get' implies a safe lookup, but this is not explicitly stated.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that packs in key information, though abbreviations like DTI and MiCA might require additional context. It is concise and front-loaded.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single required parameter, no output schema, and no annotations, the description adequately lists the return data. However, it lacks error-handling details or usage context. Still, it is reasonably complete for a lookup tool.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter 'symbol', listing accepted tokens. The description does not add semantic meaning to the parameter; it only explains what will be returned. The baseline of 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'complete ISO 20022 integration profile for a stablecoin', listing specific data points (DTI, message references, settlement networks, MiCA status, XRPL details, live peg data). Its comprehensiveness distinguishes it from sibling tools such as 'dti_lookup' or 'mica_cross_check'.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for obtaining a complete profile but does not explicitly state when to use this tool over alternatives like 'dti_lookup' for just the DTI or 'mica_cross_check' for MiCA status. No exclusions or context for when not to use it.

swift_deadlines (Grade A)

Get all SWIFT/ISO 20022 compliance deadlines with urgency levels, days remaining, and recommendations. Covers MT retirement, structured addresses, MiCA enforcement.

Parameters (JSON Schema)

No parameters
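A sketch of how days remaining and urgency levels could be derived. The November 2026 structured-address date matches the deadline cited earlier on this page; the MT retirement date, the exact day-of-month, and the urgency thresholds are all assumptions:

```python
from datetime import date

# Assumed deadline dates and thresholds, for illustration only.
DEADLINES = {
    "MT retirement (CBPR+)": date(2025, 11, 22),
    "Structured addresses": date(2026, 11, 22),
}

def swift_deadlines(today: date) -> list:
    """Return each deadline with days remaining and an urgency level."""
    out = []
    for name, due in DEADLINES.items():
        days = (due - today).days
        urgency = ("critical" if days < 90
                   else "high" if days < 365
                   else "medium")
        out.append({"deadline": name, "date": due.isoformat(),
                    "days_remaining": days, "urgency": urgency})
    return out
```

Taking `today` as a parameter keeps the sketch deterministic; the real tool presumably uses the current date, which is the data-freshness question the Behavior note below raises.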

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description bears full responsibility for behavioral disclosure. It only describes output content and omits aspects like data freshness, authentication requirements, or side effects. Without this, an agent cannot assess safety or constraints.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two well-structured sentences that efficiently convey the tool's purpose, coverage, and output components without redundancy or unnecessary detail.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no parameters, no output schema), the description covers the essential aspects: what data is returned and its key categories. Minor omissions like sorting or filtering capability are acceptable for a straightforward list tool.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so per the guidelines the baseline is 4. The description adds value by explaining what the tool returns, which surpasses the empty schema, and no parameter details are needed.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the tool's purpose: retrieving SWIFT/ISO 20022 compliance deadlines with urgency levels, days remaining, and recommendations. It lists specific coverages (MT retirement, structured addresses, MiCA enforcement), distinguishing it from sibling tools that focus on individual compliance aspects.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit usage guidelines are provided; the tool description does not state when to use it versus alternatives like check_structured_address or mica_cross_check. While the coverage hints at general deadline overviews, the lack of direct guidance leaves ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

travel_rule_checkAInspect

Check FATF Travel Rule compliance for a payment. Validates originator/beneficiary data requirements against ISO 20022 field mappings.

ParametersJSON Schema
Name         Required  Description                              Default
amount       Yes
currency     No
originator   No        Object with name, address, account, bic
beneficiary  No        Object with name, account, bic
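Per the schema above, only amount is required, and originator/beneficiary are nested objects. A minimal sketch of a call payload, assuming JSON-style arguments (all names, accounts, and BICs below are hypothetical, for illustration only):

```python
# Hypothetical travel_rule_check arguments; only "amount" is required
# by the schema. Nested objects follow the field lists shown above.
travel_rule_args = {
    "amount": 1500.00,
    "currency": "EUR",
    "originator": {
        "name": "Alice Example",
        "address": "1 Example Street, Dublin",
        "account": "IE29AIBK93115212345678",  # illustrative IBAN
        "bic": "AIBKIE2D",
    },
    "beneficiary": {
        "name": "Bob Example",
        "account": "DE89370400440532013000",  # illustrative IBAN
        "bic": "COBADEFF",
    },
}

# The one required field must always be present.
assert "amount" in travel_rule_args
```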
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden for behavioral disclosure. It states the tool 'validates', suggesting a read-only check, but does not detail any side effects, permissions, or error behavior. This is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single 18-word sentence, efficient and front-loaded with the key information. No wasted words, though a second sentence of additional context would add some structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters including nested objects and no output schema, the description is minimal. It misses details on return format, error states, and what 'validates' entails. Adequate for simple tools but not fully complete for this complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 50%, partially describing originator and beneficiary. The description adds value by linking parameters to 'ISO 20022 field mappings', but does not explain amount and currency semantics beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the verb 'Check' and specifies the resource 'FATF Travel Rule compliance for a payment', clearly stating the action and subject. It also mentions 'ISO 20022 field mappings', which distinguishes it from other validation tools among siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like sanctions_screen_iso or validate_message. The description only implies usage for travel rule compliance but does not provide context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_messageAInspect

Validate an ISO 20022 XML message (pacs.008, camt.053, pain.001, etc.) for schema compliance, required fields, and Nov 2026 structured address deadline. Provide 'xml' for full validation or 'message_type' for schema info.

ParametersJSON Schema
Name          Required  Description                                                       Default
xml           No        Raw ISO 20022 XML message to validate
message_type  No        Message type for schema info (e.g. pacs.008, camt.053, pain.001)
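Since the two parameters select different modes, a call supplies either xml or message_type, not both. A hedged sketch of the two argument shapes (the XML fragment is a truncated, hypothetical payload):

```python
# Two ways to invoke validate_message, per its description:
# full validation of a raw message, or schema info for a type.

full_validation_args = {
    "xml": (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<Document xmlns="urn:iso:std:iso:20022:tech:xsd:pacs.008.001.08">'
        "</Document>"  # hypothetical, truncated payload for illustration
    ),
}

schema_info_args = {
    "message_type": "pacs.008",  # e.g. pacs.008, camt.053, pain.001
}

# Each call sets exactly one of the two optional parameters.
for args in (full_validation_args, schema_info_args):
    assert len(args) == 1 and set(args) <= {"xml", "message_type"}
```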
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool checks schema compliance, required fields, and a specific deadline (Nov 2026 structured addresses), which is transparent for a validation tool. It does not mention error handling, performance, or authentication needs; still, given the lack of annotations, the description is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with only two sentences. The first sentence covers the core purpose and key checks, while the second clarifies parameter usage. No unnecessary words or repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, so the description should explain what the tool returns (e.g., validation results, errors). It omits any mention of output, which is a significant gap for a validation tool with no structured output definition.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for both parameters, but the description adds significant meaning: it clarifies that 'xml' triggers full validation and 'message_type' returns schema info. This goes beyond the schema's basic descriptions for each parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool validates ISO 20022 XML messages for schema compliance, required fields, and a specific deadline (Nov 2026 structured address). It lists common message types and distinguishes between full validation and schema info modes, providing a specific verb and resource that differentiates it from siblings like check_structured_address or generate_pacs008.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells the agent to provide 'xml' for full validation or 'message_type' for schema info, giving clear context for parameter usage. However, it does not compare this tool to alternatives or state when not to use it, missing explicit exclusion guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
