SENTINEL Compliance Intelligence

Server Details

AML/CFT compliance oracle for the agent economy. Wallet screening against 12,997+ sanctioned crypto addresses, 1.1M+ entity search across OFAC/UN/EU sanctions, PEPs, Interpol, World Bank. 179-country jurisdiction risk scoring. Travel rule compliance. ERC-8004 Agent #27961 on Base.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: B)

Average 3.8/5 across 22 of 22 tools scored. Lowest: 2.9/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes (e.g., wallet screening vs. entity screening vs. jurisdiction risk). Some overlap exists between composite tools like due_diligence and individual checks, but descriptions clarify the differences. Overall, an agent can distinguish tools with high confidence.

Naming Consistency: 4/5

Tool names use a consistent snake_case convention with descriptive compound names. Most pair a domain noun with a resource or action (e.g., compliance_wallet, transaction_screen). Minor deviations like due_diligence and facilitator_kya are still readable.

Tool Count: 4/5

With 22 tools, the count is on the higher side but appropriate for a comprehensive compliance intelligence platform covering screening, monitoring, economic data, and facilitator capabilities. Each tool serves a clear purpose within the domain.

Completeness: 4/5

The tool surface covers major compliance workflows: individual screening, compound due diligence, transaction pre-screening, travel rule, continuous monitoring, and country intelligence. Minor gaps exist (e.g., no bulk screening or audit log), but core needs are addressed.

Available Tools

22 tools
compliance_jurisdiction_risk (Grade: B)

Get composite risk score for any of 179 countries — FATF grey/blacklist, CPI, Basel AML Index. Costs $0.001 USDC via x402.

Parameters
- country_code (required): ISO 3166-1 alpha-2 country code (e.g. MU, US, RU, IR)
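
A minimal sketch of what a call to this tool looks like from an MCP client, using the MCP TypeScript SDK over the Streamable HTTP transport noted above. The client name/version and the SENTINEL_MCP_URL environment variable are placeholders (the listing leaves the URL field blank), and settlement of the x402 fee is assumed to happen out of band.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint; the listing above leaves the URL field blank.
const transport = new StreamableHTTPClientTransport(new URL(process.env.SENTINEL_MCP_URL!));
const client = new Client({ name: "example-agent", version: "0.1.0" });
await client.connect(transport);

// Composite jurisdiction risk for Mauritius (ISO 3166-1 alpha-2 code).
const risk = await client.callTool({
  name: "compliance_jurisdiction_risk",
  arguments: { country_code: "MU" },
});
console.log(risk.content); // Response shape is undocumented; inspect before parsing.
```
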
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description mentions a cost of $0.001 USDC via x402, which is useful behavioral info. However, with no annotations provided, it fails to disclose other critical traits such as rate limits, idempotency, or response format, leaving significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences deliver essential purpose and cost information with no filler. The structure is front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers purpose and cost but omits return format, error handling, and response structure. Given the lack of an output schema, additional context would improve usability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter (country_code) is already fully described in the input schema with examples and format. The tool description adds no additional semantic value, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool retrieves a composite risk score for 179 countries, specifying the data sources (FATF grey/blacklist, CPI, Basel AML Index). This distinguishes it from sibling tools like compliance_mauritius (single country) and compliance_watchlist (entity-level), making the purpose unmistakable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives. It does not mention context, prerequisites, or exclusions, leaving the agent to infer usage from the name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compliance_mauritius (Grade: A)

Search Mauritius FSC registry + ICIJ offshore leak connections. Costs $0.005 USDC via x402.

Parameters
- query (required): Entity or person name to search
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must cover behavioral traits. It does disclose the cost ($0.005 USDC via x402), which is important. However, it omits other details like rate limits, authentication, or return value behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no wasted words. It efficiently conveys the tool's purpose and a critical behavioral detail (cost).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description is adequate but lacks context on expected output format, error handling, or response structure, especially given the paid nature.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for 'query'. The tool description does not add further meaning beyond what the schema already provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action: 'Search Mauritius FSC registry + ICIJ offshore leak connections.' It uses a specific verb and resource, and distinguishes itself from sibling compliance tools by focusing on Mauritius and offshore leak connections.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for searching entity/person names in Mauritius, but provides no explicit guidance on when to use this tool versus alternatives like compliance_jurisdiction_risk or country_brief.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compliance_wallet (Grade: A)

Screen a blockchain wallet address against sanctioned/blacklisted crypto addresses (OFAC SDN, USDT Blacklist, USDC Blacklist, Ransomwhere, OpenSanctions, UK OFSI). Costs $0.003 USDC via x402.

Parameters
- chain (optional): Chain hint: btc, eth, trx, auto (default: auto)
- address (required): Blockchain wallet address (any chain — BTC, ETH, TRX, etc.)
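
A hedged sketch of the call shape, reusing the connected `client` from the compliance_jurisdiction_risk example; the address is the illustrative one that appears later in the facilitator_kya schema.

```typescript
// Screen one wallet; the chain hint is optional and defaults to "auto".
const screen = await client.callTool({
  name: "compliance_wallet",
  arguments: {
    address: "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045", // illustrative ETH address
    chain: "eth",
  },
});
```
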
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses cost ($0.003 USDC) and payment method (x402), and lists specific blacklists. Does not mention rate limits or auth requirements, but given no annotations, this is reasonably transparent for a screening tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff. First sentence captures purpose, second adds cost info. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema and description does not explain what the tool returns (e.g., boolean, list of matches). This omission limits completeness for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage; description does not add extra parameter meaning beyond what schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'screen', resource 'blockchain wallet address', and specifies the lists (OFAC SDN, USDT Blacklist, etc.). Distinguishes from sibling compliance tools that focus on jurisdiction, entity, or watchlist.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when or when not to use this tool versus alternatives like compliance_jurisdiction_risk or compliance_watchlist. Usage is implicit in the name, but no direct context is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compliance_wallet_entity (Grade: C)

Compound Web2+Web3 screen: wallet address + entity name in one call with convergence detection. Bridges blockchain wallets to traditional sanctions databases. Costs $0.003 USDC via x402.

Parameters
- list (optional): Entity list filter (default: all)
- name (optional): Entity or person name for cross-reference
- chain (optional): Chain hint: btc, eth, trx, auto (default: auto)
- address (required): Blockchain wallet address (any chain)
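
A sketch of the compound call under the same assumptions as the earlier examples; the entity name is illustrative.

```typescript
// Wallet screen plus optional Web2 entity cross-reference in one call.
const combined = await client.callTool({
  name: "compliance_wallet_entity",
  arguments: {
    address: "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045", // illustrative
    name: "Acme Corp", // optional cross-reference; illustrative name
    chain: "auto",
    list: "all", // default entity list filter
  },
});
```
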
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses cost ($0.003 USDC via x402) but omits important behavioral traits such as data source freshness, response format, rate limits, or what constitutes a 'convergence'. This is insufficient for a compliance-critical tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each adding unique information: compound screen, bridging concept, and cost. No wasted words. Front-loaded with core purpose. Ideal length for quick scanning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the high-level concept but lacks operational completeness. Without an output schema, it should explain return values, error cases, or how convergence detection results are presented. For a compliance tool used in serious contexts, this is a significant gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description mentions 'wallet address' and 'entity name' but does not add value beyond the parameter descriptions in the schema (e.g., list enum, chain hint). It does not compensate for the lack of output schema or nested objects.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a compound Web2+Web3 screen combining wallet address and entity name with convergence detection. It bridges blockchain wallets to traditional sanctions databases. However, it does not explicitly differentiate from sibling tools like compliance_wallet or compliance_watchlist, making it slightly less than fully distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not state prerequisites, when not to use, or mention any of the eight sibling compliance tools. The agent must infer usage from the function description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compliance_watchlist (Grade: A)

Screen any entity against comprehensive global watchlist records (OFAC, UN, EU, PEP, Interpol, crypto). Costs $0.005 USDC via x402.

Parameters
- list (optional): Watchlist to search. Default: all
- query (required): Entity name to screen (e.g. 'Vladimir Putin', 'Tornado Cash')
- threshold (optional): Match confidence 0.0-1.0. Default: 0.75
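
For illustration, a sketch that tightens the match threshold above its 0.75 default, reusing the assumed client; the query is one of the schema's own examples.

```typescript
// Entity screen with a stricter confidence cutoff; `list` omitted to search all.
const hits = await client.callTool({
  name: "compliance_watchlist",
  arguments: {
    query: "Tornado Cash",
    threshold: 0.85, // match confidence 0.0-1.0; default 0.75
  },
});
```
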
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses cost ($0.005 USDC via x402) and the scope of screening, but does not mention rate limits, authentication, error handling, or return format. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that is front-loaded with the action and scope, includes specific details (list types, cost), and contains no filler. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the parameter count (3) and absence of output schema and annotations, the description provides a high-level overview but lacks details on output format or how results are presented. For a screening tool, this is somewhat incomplete compared to richer definitions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by naming specific watchlist types (OFAC, UN, etc.) that enrich the enum for 'list', though query and threshold are adequately described in the schema. This extra context justifies a 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('screen any entity') and the resource ('global watchlist records'), listing specific lists (OFAC, UN, etc.) and mentioning cost. This distinguishes it from sibling tools like compliance_jurisdiction_risk and compliance_wallet.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs alternatives. While the description mentions specific watchlist types, it does not compare with siblings or state when not to use it, leaving the agent without clear differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

country_brief (Grade: A)

Complete country intelligence brief: compliance risk assessment + live economic data. For Mauritius (MU) includes all 7 oracle feeds (forex, macro, monetary, stock market, weather, fuel). Replaces 8 API calls. Costs $0.010 USDC via x402.

Parameters
- country_code (required): ISO 3166-1 alpha-2 country code (e.g. MU, SG, VG)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries full burden. It discloses cost and that for Mauritius it includes 7 feeds, but does not clarify behavior for other countries (e.g., if fewer feeds are included) or any rate limits or authentication requirements. The cost disclosure is valuable but gaps remain.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each adding value: purpose, specific content for Mauritius, and cost/benefit. Front-loaded with the main function. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately summarizes the tool's content (compliance risk assessment, live economic data, 7 feeds for MU). However, it does not describe the output format or structure, which would help an agent interpret results. For a comprehensive brief tool, slightly more detail on what the 'brief' contains would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the `country_code` parameter already described as an ISO code. The description adds examples (MU, SG, VG) but no additional semantic meaning beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides a 'Complete country intelligence brief' with compliance risk assessment and live economic data, specifying it includes all 7 oracle feeds for Mauritius and replaces 8 API calls. This distinguishes it from sibling tools like `compliance_jurisdiction_risk` and `country_snapshot`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage via cost ($0.010 USDC) and consolidation benefit ('Replaces 8 API calls'), but does not explicitly state when to use this tool over individual feed tools. The context suggests it is for comprehensive data, but clearer guidance would improve it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

country_snapshot (Grade: A)

Get complete Mauritius economic pulse — ALL feeds in one call. Costs $0.005 USDC via x402.

Parameters
No parameters
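
Because the tool takes no parameters, the sketch is just a bare call with empty arguments (same assumed client as above).

```typescript
// All Mauritius feeds in a single round trip; no arguments needed.
const snapshot = await client.callTool({
  name: "country_snapshot",
  arguments: {},
});
```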

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description adds cost and batch-retrieval behavior not present in annotations, but lacks details on response size, rate limits, or exact feeds included.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single clear sentence with essential information (purpose, cost) and no redundancies.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The output schema and a detailed list of feeds are missing, but the description is adequate for a simple aggregated-data tool, since sibling tools hint at the available feeds.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters and 100% schema coverage, the description adds meaning by specifying 'Mauritius' and 'ALL feeds', which is sufficient for a no-param tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool retrieves the complete Mauritius economic pulse aggregating all feeds in one call, distinguishing it from sibling tools like 'country_brief' or individual feed tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies use when needing all economic feeds in one call and mentions cost, providing context for when to use vs individual calls, but does not explicitly name alternatives or when-not-to-use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

due_diligence (Grade: A)

Compound entity screening package: watchlist screening + jurisdiction risk for detected nationalities + Mauritius FSC check + forex context + composite risk score. Replaces 5 separate API calls. Costs $0.010 USDC via x402.

Parameters
- query (required): Entity name for due diligence (e.g. 'Acme Corp', 'John Smith')
- include_forex (optional): Include forex rates for detected jurisdictions (default: true)
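
A sketch of the compound call, spelling out the include_forex default for clarity; the entity name comes from the schema's example.

```typescript
// Compound due diligence; include_forex defaults to true, shown explicitly here.
const report = await client.callTool({
  name: "due_diligence",
  arguments: { query: "Acme Corp", include_forex: true },
});
```
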
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so description carries full burden. It discloses cost ($0.010 USDC via x402) and that it is a compound package, but does not state whether it is read-only or destructive, nor does it explain how the composite score is derived. Adds some value but gaps remain.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, front-loaded with key information (purpose, components, cost). No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given complexity (compound of 5 checks) and no output schema, description explains what it does and cost but does not describe the output structure. For an agent to select correctly, knowing it returns a composite risk score and forex context is likely sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, baseline is 3. The description mentions 'forex context' corresponding to include_forex parameter, and query is described as entity name, which largely mirrors schema. No additional meaning beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description specifies it is a compound screening package including watchlist screening, jurisdiction risk, Mauritius FSC check, forex context, and composite risk score, clearly distinguishing it from individual sibling tools like compliance_watchlist and compliance_jurisdiction_risk.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description states it replaces 5 separate API calls, implying it is for comprehensive due diligence. While it does not explicitly list alternatives, the siblings provide individual components, suggesting when not to use it. Slightly above average guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

facilitator_kya (Grade: A)

Know Your Agent — ERC-8004 registry lookup + sanctions screening + signed JWT attestation for any wallet address. Returns agent registration status, operator wallet, screening results, and coldStartSignals. FREE.

Parameters
- address (required): Wallet address to check (e.g. 0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045)
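
Since this tool is free, it is a natural first probe; a sketch using the schema's example address and the assumed client from earlier.

```typescript
// Know Your Agent lookup; no x402 fee applies.
const kya = await client.callTool({
  name: "facilitator_kya",
  arguments: { address: "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045" },
});
```
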
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes basic operations (lookup, screening, attestation) and return items, but lacks details on permissions, rate limits, or read-only nature. Annotations absent, so description partially carries burden.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with purpose, no unnecessary words. Efficient and scannable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given one simple parameter and no output schema, the description covers essential behavior and return values. Missing error handling or invalid address behavior, but overall adequate for a straightforward lookup tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers the single parameter with a clear example and description. Tool description adds 'any wallet address' but does not significantly enhance semantics beyond schema. Schema coverage is 100%, baseline is 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool performs an ERC-8004 registry lookup, sanctions screening, and JWT attestation for any wallet address. Distinguishes from siblings because it is agent-specific, unlike other compliance tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives among the many sibling tools. Does not mention when not to use or any prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

facilitator_supported (Grade: A)

Get SENTINEL facilitator capabilities — supported payment schemes, networks, assets, and compliance features. FREE.

Parameters
No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It indicates a read operation ('Get') and states 'FREE', which settles the cost question. However, it doesn't disclose authentication needs, rate limits, or any side effects. Very minimal behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with key information. No wasted words. However, it could benefit from additional context about what SENTINEL is or the return format. Still, very concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no parameters and no output schema, the description covers the essential purpose and output categories. Lacks details on return format or any usage constraints, but given zero complexity, it is largely complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, so no parameter documentation needed. Description adds meaning by enumerating what the tool returns (payment schemes, networks, assets, compliance features), which helps users understand the output without a schema. Baseline of 4 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it retrieves 'SENTINEL facilitator capabilities' and lists specific areas (payment schemes, networks, assets, compliance features). The term 'FREE' adds value. This distinguishes it from sibling tools like compliance_jurisdiction_risk or compliance_wallet which focus on different compliance aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage: use when you need to know what a facilitator supports. No explicit when-to-use or when-not-to-use statements. No mention of alternatives, though sibling names provide some context. The word 'FREE' might hint at cost considerations but lacks clear guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forex_rates (Grade: A)

Get MUR exchange rates from Bank of Mauritius. Costs $0.001 USDC via x402.

Parameters
- currency (optional): ISO currency code (e.g. USD, EUR). Omit for all rates.
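
A sketch showing the omit-for-all pattern the schema describes, under the same client assumptions.

```typescript
// Single currency when specified; omit `currency` entirely for all MUR rates.
const usdRate = await client.callTool({
  name: "forex_rates",
  arguments: { currency: "USD" },
});
```
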
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses cost ($0.001 USDC via x402), a critical behavioral trait beyond the schema. Without annotations, this adds value, though other behaviors (e.g., rate limits, error handling) are not covered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose and cost. No superfluous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one optional param and no output schema, the description is sufficiently complete. Missing return format details, but not critical given low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers the single parameter with examples. Description adds no extra meaning beyond stating it's MUR-centric. With 100% schema coverage, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool retrieves MUR exchange rates from a specific source (Bank of Mauritius). The verb 'Get' and resource 'exchange rates' are explicit, distinguishing it from sibling compliance tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use or alternatives. The mention of cost implies it's a paid tool, but no when-not-to-use or comparison with other rate tools is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fuel_prices (Grade: A)

Get petroleum retail prices from STC Mauritius. Costs $0.001 USDC via x402.

Parameters
- product (optional): Product (mogas, gasoil, lpg). Omit for all.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It only mentions the cost and payment method ('x402'), but omits whether the tool is read-only, destructive, requires authentication, or has rate limits. Not even the basic read-only nature of the call is confirmed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the primary purpose, followed by a critical cost detail. No unnecessary words—every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single optional parameter, no output schema), the description provides the essential information: purpose and cost. It could mention the return format, but the absence is not critical for invocation. The cost detail adds valuable context beyond the schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (one parameter fully described). The description adds no new information beyond the schema's parameter description ('Optional product (mogas, gasoil, lpg). Omit for all.'). Baseline 3 is appropriate when schema covers parameters well.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get petroleum retail prices from STC Mauritius', which specifies the verb (Get), resource (petroleum retail prices), and source (STC Mauritius). This distinguishes it from all sibling tools, none of which mention fuel prices.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs alternatives. The cost note ('Costs $0.001 USDC via x402') is provided but does not help with usage decisions. However, the tool's specific purpose implies use when needing Mauritian fuel prices, and no sibling directly competes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

macro_indicators (Grade: A)

Get Mauritius macro-economic indicators (GDP, CPI, unemployment, tourism). Costs $0.002 USDC via x402.

Parameters
- indicator (optional): Indicator (gdp, cpi, unemployment, tourism, fdi, trade, population). Omit for all.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses cost ($0.002 USDC via x402), a key behavioral trait. However, no annotations exist, and details on data freshness, rate limits, and idempotency are missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise with no superfluous words. Information is front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a simple data retrieval tool with one optional parameter. Cost and indicator types are covered. Return structure could be mentioned but not essential.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are defined. Description lists examples but adds minimal value beyond the schema's allowed values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the specific resource 'Mauritius macro-economic indicators' with examples. No sibling tool has overlapping functionality, making purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., country_snapshot, country_brief). Lacks exclusions or context for optimal usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

monetary_policy (Grade: A)

Get Bank of Mauritius Key Repo Rate and Prime Lending Rates. Costs $0.002 USDC via x402.

Parameters
No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses the cost ($0.002 USDC via x402), which is a behavioral trait beyond the name and title. However, no other behavioral details (e.g., rate limits, authentication needs, data freshness) are provided, and annotations are absent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long, front-loaded with the purpose, and contains no extraneous information. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool with no output schema, the description adequately conveys what the tool returns (the two rates) and includes a notable cost detail. Additional context about data freshness could improve it, but it is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, so the baseline is 4. The description adds meaning by specifying the output (Key Repo Rate and Prime Lending Rates), which complements the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and clearly identifies the resource ('Bank of Mauritius Key Repo Rate and Prime Lending Rates'). This distinguishes the tool from siblings like 'forex_rates' and 'macro_indicators'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus alternatives. No explicit context, exclusions, or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

monitor_check (Grade: A)

Check monitoring subscription status. Re-screens the subscribed wallet/entity against the latest database and returns current alert state. Costs $0.003 USDC via x402.

Parameters
- subscription_id (required): Subscription ID (MON-XXXXXX-XXXXXXXX)
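
A sketch of a status check; the subscription ID below is the schema's format template, not a real ID, and would normally come from a prior monitor_subscribe call.

```typescript
// Re-screen an existing subscription and read back its alert state.
const status = await client.callTool({
  name: "monitor_check",
  arguments: { subscription_id: "MON-XXXXXX-XXXXXXXX" }, // placeholder format, not a real ID
});
```
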
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It discloses the cost of $0.003 USDC via x402, which is a behavioral trait. However, it doesn't mention side effects, idempotency, or whether it modifies state. The description adds some value beyond basic purpose but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short sentences with no redundancy. The first sentence states the purpose, the second explains the mechanism, and the third mentions cost. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple tool (one required parameter, no output schema), the description covers the main aspects: purpose, mechanism, and cost. It does not describe the return format, but since no output schema exists, this is a minor gap. Overall fairly complete for a simple check tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter. The description includes the subscription ID format (MON-XXXXXX-XXXXXXXX) in the schema property description, which is sufficient. The tool description does not add additional meaning beyond what is already in the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'check' and the resource 'monitoring subscription status', and explains the action of re-screening and returning alert state. It distinguishes from sibling tool 'monitor_subscribe' by focusing on checking status rather than subscribing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like 'monitor_subscribe' or other tools. The description implies usage for checking current alert state but lacks when-not-to-use or alternative recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

monitor_subscribe (Grade: A)

Subscribe a wallet or entity for 30-day continuous monitoring. If the target appears on any sanctions, PEP, or crypto blacklist, the status flips to 'alerted'. Optional webhook for push notifications. Costs $0.010 USDC via x402.

Parameters
- type (required): Monitor a wallet address or entity name
- chain (optional): Chain hint for wallets (default: auto)
- label (optional): Your internal reference label
- value (required): The wallet address or entity name to monitor
- webhook_url (optional): POST alert notifications to this URL when status changes
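
A sketch of creating a subscription with a webhook. The schema does not list the allowed values for type, so "wallet" is an assumption inferred from its description, and the label and webhook URL are illustrative.

```typescript
// 30-day monitoring with optional push alerts on status change.
const sub = await client.callTool({
  name: "monitor_subscribe",
  arguments: {
    type: "wallet", // assumed enum value; schema only says "wallet address or entity name"
    value: "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045", // illustrative
    label: "treasury-hot-wallet", // illustrative internal reference
    webhook_url: "https://example.com/sentinel-alerts", // illustrative push target
  },
});
```
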
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses key behaviors: 30-day duration, cost ($0.010 USDC via x402), alert condition (sanctions/PEP/crypto blacklist), and optional webhook. With no annotations, this provides solid transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, front-loaded with main purpose, no unnecessary words. Perfectly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers essential aspects: duration, alert triggering, cost, webhook option. Lacks details on response (e.g., subscription ID) and unsubscription process, but acceptable for a subscription tool with no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema descriptions cover all 5 parameters (100% coverage). Description adds context about monitoring and alerts but doesn't significantly enhance parameter meaning beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it subscribes a wallet or entity for continuous monitoring over 30 days. Distinct from sibling tools like monitor_check (status check) and compliance_watchlist (listing).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies use case for continuous monitoring but does not explicitly state when to use or alternatives. No comparison to monitor_check or other siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

network_scan (Grade: A)

Entity relationship intelligence: finds all watchlist hits, traverses entity relation graph, screens connected entities, produces risk network map with composite scoring per node. Replaces 10-20 API calls + manual graph analysis. Costs $0.015 USDC via x402.

Parameters
- depth (optional): Graph traversal depth: 1 (direct connections) or 2 (connections of connections). Default: 1
- query (required): Entity name to scan (e.g. 'Global Capital Ltd')
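
A sketch of a depth-2 scan using the schema's example entity, with the same client assumptions as above.

```typescript
// Traverse connections of connections; depth defaults to 1 when omitted.
const graph = await client.callTool({
  name: "network_scan",
  arguments: { query: "Global Capital Ltd", depth: 2 },
});
```
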
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but the description details output (risk network map with composite scoring) and cost behavior. It does not mention performance or error cases but covers key aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, each providing value: purpose, replacement benefit, and cost. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description adequately explains the return (risk network map with composite scoring). It covers purpose, behavior, cost, and alternatives, leaving no critical gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds minimal extra meaning beyond the schema, only restating depth options. No additional parameter-specific context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is an entity relationship intelligence tool that finds watchlist hits, traverses graphs, and produces risk network maps. It distinguishes itself from siblings like compliance_watchlist (single entity check) by focusing on network traversal.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for comprehensive network analysis by stating it replaces 10-20 API calls. It also mentions cost ($0.015), indicating when to be cautious. However, it does not explicitly state alternatives or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stock_market (Grade: A)

Get SEMDEX and SEM indices from Stock Exchange of Mauritius. Costs $0.001 USDC via x402.

Parameters
- index (optional): Index name (semdex, sem10, demex). Omit for all.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses cost ($0.001 USDC via x402), but no annotations exist and other behavioral traits (e.g., rate limits, data freshness) are not mentioned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no unnecessary words, front-loaded with purpose and cost.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple one-parameter retrieval tool; could mention output format or update frequency, but overall complete enough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with one parameter's description; description adds cost info but nothing beyond schema for parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it retrieves SEMDEX and SEM indices from Stock Exchange of Mauritius, distinguishing it from sibling tools like compliance_jurisdiction_risk or forex_rates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use it versus alternatives; usage is implied by the tool's narrow data focus.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

transaction_screen (A)

Cross-border transaction pre-screening: checks sender + receiver against watchlists, evaluates jurisdiction risk, provides forex corridor rate, returns PROCEED/REVIEW/FLAG/BLOCK recommendation. Replaces 6 API calls. Costs $0.008 USDC via x402.

Parameters (JSON Schema)
  currency_to (optional): Target currency ISO code (default: MUR)
  sender_name (required): Sender entity name
  currency_from (optional): Source currency ISO code for corridor rate
  receiver_name (required): Receiver entity name
  sender_country (optional): Sender ISO country code (e.g. US, MU)
  receiver_country (optional): Receiver ISO country code
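To make the parameter interactions concrete, here is a hedged sketch of a plausible arguments payload; all entity names, countries, and currencies are invented for illustration, and currency_to is omitted to show the documented MUR default.

```python
# Illustrative tools/call payload for transaction_screen; entity names
# and codes are invented. currency_to is omitted, so it defaults to MUR
# per the schema; currency_from enables the forex corridor rate.
import json

payload = {
    "name": "transaction_screen",
    "arguments": {
        "sender_name": "Example Exports Ltd",  # required
        "receiver_name": "Sample Imports SA",  # required
        "sender_country": "US",                # optional ISO code
        "receiver_country": "MU",              # optional ISO code
        "currency_from": "USD",                # optional; corridor rate source
    },
}
print(json.dumps(payload, indent=2))
```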
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description adds significant context: it combines multiple checks, provides a recommendation, and mentions cost ($0.008 via x402). It does not detail how the recommendation is determined or whether it modifies data, but these are less critical for a screening tool. Since no annotations exist, there is nothing for the description to contradict.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences: the first defines the core functionality, the second highlights efficiency (replaces 6 API calls), and the third adds cost. Every sentence adds value without redundancy, and the most important information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple checks, no output schema), the description covers key aspects: inputs, the aggregated nature, output type (recommendation), and cost. It could be improved by stating whether the output includes the forex rate or just the recommendation, but it is still fairly complete for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, so the baseline is 3. The description adds context that parameters like currency_from and currency_to relate to the forex corridor rate, but it does not provide additional parameter-specific details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: cross-border transaction pre-screening with specific checks (watchlists, jurisdiction risk, forex rate) and a clear output recommendation (PROCEED/REVIEW/FLAG/BLOCK). It distinguishes itself from sibling tools by explicitly noting it replaces 6 API calls, indicating it is an aggregated solution.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for cross-border transactions but does not explicitly state when to use this tool vs. alternatives like individual compliance checks or forex rate lookups. No exclusions or alternative tool references are provided, leaving some ambiguity for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

travel_rule_screen (A)

FATF R16 Travel Rule compliance: screens both originator and beneficiary wallets, entity names, and jurisdictions in one call. Returns structured compliance packet with unique packetId that counter-parties can verify. Costs $0.005 USDC via x402.

Parameters (JSON Schema)
  purpose (optional): Transaction purpose
  amount_usd (optional): Transaction amount in USD (triggers threshold check)
  originator_name (optional): Originator entity/person name
  beneficiary_name (optional): Beneficiary entity/person name
  originator_address (required): Originator wallet address
  originator_country (optional): Originator ISO country code
  beneficiary_address (required): Beneficiary wallet address
  beneficiary_country (optional): Beneficiary ISO country code
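For illustration, a plausible arguments payload is sketched below; the wallet addresses are zero-padded placeholders rather than real addresses, and amount_usd is included only to show the threshold-check trigger.

```python
# Illustrative tools/call payload for travel_rule_screen; addresses are
# placeholders. Only the two wallet addresses are required; amount_usd
# triggers the threshold check per the schema.
import json

payload = {
    "name": "travel_rule_screen",
    "arguments": {
        "originator_address": "0x" + "0" * 39 + "1",   # required; placeholder
        "beneficiary_address": "0x" + "0" * 39 + "2",  # required; placeholder
        "originator_name": "Example Originator Ltd",   # optional
        "beneficiary_name": "Example Beneficiary SA",  # optional
        "originator_country": "US",                    # optional ISO code
        "beneficiary_country": "MU",                   # optional ISO code
        "amount_usd": 1500,                            # optional; threshold check
        "purpose": "invoice settlement",               # optional free text
    },
}
print(json.dumps(payload, indent=2))
```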
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds cost ($0.005 USDC via x402) and output structure (structured compliance packet with unique packetId). However, it does not disclose whether the operation is read-only, idempotent, or any error conditions. The cost information is useful but insufficient for a complete behavioral profile.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences long, concise, and front-loaded with the core purpose. It covers function, output, and cost without extraneous detail. A minor improvement would be paragraph or bullet structure, but it is efficient for its length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, compliance domain, regulatory context) and lack of output schema, the description provides a good overview but leaves gaps. It specifies the output is a 'structured compliance packet' but does not detail its fields or structure. No mention of prerequisites, error handling, or rate limits. The cost mention is positive but not sufficient for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema coverage is 100%, so the baseline is 3. The description mentions screening originator/beneficiary wallets, entity names, and jurisdictions, which maps to the parameters, but the schema already includes individual descriptions for each field. The description adds no new semantic meaning beyond what is already in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: FATF R16 Travel Rule compliance screening of both originator and beneficiary wallets, entity names, and jurisdictions in one call. The specific verb 'screens' and resource 'compliance' along with the regulation name make the purpose unambiguous. While sibling tools like compliance_wallet or compliance_watchlist exist, the description's focus on bilateral screening and regulatory compliance effectively differentiates it.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for Travel Rule compliance screening but provides no explicit guidance on when to use this tool versus its siblings (e.g., compliance_wallet, compliance_watchlist, transaction_screen). An agent would need to infer from context that this is for bilateral originator/beneficiary checks. No exclusions or alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

weather (A)

Get current weather from all Mauritius Met Service stations. Costs $0.001 USDC via x402.

Parameters (JSON Schema)
  station (optional): Optional station name (e.g. vacoas, plaisance). Omit for all.
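A minimal arguments sketch follows; the station name is taken from the schema's own examples, and omitting it should return all stations.

```python
# Illustrative tools/call payload for weather; "vacoas" is one of the
# schema's example stations. Omit "station" to get all stations.
import json

payload = {"name": "weather", "arguments": {"station": "vacoas"}}
print(json.dumps(payload, indent=2))
```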
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool fetches current weather and costs money, but does not mention rate limits, authentication, or data freshness. For a simple read operation, it is adequate but could be more thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no extraneous information. Purpose is front-loaded. Each sentence serves a clear role: stating the function and noting the cost.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple with one parameter and no output schema. The description explains what it does and the cost, but omits details about the output format (e.g., temperature, conditions). For an agent to use it effectively, output specifics would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, providing a baseline of 3. The description adds the 'all Mauritius Met Service stations' context but does not elaborate on the station parameter beyond the schema's existing description. No additional semantics are provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'Get' and resource 'current weather from all Mauritius Met Service stations'. It clearly distinguishes from sibling tools which cover compliance, economic, and monitoring domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions cost ($0.001 USDC via x402), implying it should be used only when necessary. However, no explicit alternatives or 'when-not' guidance is given, though no sibling weather tools exist.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
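Finally, a minimal end-to-end sketch of how a client might reach these tools, assuming the official MCP Python SDK (the mcp package) with streamable-HTTP support. The endpoint URL is a placeholder, and x402 payment negotiation is not shown, since the listing does not document that handshake.

```python
# Minimal sketch: calling a SENTINEL tool via the MCP Python SDK over
# streamable HTTP. The URL is a placeholder; x402 payment handling is
# assumed to happen out of band or via a payment-aware HTTP client.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.invalid/mcp"  # placeholder endpoint

async def main() -> None:
    # streamablehttp_client yields (read_stream, write_stream, get_session_id)
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Each paid tool charges USDC via x402 per its description.
            result = await session.call_tool(
                "weather",
                arguments={"station": "vacoas"},
            )
            print(result.content)

asyncio.run(main())
```

Any of the payload sketches above can be substituted into the call_tool invocation by swapping the tool name and arguments.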
