SENTINEL Compliance Intelligence
Server Details
AML/CFT compliance oracle: wallet screening, sanctions, PEPs, jurisdiction risk.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 22 of 22 tools scored.
Each tool has a clearly distinct purpose, covering jurisdiction risk, wallet/entity screening, due diligence, transaction monitoring, country intelligence, and facilitator operations. Overlaps are minimal and intentional (e.g., compound tools vs. atomic ones).
All tool names follow a consistent 'domain_action' pattern in snake_case (e.g., compliance_wallet, country_brief, monitor_subscribe). Acronyms like KYA are integrated cleanly without style breaks.
22 tools are well-scoped for a comprehensive compliance intelligence platform, covering core screening, monitoring, country data, and transaction checks without unnecessary redundancy.
The tool surface covers the full compliance workflow: jurisdiction risk, entity/wallet screening, due diligence, transaction and travel rule checks, continuous monitoring, network analysis, and Mauritius-specific intelligence. No obvious gaps.
Available Tools
22 tools

compliance_jurisdiction_risk (Grade: A)
Get composite risk score for any of 179 countries — FATF grey/blacklist, CPI, Basel AML Index. Costs $0.001 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| country_code | Yes | ISO 3166-1 alpha-2 country code (e.g. MU, US, RU, IR) | |
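The call itself is a standard MCP `tools/call` request. A minimal sketch in Python, assuming JSON-RPC framing over the Streamable HTTP transport; the helper name and the local ISO-shape check are illustrative, not part of the API:

```python
import json
import re

def jurisdiction_risk_request(country_code: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call payload for compliance_jurisdiction_risk.

    Checks the two-letter alpha-2 shape locally before any paid call is made.
    """
    if not re.fullmatch(r"[A-Z]{2}", country_code):
        raise ValueError(f"expected ISO 3166-1 alpha-2 code, got {country_code!r}")
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "compliance_jurisdiction_risk",
            "arguments": {"country_code": country_code},
        },
    })

payload = jurisdiction_risk_request("MU")
```

The cheap local shape check catches obvious mistakes (lowercase input, country names instead of codes) before the $0.001 fee comes into play.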
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the cost and implies a read-only operation ('Get'), but lacks details on rate limits or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no unnecessary words, front-loading the purpose and adding cost as a key detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description omits return format and error information, which is a gap for an API tool. However, it adequately explains the tool's core function.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the description does not add meaningful detail beyond the schema's example of country codes. The baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get composite risk score'), the resource ('any of 179 countries'), and the components (FATF, CPI, Basel AML Index), distinguishing it from sibling tools like compliance_mauritius.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description hints at usage (many countries, cost) but does not explicitly guide when to use this tool versus alternatives like compliance_mauritius or other compliance tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compliance_mauritius (Grade: A)
Search Mauritius FSC registry + ICIJ offshore leak connections. Costs $0.005 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Entity or person name to search | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the cost ($0.005 USDC via x402) and the data sources, which adds useful context beyond the tool name. However, it does not state whether the operation is read-only, nor mention rate limits or error behavior; with no annotations present, the description must carry those details itself.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences, each serving a clear purpose: stating the function and noting the cost. No unnecessary words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple search tool with one parameter and no output schema, the description covers the core functionality and cost. It lacks details on result format or failure modes, but the context is fairly complete given the simplicity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The parameter 'query' has a schema description stating 'Entity or person name to search'. The tool description adds the context that it searches specific registries (Mauritius FSC and ICIJ), which enriches the meaning beyond the schema alone. With 100% schema coverage, the baseline is 3, and the added context justifies a 4.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: search Mauritius FSC registry and ICIJ offshore leak connections. It uses a specific verb-resource pair and distinguishes itself from sibling compliance tools like compliance_jurisdiction_risk.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus others. It does not mention alternatives, exclusions, or prerequisites for usage.
compliance_wallet (Grade: A)
Screen a blockchain wallet address against sanctioned/blacklisted crypto addresses (OFAC SDN, USDT Blacklist, USDC Blacklist, Ransomwhere, OpenSanctions, UK OFSI). Costs $0.003 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain hint: btc, eth, trx, auto | auto |
| address | Yes | Blockchain wallet address (any chain — BTC, ETH, TRX, etc.) | |
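Since `chain` defaults to `auto`, a client can also pre-compute the hint from the address shape. A rough heuristic sketch; this is illustrative only, the server's own detection is authoritative, and these patterns check format, not checksums:

```python
import re

def guess_chain(address: str) -> str:
    """Heuristic chain hint for compliance_wallet's optional `chain` parameter."""
    if re.fullmatch(r"0x[0-9a-fA-F]{40}", address):
        return "eth"   # 20-byte hex account, EVM style
    if re.fullmatch(r"T[1-9A-HJ-NP-Za-km-z]{33}", address):
        return "trx"   # TRON base58check, 'T' prefix, 34 chars total
    if re.fullmatch(r"bc1[0-9a-z]{11,71}|[13][1-9A-HJ-NP-Za-km-z]{25,34}", address):
        return "btc"   # bech32 or legacy base58
    return "auto"      # unrecognized shape: let the server decide

args = {"address": "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"}
args["chain"] = guess_chain(args["address"])
```

Leaving the hint at `auto` is always safe; supplying it merely saves the server a detection step.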
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the cost, the payment mechanism, and the specific blacklists checked, but omits rate limits, response format, and potential errors.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence packed with relevant information, but it could be more readable with slight restructuring (e.g., splitting cost from blacklist enumeration).
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lacks details about the output format (e.g., boolean/risk score) and error handling. Given the cost and the presence of siblings, more completeness would help the agent decide.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds context by stating 'any chain' and linking the address parameter to the blacklist screening purpose, which is helpful beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the specific action: 'Screen a blockchain wallet address against sanctioned/blacklisted crypto addresses', listing several blacklists (OFAC SDN, USDT Blacklist, etc.). It clearly distinguishes from siblings like compliance_wallet_entity and compliance_watchlist.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the cost and payment method ($0.003 USDC via x402), which implies when it's appropriate (when cost is acceptable). However, it does not explicitly state when to use this tool vs. alternatives like compliance_jurisdiction_risk or compliance_watchlist.
compliance_wallet_entity (Grade: A)
Compound Web2+Web3 screen: wallet address + entity name in one call with convergence detection. Bridges blockchain wallets to traditional sanctions databases. Costs $0.003 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| list | No | Entity list filter | all |
| name | No | Entity or person name for cross-reference | |
| chain | No | Chain hint: btc, eth, trx, auto | auto |
| address | Yes | Blockchain wallet address (any chain) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden. It mentions the cost ($0.003 USDC) and the bridging of blockchain wallets to traditional sanctions databases, but does not disclose failure modes, idempotency, or rate limits. Adequate but not comprehensive.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that front-load key functionality (combined screen, convergence detection) and include cost. No redundant information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the core purpose and basic parameters, but given the absence of an output schema, it could briefly mention expected response (e.g., match status, risk score). Still, it is mostly complete for a screening tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all parameters. The description adds context about the combined screen but does not enhance individual parameter meaning beyond schema. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it performs a combined Web2+Web3 screen of wallet address and entity name with convergence detection. It distinguishes itself from sibling tools like compliance_wallet by explicitly mentioning the dual screen and convergence.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like compliance_wallet or compliance_watchlist. The description implies usage for combined screening, but lacks when-not or sibling differentiation.
compliance_watchlist (Grade: B)
Screen any entity against comprehensive global watchlist records (OFAC, UN, EU, PEP, Interpol, crypto). Costs $0.005 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| list | No | Watchlist to search | all |
| query | Yes | Entity name to screen (e.g. 'Vladimir Putin', 'Tornado Cash') | |
| threshold | No | Match confidence 0.0-1.0 | 0.75 |
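The `threshold` parameter suggests the server fuzzy-matches names and filters by confidence. A client-side sketch of those semantics, using a fabricated list of (name, confidence) pairs since the response shape is undocumented:

```python
def filter_matches(matches, threshold=0.75):
    """Apply compliance_watchlist's `threshold` semantics: keep only
    candidates whose match confidence meets or exceeds the cutoff."""
    return [(name, score) for name, score in matches if score >= threshold]

# Fabricated candidate scores for illustration only.
candidates = [("Vladimir Putin", 0.98), ("Vladimir Puchin", 0.62), ("Tornado Cash", 0.91)]
hits = filter_matches(candidates)         # default threshold 0.75
loose = filter_matches(candidates, 0.5)   # looser screen keeps near-misses too
```

Lowering the threshold widens recall at the cost of false positives; raising it toward 1.0 approaches exact-match behavior.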
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden. It mentions the cost and that it screens watchlists, but fails to disclose return behavior, side effects, or required permissions. Basic transparency is lacking.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first specifies action and scope, second notes cost. No wasted words, front-loaded with key information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 params, no output schema, and no annotations, the description only covers purpose and cost. Missing behavioral details like output format, match results, and usage constraints.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds some context (e.g., list of watchlist types) but largely repeats enum values. No additional meaning beyond schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool screens entities against comprehensive global watchlists including specific types like OFAC, UN, EU, PEP, Interpol, and crypto. It distinguishes from siblings like compliance_jurisdiction_risk (jurisdictions) and transaction_screen (transactions) by focusing on entity watchlist screening.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives, or any exclusion criteria. The cost mention is a minor constraint but does not help with tool selection among siblings.
country_brief (Grade: A)
Complete country intelligence brief: compliance risk assessment + live economic data. For Mauritius (MU) includes all 7 oracle feeds (forex, macro, monetary, stock market, weather, fuel). Replaces 8 API calls. Costs $0.010 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| country_code | Yes | ISO 3166-1 alpha-2 country code (e.g. MU, SG, VG) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It adds cost ($0.010 USDC via x402) and notes that for Mauritius it includes all 7 oracle feeds, implying varying completeness for other countries. However, it does not clarify if the tool is read-only, whether it requires authentication, or what happens for country codes not explicitly supported.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: two sentences providing the core purpose, scope, and cost, with no unnecessary words. It is front-loaded and easy to scan.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description should hint at the return value format. It mentions 'compliance risk assessment + live economic data' and '7 oracle feeds' for Mauritius, but does not describe the structure of the brief or what fields to expect, leaving a gap for an AI agent.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single parameter 'country_code' (ISO 3166-1 alpha-2). The description only repeats examples already in the schema, adding no new semantic meaning beyond what the schema provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Complete country intelligence brief: compliance risk assessment + live economic data' and specifies that it replaces 8 API calls, distinguishing it from sibling tools that focus on individual aspects like compliance_jurisdiction_risk or country_snapshot.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a comprehensive brief is needed rather than individual data points, and mentions it replaces 8 API calls. However, it does not explicitly state when not to use or directly name alternatives among the siblings.
country_snapshot (Grade: A)
Get complete Mauritius economic pulse — ALL feeds in one call. Costs $0.005 USDC via x402.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions a behavioral trait: cost ($0.005 USDC via x402). However, it does not disclose other behavioral aspects like authentication, rate limits, or data freshness. Annotations are absent, so the description carries the full burden, which is partially met.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with purpose. Every word adds value—no filler.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description adequately defines the tool's purpose but is vague on return details (e.g., what feeds are included, format). It could specify the nature of the economic pulse for complete context.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so schema coverage is 100%. With 0 parameters, the baseline is 4. The description does not need to add parameter details.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it gets 'complete Mauritius economic pulse' and 'ALL feeds in one call', identifying the specific resource (Mauritius economic data) and the action (aggregate retrieval). This distinguishes it from siblings like 'macro_indicators' or 'country_brief'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies it is for a broad economic overview of Mauritius but does not explicitly state when to use this tool over siblings or provide exclusions. No guidance on prerequisites or context.
due_diligence (Grade: A)
Compound entity screening package: watchlist screening + jurisdiction risk for detected nationalities + Mauritius FSC check + forex context + composite risk score. Replaces 5 separate API calls. Costs $0.010 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Entity name for due diligence (e.g. 'Acme Corp', 'John Smith') | |
| include_forex | No | Include forex rates for detected jurisdictions | true |
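Since `include_forex` defaults to true, a caller only needs to send it when opting out. A hypothetical argument-builder sketch (the helper is not part of the API):

```python
def due_diligence_args(query: str, include_forex: bool = True) -> dict:
    """Assemble arguments for the due_diligence compound tool, omitting
    `include_forex` when it matches the documented default (true)."""
    args = {"query": query}
    if not include_forex:
        args["include_forex"] = False
    return args

full = due_diligence_args("Acme Corp")                       # default: forex included
lean = due_diligence_args("John Smith", include_forex=False) # skip forex context
```

Omitting defaulted optionals keeps the payload minimal and makes the non-default choice visible in logs.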
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It mentions the cost ($0.010 USDC via x402) and the composite nature, which is helpful, but it does not indicate whether the tool is read-only, how errors are handled, or whether rate limits apply. The cost information adds transparency without full behavioral disclosure.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: the first lists all screening components, the second states the value proposition and cost. Every word contributes value with no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the complexity of a compound tool with 5 components, no output schema is provided and the description only mentions a 'composite risk score' without detailing the return structure. The agent lacks critical information about what fields to expect, making this incomplete for effective invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters described. The description adds context that 'include_forex' relates to 'forex context' mentioned in the summary, but does not provide deeper semantics beyond what the schema already conveys. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a compound screening package combining watchlist, jurisdiction risk, Mauritius FSC, forex, and composite risk score. It explicitly distinguishes from sibling tools by positioning itself as a replacement for 5 separate API calls, leaving no ambiguity about its purpose.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for comprehensive due diligence by stating it replaces 5 separate API calls, but it does not explicitly specify when to use this tool versus individual siblings like compliance_watchlist or compliance_jurisdiction_risk. The agent must infer that this is for a full screening, while single checks should use other tools.
facilitator_kya (Grade: A)
Know Your Agent — ERC-8004 registry lookup + sanctions screening + signed JWT attestation for any wallet address. Returns agent registration status, operator wallet, screening results, and coldStartSignals. FREE.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Wallet address to check (e.g. 0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045) | |
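The returned attestation is described as a signed JWT. A sketch of inspecting its claims without verification; the token below is fabricated for illustration, and real use must verify the signature against the facilitator's key:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying the signature.
    Structure-only inspection; never trust unverified claims."""
    _header_b64, payload_b64, _sig = token.split(".")
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Fabricated sample token with hypothetical claim names.
sample = ".".join([
    base64.urlsafe_b64encode(json.dumps({"alg": "ES256", "typ": "JWT"}).encode()).decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(
        {"sub": "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045", "registered": True}
    ).encode()).decode().rstrip("="),
    "sig",
])
claims = decode_jwt_claims(sample)
```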
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that the tool is free, performs a lookup and screening, and returns a signed JWT attestation. It does not mention side effects, authentication requirements, or rate limits, but for a read-heavy lookup tool this is adequate.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is front-loaded with the tool's purpose and packs all key information (registry lookup, sanctions, JWT, free) with no wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema and no annotations, the description fully explains the tool's purpose, inputs, outputs, and cost. It is complete for a single-parameter lookup tool with moderate complexity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (the input parameter is documented as a wallet address with an example). The tool description does not add any additional meaning beyond what the schema already provides, so a baseline score of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it's a 'Know Your Agent' tool that combines ERC-8004 registry lookup, sanctions screening, and JWT attestation for any wallet address. It lists specific return values (registration status, operator wallet, screening results, coldStartSignals), distinguishing it from sibling compliance tools that are more granular.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description names the tool 'Know Your Agent' and lists its components, implying it is used for comprehensive agent due diligence. However, it does not explicitly state when to use this tool over siblings like 'compliance_wallet' or 'compliance_watchlist', leaving some ambiguity for an AI agent.
facilitator_supported (Grade: B)
Get SENTINEL facilitator capabilities — supported payment schemes, networks, assets, and compliance features. FREE.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits. It only adds the word 'FREE' (likely cost indicator) but does not mention idempotency, side effects, data freshness, or whether any setup is required. The tool likely returns static data, but this is not confirmed.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short (one sentence) and front-loaded with the core action. It could be slightly more structured (e.g., list items) but is efficient and avoids fluff.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description partially explains what the tool returns (payment schemes, networks, assets, compliance features). However, it lacks detail on the format, scope (e.g., global or regional), and how to interpret results. For a tool with zero complexity, it is adequate but not thorough.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description does not need to explain parameter semantics. Baseline score of 4 is appropriate given no parameters exist and schema coverage is 100%.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves SENTINEL facilitator capabilities, listing specific categories (payment schemes, networks, assets, compliance features). The verb 'Get' and the noun 'capabilities' make the purpose unmistakable, and it distinguishes from sibling tools that focus on compliance or data queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings like 'facilitator_kya' or 'compliance_wallet'. The description does not mention alternatives or exclusion criteria, leaving the agent to infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forex_rates (B)
Get MUR exchange rates from Bank of Mauritius. Costs $0.001 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| currency | No | Optional ISO currency code (e.g. USD, EUR). Omit for all rates. | |
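Per the schema above, the call arguments reduce to at most one field. A minimal sketch of how a client might assemble and sanity-check them before paying the per-call fee (the currency value and validation helper are illustrative, not part of the tool's API):

```python
# Hypothetical arguments for a forex_rates call, per the table above.
all_rates_args = {}                      # omit "currency" -> all MUR rates
single_rate_args = {"currency": "USD"}   # one ISO code -> that rate only

def is_valid_currency(code: str) -> bool:
    """ISO 4217 currency codes are three uppercase letters."""
    return len(code) == 3 and code.isalpha() and code.isupper()

# Cheap client-side check before spending $0.001 USDC via x402.
assert is_valid_currency(single_rate_args["currency"])
```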
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full responsibility. It discloses a behavioral trait: cost ($0.001 USDC via x402). However, it omits other relevant traits like rate limits, data freshness, or permission requirements, leaving gaps the agent must infer.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two-sentence description is efficient and front-loaded: first sentence states purpose, second adds cost. No unnecessary words, though it could be slightly more structured (e.g., bullet points for clarity).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one optional parameter, no output schema), the description covers the core action but does not explain the response format or what happens when 'currency' is omitted. Since the schema covers parameter behavior, the main gap is the output description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and already describes the parameter well ('Optional ISO currency code'). The description adds no additional semantics; it only mentions getting MUR rates, which is already implied. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool retrieves MUR exchange rates from the Bank of Mauritius, with a specific verb ('Get') and resource ('MUR exchange rates'). It distinguishes itself from sibling tools like 'stock_market' or 'fuel_prices' by focusing on forex data from a specific source.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool over alternatives (e.g., for specific currencies besides MUR, or how it differs from other data tools). The cost mention is useful but not contextualized within the tool's broader use case.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fuel_prices (B)
Get petroleum retail prices from STC Mauritius. Costs $0.001 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| product | No | Optional product (mogas, gasoil, lpg). Omit for all. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses cost and payment method (x402) but does not mention read-only nature, authentication requirements, or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-load purpose and cost. No wasted words, though brevity sacrifices some completeness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple tool with one optional parameter and no output schema. Missing explanation of 'x402' and what the response contains, but sufficient given the tool's low complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'product'. Description does not add extra meaning beyond the schema's own description, so baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly specifies verb 'Get', resource 'petroleum retail prices', and source 'STC Mauritius'. Unambiguous and distinguishes from sibling tools like forex_rates or stock_market.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives, no prerequisites or exclusions. Only mentions cost but not contextual usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
govdata_search (A)
Search 812+ Mauritius government datasets. Costs $0.001 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| q | Yes | Search query for government datasets | |
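With a single required parameter, the only client-side failure mode the schema implies is an empty query. A minimal sketch (the query string is hypothetical):

```python
# Hypothetical arguments for govdata_search: "q" is the sole,
# required parameter per the table above.
search_args = {"q": "road traffic statistics 2023"}  # illustrative query

# Guard against a missing or blank query before paying the x402 fee.
assert search_args.get("q", "").strip(), "q is required and must be non-empty"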
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must cover behavioral traits. It mentions cost ($0.001 USDC via x402), a key behavioral aspect, but does not disclose other traits like read-only nature, rate limits, or result format. Important gaps remain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. Front-loaded with purpose, includes critical cost information. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is a simple search with one parameter; description covers domain and cost. However, lacks details on output format, search syntax, or pagination. Adequate but incomplete for full agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers 100% of parameters with description 'Search query for government datasets'. Description does not add extra meaning beyond that. Baseline 3 applies as description adds no additional semantic value for the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it searches '812+ Mauritius government datasets', specifying verb (search) and resource (datasets with count and location). Distinguishes from all sibling tools, which cover compliance, finance, weather, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for searching Mauritius government datasets, but no explicit guidance on when to use vs alternatives or when not to use. Siblings are distinct, reducing ambiguity, but still lacking explicit directives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
macro_indicators (B)
Get Mauritius macro-economic indicators (GDP, CPI, unemployment, tourism). Costs $0.002 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| indicator | No | Optional indicator (gdp, cpi, unemployment, tourism, fdi, trade, population). Omit for all. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full burden. It only discloses the cost ($0.002 USDC via x402) but no other behavioral traits like data freshness, rate limits, or whether it's a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences are efficient and front-loaded with purpose. Cost information is relevant. Minor improvement would be structuring for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one optional parameter and no output schema, the description covers purpose and cost. However, it omits what the tool returns (e.g., data format) which could be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the parameter 'indicator' already listing options. The description repeats 'GDP, CPI, unemployment, tourism' but adds no new semantic meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's verb 'Get' and resource 'Mauritius macro-economic indicators' with specific examples (GDP, CPI, unemployment, tourism). It distinguishes from sibling tools like country_brief or country_snapshot by focusing on macro indicators.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description only mentions the cost, not contextual usage like when to choose this over other indicators tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
monetary_policy (A)
Get Bank of Mauritius Key Repo Rate and Prime Lending Rates. Costs $0.002 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the cost ($0.002 USDC via x402), a key behavioral trait for agent decisions, but lacks details on data freshness, response format, or any rate limits. With no annotations, the description carries the full burden and partially fulfills it.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, each providing essential information: purpose and cost. No wasted words, front-loaded with the action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description is largely complete for a simple rate retrieval tool. It covers the key purpose and cost, though it could optionally mention the response format for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Since there are zero parameters, the baseline is 4. The description does not need to add further meaning beyond the empty schema, and it appropriately omits parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves specific financial rates (Key Repo Rate and Prime Lending Rates) from Bank of Mauritius, using the verb 'Get'. It distinguishes from sibling tools like forex_rates and macro_indicators by naming unique resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool vs alternatives. The cost mention hints at a condition, but there is no explicit context about when it is appropriate or when to choose another tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
monitor_check (A)
Check monitoring subscription status. Re-screens the subscribed wallet/entity against the latest database and returns current alert state. Costs $0.003 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| subscription_id | Yes | Subscription ID (MON-XXXXXX-XXXXXXXX) | |
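The schema gives the subscription ID format as MON-XXXXXX-XXXXXXXX. A client could validate the shape before spending $0.003 per call; this sketch assumes the X placeholders stand for uppercase alphanumerics, and the ID below is hypothetical:

```python
import re

# Pattern derived from the documented format MON-XXXXXX-XXXXXXXX,
# assuming X = uppercase letter or digit.
SUB_ID_RE = re.compile(r"^MON-[A-Z0-9]{6}-[A-Z0-9]{8}$")

check_args = {"subscription_id": "MON-4F7K2Q-9XT3LMW8"}  # hypothetical ID
assert SUB_ID_RE.match(check_args["subscription_id"])
```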
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, so the description must disclose behavior. It states that the tool re-screens and returns the alert state, but the term 're-screens' could imply mutation. The cost hints at an action, yet ambiguity remains about side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, then cost. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so return details are vague ('current alert state'). Adequate for a simple check but missing formal response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with description. The tool description adds no additional parameter context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'Check' and resource 'monitoring subscription status'. It distinguishes from sibling 'monitor_subscribe' by implying it checks an existing subscription.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes cost and that it re-screens, implying usage after subscription. No explicit when-to-use or alternatives, but the context is inferred from the sibling tool name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
monitor_subscribe (A)
Subscribe a wallet or entity for 30-day continuous monitoring. If the target appears on any sanctions, PEP, or crypto blacklist, the status flips to 'alerted'. Optional webhook for push notifications. Costs $0.010 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Monitor a wallet address or entity name | |
| chain | No | Chain hint for wallets (default: auto) | |
| label | No | Your internal reference label | |
| value | Yes | The wallet address or entity name to monitor | |
| webhook_url | No | POST alert notifications to this URL when status changes | |
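The table above implies a five-field argument object with two required fields. A minimal sketch of assembling it (the address, chain, label, and webhook URL are all illustrative):

```python
# Hypothetical arguments for monitor_subscribe, following the
# parameter table above. All concrete values are made up.
subscribe_args = {
    "type": "wallet",                                        # required: wallet or entity
    "value": "0x52908400098527886E0F7030069857D2E4169EE7",   # required: target to monitor
    "chain": "ethereum",                                     # optional chain hint (default: auto)
    "label": "client-42-hot-wallet",                         # optional internal reference
    "webhook_url": "https://example.com/hooks/alerts",       # optional push target
}

# Required fields per the schema.
REQUIRED = {"type", "value"}
assert REQUIRED <= subscribe_args.keys()
```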
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes triggers (sanctions, PEP, crypto blacklist), duration, optional webhook, and cost. Adds value beyond schema without annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three clear, front-loaded sentences with no wasted words. All sentences add value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers key aspects: subscription duration, alert triggers, webhook, cost. Missing return info but not critical for a subscribe tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so description adds no extra parameter semantics. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it subscribes a wallet or entity for 30-day monitoring, distinguishing it from sibling tools like monitor_check and compliance_watchlist.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implied usage for starting continuous monitoring, but no explicit guidance on when not to use or alternatives provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
network_scan (A)
Entity relationship intelligence: finds all watchlist hits, traverses entity relation graph, screens connected entities, produces risk network map with composite scoring per node. Replaces 10-20 API calls + manual graph analysis. Costs $0.015 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| depth | No | Graph traversal depth: 1 (direct connections) or 2 (connections of connections). Default: 1 | |
| query | Yes | Entity name to scan (e.g. 'Global Capital Ltd') | |
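Since depth is constrained to 1 or 2, a client can reject out-of-range values before paying the $0.015 fee. A minimal sketch using the schema's own example entity name:

```python
# Hypothetical arguments for network_scan. Depth must be 1 or 2
# per the table above; depth 2 follows connections of connections.
scan_args = {"query": "Global Capital Ltd", "depth": 2}

assert scan_args["query"].strip(), "query is required"
assert scan_args.get("depth", 1) in (1, 2), "depth must be 1 or 2"
```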
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions cost and output (risk network map), but does not disclose whether the operation is read-only, authentication requirements, or limits (e.g., max depth beyond 2). Adequate but incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with purpose, and includes cost and efficiency details. No redundant information, though it could be slightly more structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema, but the description compensates by describing the output as a risk network map with composite scoring. It lacks details on format or structure but is reasonably complete for initial understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for both parameters (query and depth with default). The description adds no additional meaning beyond what the schema provides, so baseline of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly and specifically states the tool's function: it finds watchlist hits, traverses entity relation graph, screens connected entities, and produces a risk network map. It distinguishes itself from sibling tools like compliance_watchlist by emphasizing graph traversal and composite scoring.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states it replaces 10-20 API calls and manual graph analysis, implying when to use it over individual queries. However, it does not mention when not to use it or provide direct alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stock_market (B)
Get SEMDEX and SEM indices from Stock Exchange of Mauritius. Costs $0.001 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| index | No | Optional index name (semdex, sem10, demex). Omit for all. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description bears full burden. It discloses the cost ($0.001 USDC via x402) as a behavioral trait, which is helpful. However, it lacks details on rate limits, authentication, or potential failures.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences covering purpose and cost, no fluff. Could be slightly improved by aligning the indices mentioned with the schema, but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and simple input, the description is adequate but lacks details on return format. For a straightforward index retrieval tool, it covers the essentials but could be more complete by specifying the response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%: the parameter 'index' has a clear description listing values and default behavior. The tool description adds no additional meaning beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves specific indices from a named exchange, distinguishing it from siblings which focus on compliance, country data, etc. However, it lists 'SEMDEX and SEM indices' while the schema includes 'semdex, sem10, demex', causing slight ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The cost mention is a constraint but does not clarify context or exclusions. Sibling tools cover different domains, but no comparative direction is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
transaction_screen (A)
Cross-border transaction pre-screening: checks sender + receiver against watchlists, evaluates jurisdiction risk, provides forex corridor rate, returns PROCEED/REVIEW/FLAG/BLOCK recommendation. Replaces 6 API calls. Costs $0.008 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| currency_to | No | Target currency ISO code (default: MUR) | |
| sender_name | Yes | Sender entity name | |
| currency_from | No | Source currency ISO code for corridor rate | |
| receiver_name | Yes | Receiver entity name | |
| sender_country | No | Sender ISO country code (e.g. US, MU) | |
| receiver_country | No | Receiver ISO country code | |
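Only the two entity names are required; the country and currency fields refine jurisdiction risk and the corridor rate. A minimal sketch (all entity names and codes are hypothetical):

```python
# Hypothetical arguments for transaction_screen, per the table above.
screen_args = {
    "sender_name": "Acme Trading Ltd",      # required
    "receiver_name": "Indigo Exports SA",   # required
    "sender_country": "US",                 # optional ISO country code
    "receiver_country": "MU",               # optional ISO country code
    "currency_from": "USD",                 # optional, enables corridor rate
    "currency_to": "MUR",                   # optional (default: MUR)
}

# The description names four possible recommendation values.
RECOMMENDATIONS = {"PROCEED", "REVIEW", "FLAG", "BLOCK"}
assert {"sender_name", "receiver_name"} <= screen_args.keys()
```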
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It discloses all actions (watchlist checks, risk evaluation, forex rate) and cost. It does not mention destructiveness or permissions, but the tool appears read-only, and the description is upfront about its operations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with the core action, and every sentence adds value (purpose, replacements, cost). No unnecessary fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema, and the description only mentions four recommendation types without detailing the output structure (e.g., whether it's a string or an object with reasons). Given the complexity, the description should include more output details for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, but the tool description adds context (e.g., default for currency_to is MUR) and clarifies the tool's purpose for each parameter. This goes beyond the schema descriptions, providing additional meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool pre-screens cross-border transactions, checks watchlists, evaluates jurisdiction risk, provides forex rate, and returns a recommendation. It distinguishes from siblings by explicitly noting it replaces 6 API calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating it replaces 6 API calls, suggesting it should be used instead of multiple individual queries. However, it lacks explicit when-to-use versus alternatives, though sibling names hint at the alternative individual tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
travel_rule_screen (A)
FATF R16 Travel Rule compliance: screens both originator and beneficiary wallets, entity names, and jurisdictions in one call. Returns structured compliance packet with unique packetId that counter-parties can verify. Costs $0.005 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| purpose | No | Transaction purpose | |
| amount_usd | No | Transaction amount in USD (triggers threshold check) | |
| originator_name | No | Originator entity/person name | |
| beneficiary_name | No | Beneficiary entity/person name | |
| originator_address | Yes | Originator wallet address | |
| originator_country | No | Originator ISO country code | |
| beneficiary_address | Yes | Beneficiary wallet address | |
| beneficiary_country | No | Beneficiary ISO country code | |
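Of the eight parameters, only the two wallet addresses are required; per the schema, amount_usd triggers the threshold check. A minimal sketch (every concrete value below is hypothetical):

```python
# Hypothetical arguments for travel_rule_screen, per the table above.
tr_args = {
    "originator_address": "0x8617E340B3D01FA5F11F306F4090FD50E238070D",   # required
    "beneficiary_address": "0xDE709F2102306220921060314715629080E2FB77",  # required
    "originator_name": "Jane Example",       # optional
    "beneficiary_name": "Acme Trading Ltd",  # optional
    "originator_country": "US",              # optional ISO country code
    "beneficiary_country": "MU",             # optional ISO country code
    "amount_usd": 1500,                      # optional; triggers threshold check
    "purpose": "invoice settlement",         # optional
}

assert {"originator_address", "beneficiary_address"} <= tr_args.keys()
```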
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses the cost ($0.005) and the return of a structured packet with a packetId, but no annotations exist. It does not mention side effects, rate limits, or reversibility. Adequate but limited behavioral detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose and key differentiators. No filler; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, output, and cost. Missing some behavioral details, such as the threshold trigger (implied by the amount_usd description), but sufficient for basic tool usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All 8 parameters have individual descriptions in the schema (100% coverage). The description adds the cost and the packetId output, but no additional parameter-specific meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states that the tool screens originator/beneficiary wallets, entity names, and jurisdictions for FATF Travel Rule compliance. The specific verb ('screens') and resource ('compliance packet') distinguish it from siblings like compliance_wallet and transaction_screen.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions the packetId returned for counterparty verification and the cost, but does not explicitly compare the tool to its siblings or state when not to use it. Usage context is implied rather than fully articulated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
weather
Get current weather from all Mauritius Met Service stations. Costs $0.001 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| station | No | Optional station name (e.g. vacoas, plaisance). Omit for all. | |
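The single-parameter behavior described above can be sketched in a few lines. The helper below is hypothetical; the `station` parameter and its omit-for-all semantics come from the table, while the lowercasing mirrors the lowercase examples (vacoas, plaisance) and is an assumption.

```python
# Hypothetical sketch: arguments for the weather tool. The optional
# `station` parameter selects one Mauritius Met Service station;
# omitting it returns all stations (per the table above).

def build_weather_args(station=None):
    """Return the weather tool's argument object; an empty dict means all stations."""
    return {"station": station.lower()} if station else {}
```

So `build_weather_args()` yields `{}` (all stations), while `build_weather_args("Vacoas")` yields `{"station": "vacoas"}`.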
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavior. It mentions a cost of $0.001 USDC via x402, which is a behavioral trait. However, it does not disclose other aspects like rate limits, idempotency, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences that front-load the core purpose and pricing. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, and the description does not explain the return format or structure of the weather data. While the tool is simple, the lack of output details limits completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (one parameter fully described). The description adds no extra meaning beyond what the schema provides; it only repeats the optional nature and example values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets current weather from Mauritius Met Service stations, specifying the verb 'Get', resource 'current weather', and scope. It distinguishes itself from sibling tools (compliance, finance, govdata) as a unique weather tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, and no context on how it compares to other tools on the server.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
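Before publishing, a local sanity check of the payload can catch obvious mistakes. The checks below are a hedged sketch: the two fields ($schema and maintainers[].email) come from the example above, but any further validation Glama performs server-side is not documented here.

```python
# Hypothetical pre-publish check for /.well-known/glama.json.
import json

def check_glama_json(text):
    """Parse the claim file and verify the documented fields are present."""
    doc = json.loads(text)  # must be valid JSON
    assert doc.get("$schema") == "https://glama.ai/mcp/schemas/connector.json"
    maintainers = doc.get("maintainers", [])
    # at least one maintainer, each with a plausible email address
    assert maintainers and all("@" in m.get("email", "") for m in maintainers)
    return doc

payload = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}"""
check_glama_json(payload)
```

Remember to replace the example email with the address tied to your Glama account before serving the file.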
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.