Server Details

AMLOracle — 12-tool AML/CFT MCP: 87k sanctions names, PEP screening, adverse media, SAR/STR.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

[Diagram: MCP client → Glama MCP Gateway → MCP server]

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade B

Average 3.3/5 across 12 of 12 tools scored.

Server Coherence: Grade A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: screening, risk assessment, news, calendar, health, UBO, and composite KYC. No two tools overlap significantly; even related tools like sanctions_screen and sanctions_detail serve different functions (screening vs. detail lookup).

Naming Consistency: 3/5

Tool names follow a two-word underscore pattern, but the grammatical structure varies: some are noun phrases (adverse_media, aml_news), others combine a noun with an action verb (pep_check, ubo_lookup, watchlist_update). There is no consistent verb_noun or noun_verb convention.

Tool Count: 5/5

12 tools cover the core AML compliance workflow without being excessive. Each tool addresses a specific need (screening, risk, UBO, news, health) and the count feels well-scoped for the domain.

Completeness: 4/5

The tool set covers main AML operations: sanctions, PEP, adverse media, country/transaction risk, UBO, and a KYC bundle. Minor gaps exist, such as a dedicated customer risk rating tool or reporting functionality, but the essential lifecycle is present.

Available Tools

12 tools
adverse_media (Grade C)

Negative news screening: fraud, money laundering, corruption, sanctions. AMLR Art. 55 ongoing monitoring.

Parameters (JSON Schema):
- lang (optional): Language: en or de (default: en)
- name (optional): Person or company name to screen
- limit (optional): Max articles 1-20 (default: 10)
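
Since the listing documents the schema but no call pattern, here is a minimal sketch of invoking this tool with the official MCP TypeScript SDK. The endpoint URL is a placeholder (the listing does not publish one), and because the server exposes no output schema, the result is treated as opaque content. The same call shape applies to the sibling tools below.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint; the listing above does not publish the server URL.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/amloracle/mcp"),
);

const client = new Client({ name: "aml-demo", version: "1.0.0" });
await client.connect(transport);

// All three parameters are optional per the schema (defaults: lang=en, limit=10).
const result = await client.callTool({
  name: "adverse_media",
  arguments: { name: "Ivan Petrov", lang: "en", limit: 5 },
});

// No output schema is published, so treat the payload as opaque content blocks.
console.log(result.content);
```
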
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully disclose behavioral traits. It does not indicate whether the tool is read-only or destructive, nor what authentication it requires, and it says nothing about output format or side effects. The description covers only the screening focus, not operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
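
The MCP spec does define structured annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) that would address this. Below is a sketch of what the server author could declare for this tool using the TypeScript SDK; the hint values are plausible assumptions about a screening tool, not behavior confirmed by AMLOracle.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "amloracle", version: "0.1.0" });

server.registerTool(
  "adverse_media",
  {
    description:
      "Negative news screening: fraud, money laundering, corruption, sanctions.",
    inputSchema: {
      name: z.string().optional().describe("Person or company name to screen"),
      lang: z.enum(["en", "de"]).optional().describe("Language (default: en)"),
      limit: z.number().int().min(1).max(20).optional()
        .describe("Max articles (default: 10)"),
    },
    annotations: {
      readOnlyHint: true,  // assumption: screening does not modify server state
      openWorldHint: true, // assumption: the tool queries external news sources
    },
  },
  async (args) => ({
    // Placeholder handler; the real implementation is not public.
    content: [{ type: "text", text: `screened: ${args.name ?? "(no name)"}` }],
  }),
);
```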

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that front-loads the key action and topics. It is efficient with no wasted words. However, it could be slightly more structured by explicitly stating the action on the input entity (e.g., 'Screens a person or company for adverse media').

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description should provide more operational context. It does not explain what the output contains (e.g., article snippets, risk scores), pagination, or rate limits. For a screening tool among many siblings, this incompleteness may hinder correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description does not add extra meaning to parameters beyond what the schema provides. The schema descriptions for 'lang', 'name', and 'limit' are clear and adequate, so no further clarification is needed from the tool description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Negative news screening' and lists relevant topics (fraud, money laundering, corruption, sanctions). It conveys the core function, though it does not explicitly mention screening persons or companies, which is implied by the 'name' parameter. The purpose is distinguishable from siblings like 'aml_news' or 'sanctions_screen' but could be more precise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'aml_news' or 'sanctions_screen'. There is no mention of prerequisites, context, or when not to use it. The description only states what it does, leaving the agent without decision-making support.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

aml_news (Grade B)

AMLR / AMLA / AMLD6 regulatory news. Topics: general, amla, amlr, sanctions, crypto_aml, fatf, de_geldwaesche, pep, kyc, str.

Parameters (JSON Schema):
- lang (optional): Language: en or de (default: en)
- limit (optional): Max articles 1-20 (default: 10)
- topic (optional): Topic: general, amla, amlr, sanctions, crypto_aml, fatf, de_geldwaesche, pep, kyc, str
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility for behavioral disclosure. However, it only describes content and topics, with no mention of side effects, rate limits, or whether the operation is read-only. This is insufficient for a safe invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise—a single sentence plus a list of topics. It is front-loaded with the tool's purpose. However, the list is inline rather than structured (e.g., bullet points), which slightly reduces clarity. No redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple, but the description lacks information about the return value (e.g., format of news articles). With no output schema, the agent must guess the output structure. The description covers inputs adequately but not outputs, making it minimally viable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema coverage is 100% with descriptions for all three parameters. The description reiterates the topic list but adds no new semantics beyond what the schema already provides. Baseline score of 3 applies since schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly indicates the tool provides news about AMLR, AMLA, and AMLD6 regulatory topics. It lists specific topics, distinguishing it from sibling tools like adverse_media or country_risk. The verb 'get' is implied, making it specific enough for an agent.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool compared to alternatives. No when-to-use or when-not-to-use advice is given, leaving the agent to infer usage from the topic list alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

amlr_calendar (Grade B)

AMLR compliance milestones. Key: 1 July 2026 full application, €10k cash limit, crypto Travel Rule €0.

Parameters (JSON Schema): none

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It does not disclose whether the tool is read-only, has side effects, or any behavioral constraints. The brief text implies static information, but lacks explicit transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise with a single sentence. It front-loads the key information. However, it could be slightly more structured or include additional context like output format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters or output schema, the description is minimally viable. It provides the core subject but lacks details on how the milestones are presented (e.g., list, text) or any additional metadata. Adequate but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, so baseline is 4. The description adds meaning beyond the empty schema by specifying the content (AMLR milestones and key dates). No further parameter documentation is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is about 'AMLR compliance milestones,' which is specific and distinguishes it from sibling tools like 'aml_news' or 'sanctions_screen' that deal with different compliance aspects. However, it lacks an explicit verb like 'get' or 'list,' which slightly reduces clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description only gives a brief fact about milestones, leaving the agent to infer usage context. It does not mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

country_risk (Grade A)

Country risk assessment: FATF Blacklist, Greylist, EU High-Risk. AMLR Art. 16+18.

Parameters (JSON Schema):
- country (optional): Country name or ISO code e.g. 'Iran', 'Nigeria', 'RU'
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description mentions lists and regulation but omits behavioral traits like read-only nature, auth requirements, or error handling. Needs more detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise with no wasted words. Two fragments effectively convey core purpose and references.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple single-parameter tool, but it lacks any description of output or behavior. With no output schema, the description could usefully detail the expected return format and error states.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Parameter 'country' schema has 100% coverage with examples. Description adds context about the type of risk assessment (FATF, EU), enhancing schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description specifies country risk assessment using FATF Blacklist, Greylist, and EU High-Risk, with regulatory reference. Clearly distinguishes from sibling tools like sanctions_screen or pep_check.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies use for AML country-risk evaluation but gives no explicit guidance on when to prefer it over sibling tools or when not to use it. No alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

health_check (Grade A)

AMLOracle server status, backend checks, and watchlist cache status.

Parameters (JSON Schema): none
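
Given that the usage context is undocumented, one plausible pattern is to call this once after connecting, before any screening work. A sketch reusing the client from the adverse_media example; the response shape is not documented.

```typescript
// Zero-argument call; useful as a connectivity and cache-freshness probe
// before running sanctions or PEP screening.
const health = await client.callTool({
  name: "health_check",
  arguments: {},
});
console.log(health.content); // undocumented shape; inspect before parsing
```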

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Given no annotations, the description partially discloses behavioral traits (e.g., checks multiple components) but does not explicitly state it is read-only, has no side effects, or whether it makes external calls.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with key purpose, no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no params, no annotations, and no output schema, the description is adequate but lacks detail on return values and usage context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters; the schema coverage is 100%. The description adds meaning by hinting at the output (status, backend, cache) but could detail return format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks AMLOracle server status, backend checks, and watchlist cache status, distinguishing it from sibling tools that focus on specific AML data operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It does not specify scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

kyc_bundle (Grade B)

Full KYC in one call: Sanctions + PEP + Adverse Media + Country Risk. Returns overall risk level and AMLR decision.

Parameters (JSON Schema):
- name (optional): Person or company name
- country (optional): Country of origin/residence
- entity_type (optional): person or company (default: person)
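
A sketch of the composite call, reusing the client from the adverse_media example; entity_type falls back to person when omitted, per the schema, and the shape of the risk-level/decision payload is an unknown.

```typescript
// One call instead of four separate screenings.
const kyc = await client.callTool({
  name: "kyc_bundle",
  arguments: {
    name: "Ivan Petrov",
    country: "RU",         // optional; feeds the country-risk component
    entity_type: "person", // schema default; "company" is the alternative
  },
});

// The overall risk level and AMLR decision arrive without an output schema,
// so parse defensively rather than assuming a fixed JSON structure.
console.log(kyc.content);
```
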
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description only mentions the return values (risk level, AMLR decision). It does not disclose any behavioral traits such as side effects, required permissions, or data handling beyond the return.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences, front-loaded with the core functionality and no unnecessary information. Every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a multi-check KYC tool, the description only hints at outputs but lacks details on how risk level or AMLR decision are calculated or interpreted. No output schema is provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents parameters adequately. The description adds no extra meaning beyond listing components, which is already clear from the purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it bundles Sanctions, PEP, Adverse Media, and Country Risk into one call and returns overall risk level and AMLR decision. This distinguishes it from sibling tools that focus on individual checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies a consolidated use case ('Full KYC in one call') but does not explicitly say when to prefer this over individual sibling tools or provide any exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pep_check (Grade B)

PEP (Politically Exposed Person) check via Wikidata. AMLR Art. 22 enhanced CDD for PEPs.

Parameters (JSON Schema):
- name (optional): Full name e.g. 'Vladimir Putin', 'Angela Merkel'
- country (optional): Country to narrow search
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It only mentions the data source (Wikidata) but omits critical details like read-only nature, error handling, or whether results are cached. This is insufficient for an agent to anticipate behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that immediately states the purpose and relevant regulation. Every word adds value, making it highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description fails to explain what the tool returns (e.g., boolean, risk level, list). The simple parameter set is covered, but the omission of result format leaves the agent uncertain about how to use the output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear parameter descriptions for 'name' and 'country'. The tool description adds no extra semantic value beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a PEP check via Wikidata, referencing AMLR Art. 22 enhanced CDD, which distinguishes it from sibling tools like sanctions_screen or adverse_media.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives. It does not mention when not to use it or list other tools for different checks, leaving the agent to infer context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sanctions_detail (Grade B)

Full entity record for sanctions hit. Returns UN XML detail, aliases, nationality, designation.

Parameters (JSON Schema):
- name (optional): Name to look up
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It indicates a read operation ('returns') and lists returned data types, but does not disclose permissions, rate limits, or side effects. The lack of any side-effect mention is acceptable for a read tool, but more detail would improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that conveys key information without redundancy. Every word adds value, making it concise and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description lists return fields (UN XML detail, aliases, nationality, designation), which is helpful. However, it does not clarify if the input name must be exact or if partial matches are supported. Overall, it is mostly complete for a detail retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter 'name' described as 'Name to look up'. The description adds no extra parameter semantics beyond the schema, such as format or case sensitivity. Baseline 3 is appropriate when schema fully covers parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it returns a 'full entity record for sanctions hit', specifying fields like UN XML detail, aliases, nationality, designation. This clearly indicates the tool retrieves detailed sanctions information, distinguishing it from sibling tools like 'sanctions_screen' which likely performs screening. However, it could explicitly contrast with siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'sanctions_screen' or 'adverse_media'. The description does not specify prerequisites or scenarios, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sanctions_screen (Grade A)

Screen any name against EU FSF (14k), OFAC SDN (69k), UN SC (3k), Interpol (13k) watchlists. AMLR Art. 35.

Parameters (JSON Schema):
- name (optional): Name to screen e.g. 'Gazprom', 'Ivan Petrov'
- threshold (optional): Match threshold 0.7-1.0 (default: 0.85)
- entity_type (optional): person or company (default: any)
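
To make the threshold semantics concrete, a sketch reusing the client from the adverse_media example; the trade-off noted in the comment follows from the schema's 0.7-1.0 bound, not from documented server behavior.

```typescript
// threshold is bounded to 0.7-1.0 (default 0.85). Raising it narrows fuzzy
// matching: fewer false positives, at the risk of missing spelling variants.
const hits = await client.callTool({
  name: "sanctions_screen",
  arguments: {
    name: "Gazprom",
    entity_type: "company",
    threshold: 0.9,
  },
});
console.log(hits.content); // match-list shape is undocumented
```
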
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations, so the description must cover behavior. It lists the watchlists but does not mention side effects, rate limits, or output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: one clear action sentence plus a legal reference, no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and three parameters, the description lacks details on what the tool returns (e.g., match list or score), leaving some gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for all three parameters; the tool description adds no additional parameter context beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool screens names against multiple specific watchlists (EU, OFAC, UN, Interpol) with list sizes, differentiating it from siblings like pep_check or adverse_media.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Describes what the tool does but provides no explicit guidance on when to use it vs. alternatives such as sanctions_detail or watchlist_update.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

transaction_risk (Grade C)

AML transaction risk scoring: amount, corridor, sender screening. AMLR Art. 35+82.

Parameters (JSON Schema):
- amount (optional): Transaction amount
- purpose (optional): Transaction purpose/description
- currency (optional): Currency code (default: EUR)
- sender_name (optional): Sender name (screened against watchlists)
- origin_country (optional): Sender country
- destination_country (optional): Receiver country
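
To tie the description's "corridor" to the actual parameters, a sketch reusing the client from the adverse_media example; all values are illustrative.

```typescript
// The "corridor" in the description is expressed as the origin/destination
// country pair; sender_name is additionally screened against watchlists.
const risk = await client.callTool({
  name: "transaction_risk",
  arguments: {
    amount: 12500,
    currency: "EUR",            // schema default
    purpose: "consulting fee",
    sender_name: "Ivan Petrov", // screened against watchlists per the schema
    origin_country: "RU",
    destination_country: "DE",
  },
});
console.log(risk.content);
```
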
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden but only mentions regulatory references (AMLR Art. 35+82) without describing side effects, determinism, or the output format. The agent is left uninformed about whether the tool is read-only or what the result looks like.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (one sentence plus a reference), but for a tool with 6 parameters and many siblings, it sacrifices necessary information for brevity. It is not well-structured and lacks detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (6 parameters, no annotations, no output schema, many siblings), the description is incomplete. It fails to explain the risk scoring logic, output structure, or how it integrates with other AML tools, leaving significant gaps for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 6 parameters are described in the input schema (100% coverage), so the description adds limited value. It mentions 'sender screening' which corresponds to sender_name, and 'corridor' which is implicit from origin/destination countries, but does not provide new semantic depth beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as performing AML transaction risk scoring based on amount, corridor, and sender screening, which distinguishes it from sibling tools like sanctions_screen or country_risk. However, the term 'corridor' is not explicitly linked to origin/destination country parameters, slightly reducing precision.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as sanctions_screen or pep_check. Given the number of sibling tools, this omission makes it harder for an agent to select the correct tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ubo_lookup (Grade B)

Ultimate Beneficial Owner (UBO) identification via GLEIF LEI ownership register. AMLR Art. 42.

Parameters (JSON Schema):
- lei (optional): LEI code (alternative to company name)
- company (optional): Company name e.g. 'Volkswagen AG'
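
Since the two parameters are alternatives, a sketch of both entry points, reusing the client from the adverse_media example; the LEI value is an illustrative 20-character string, not a real identifier, and behavior when both parameters are supplied is undocumented.

```typescript
// Look up beneficial owners by company name...
const byName = await client.callTool({
  name: "ubo_lookup",
  arguments: { company: "Volkswagen AG" },
});

// ...or by LEI code (illustrative value only, not a real LEI).
const byLei = await client.callTool({
  name: "ubo_lookup",
  arguments: { lei: "529900EXAMPLE0000042" },
});
```
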
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must convey behavioral traits. It mentions the data source (GLEIF) but fails to state that it is a read-only lookup, or any prerequisites, limitations, or error behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence without redundant information. It efficiently conveys the core purpose and a key reference.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks details about the output format, which is not provided via an output schema. Also does not mention that both parameters are optional (implied but not stated). Incomplete for a lookup tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. The description adds the example 'Volkswagen AG' and the term 'alternative', but does not significantly enhance understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool identifies Ultimate Beneficial Owners using the GLEIF LEI ownership register, with a regulatory reference. The name 'ubo_lookup' aligns, and it is distinct from sibling tools like pep_check or sanctions_screen.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like pep_check or sanctions_screen. No exclusion criteria or context for use are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

watchlist_update (Grade C)

Check watchlist freshness and reload from OpenSanctions. Use force_reload=true to refresh cache.

Parameters (JSON Schema):
- force_reload (optional): Force reload all watchlists from OpenSanctions (default: false)
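
Because this is the one tool in the set with an apparent side effect, a sketch of both modes, reusing the client from the adverse_media example; the freshness-only reading of the default mode is inferred from the description, not confirmed.

```typescript
// Default mode: per the description, only checks cache freshness (no reload).
const status = await client.callTool({
  name: "watchlist_update",
  arguments: {},
});

// force_reload=true re-fetches all watchlists from OpenSanctions; treat it
// as a side-effecting, potentially slow call and use it sparingly.
const reloaded = await client.callTool({
  name: "watchlist_update",
  arguments: { force_reload: true },
});
```
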
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only states 'reload' and 'refresh cache' without disclosing side effects, permissions, or what happens to existing data. Minimal behavioral context beyond the action itself.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long with no extra words, and the purpose is front-loaded in the first sentence. It may be too brief to be fully actionable, but it is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description does not explain what the tool returns, the concept of 'freshness', or how it fits with sibling tools. It leaves the agent with significant uncertainty about its usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one boolean parameter. The description adds only a brief mention of force_reload, echoing the schema. No additional meaning or usage details beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks watchlist freshness and reloads from OpenSanctions, providing a specific verb and resource. However, it doesn't fully distinguish from sibling tools like 'sanctions_screen' or 'sanctions_detail', though the update action is unique.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions using force_reload=true to refresh cache but provides no guidance on when to use this tool versus alternatives or when not to use it. No exclusions or context for selecting this tool over siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
