Glama

Server Details

Sanctions screening (EU + OFAC) for KYC/AML — fuzzy person/entity match, IČO check

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: martinhavel/cz-agents-mcp
GitHub Stars: 0
Server Listing: cz-agents-mcp

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.9/5 across 5 of 5 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clear, distinct purpose: check_ico is for Czech IČO lookup, search_entity and search_person are for name-based searches with different targets, get_listing retrieves a specific record, and list_recent_updates is for monitoring changes. No significant overlap.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in snake_case (check_ico, get_listing, list_recent_updates, search_entity, search_person), making the set predictable and easy to navigate.

Tool Count: 5/5

With 5 tools covering the core operations of sanctions checking (search, retrieve, monitor), the count is well-scoped and each tool serves a necessary, non-redundant function.

Completeness: 5/5

The tool set provides complete coverage for common sanctions workflows: direct ID lookup (check_ico, get_listing), name-based search (search_entity, search_person), and monitoring updates (list_recent_updates). No obvious gaps for typical use cases.

Available Tools

5 tools
check_ico (Grade: A)
Read-only

Check whether a Czech IČO (or any company by IČO) appears on sanctions lists. Direct exact-ID lookup; pass name to also fuzzy-match if no direct hit.

Parameters (JSON Schema):
- ico (required): Czech IČO (7-8 digits) or comparable national company ID.
- name (optional): Company name for fuzzy fallback if the IČO is not directly listed.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries the full burden. It explains the two lookup modes (exact and fuzzy) but does not describe the response format (e.g., boolean, list), error handling, or what constitutes 'appears on sanctions lists'. This is a transparency gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two crisp sentences, front-loaded with core purpose, second sentence adding optional behavior. No waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with no output schema or annotations, the description covers the basic operation but lacks details on return values and edge cases, making it somewhat incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with clear params. The tool description essentially restates the parameter descriptions without adding new meaning. Baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool checks presence on sanctions lists by IČO, with optional fuzzy-match by name. This is distinct from sibling tools (get_listing, list_recent_updates, search_entity, search_person) which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description explains when to use the name parameter (fuzzy fallback if direct lookup fails), but does not explicitly compare to alternatives like search_entity or state when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_listing (Grade: A)
Read-only

Retrieve the full record for a single sanctions listing by its ID (format: ${source}:${source_list_id}, e.g. "ofac:12345" or "eu:EU.123.789").

Parameters (JSON Schema):
- id (required): Internal listing ID, e.g. "ofac:12345".
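The composite ID format (`${source}:${source_list_id}`) can be validated client-side before calling the tool. A minimal sketch, with a hypothetical helper name:

```python
def parse_listing_id(listing_id: str) -> tuple[str, str]:
    """Split a listing ID of the form <source>:<source_list_id>."""
    # partition on the first colon only, so IDs like "eu:EU.123.789" survive intact
    source, sep, source_list_id = listing_id.partition(":")
    if not (sep and source and source_list_id):
        raise ValueError(f"expected '<source>:<source_list_id>', got {listing_id!r}")
    return source, source_list_id
```

Splitting on the first colon keeps source-list IDs that themselves contain punctuation (e.g. "EU.123.789") whole.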
Behavior: 3/5

With no annotations, the description's 'retrieve' implies read-only behavior. No mention of side effects, rate limits, or access requirements. Adequate for a simple lookup.

Conciseness: 5/5

A single sentence containing purpose, format, and examples. No extraneous information.

Completeness: 4/5

The tool has only one parameter and no output schema. The description adequately explains the ID format with examples. It could clarify what 'full record' includes, but is sufficient overall.

Parameters: 4/5

The schema covers the id parameter 100%. The description adds the ID format and concrete examples (e.g., 'ofac:12345') beyond the schema's generic description, providing practical guidance.

Purpose: 5/5

Clearly states the verb 'Retrieve' and the resource 'full record for a single sanctions listing by its ID'. Provides the ID format and examples. Distinguishes itself from the sibling search tools.

Usage Guidelines: 3/5

Implies use when the exact ID is known, but does not explicitly state when to use it versus alternatives like search_entity. No guidance on when not to use.

list_recent_updates (Grade: A)
Read-only

List sanctions added/removed/modified since a given date. Use for daily monitoring against a watchlist.

Parameters (JSON Schema):
- since (required): ISO date or datetime, e.g. "2026-04-01" or "2026-04-01T00:00:00Z".
- source (optional): Source filter.
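A quick client-side check of the `since` argument can catch malformed dates before a call. This sketch assumes nothing about the server beyond the documented ISO format; the helper name is hypothetical:

```python
from datetime import datetime

def validate_since(since: str) -> str:
    """Reject a malformed `since` value before calling list_recent_updates."""
    # datetime.fromisoformat accepts both "2026-04-01" and "2026-04-01T00:00:00";
    # the trailing "Z" must be rewritten to "+00:00" on Python < 3.11.
    datetime.fromisoformat(since.replace("Z", "+00:00"))  # raises ValueError if invalid
    return since
```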
Behavior: 3/5

The description indicates a read-only nature (a list of changes) but does not address pagination, rate limits, or authentication requirements. No annotations are provided to compensate.

Conciseness: 5/5

Two sentences, no unnecessary words, front-loaded with the main action.

Completeness: 4/5

Adequate for a simple list tool with two well-documented parameters; it could optionally describe the return format or pagination, but that is not essential.

Parameters: 3/5

Schema coverage is 100%, so the baseline is 3. The description does not add meaningful information beyond what the schema already provides for the two parameters.

Purpose: 5/5

The description uses a specific verb 'List' and resource 'sanctions added/removed/modified', and clearly distinguishes itself from sibling tools like search_entity or check_ico.

Usage Guidelines: 4/5

Explicitly states 'Use for daily monitoring against a watchlist', which gives clear context, though it does not mention when not to use the tool or what alternatives exist.

search_entity (Grade: B)
Read-only

Fuzzy-search a sanctioned entity (company, organization) by name. Optional country narrows results.

Parameters (JSON Schema):
- name (required): Company / organization name.
- limit (optional): Max results (default 20).
- country (optional): Country filter (name or ISO code).
- threshold (optional): Min confidence (default 80).
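The threshold and limit parameters can be mirrored client-side, which is useful for re-filtering cached results without another call. This sketch assumes each match carries a numeric `confidence` field; that field name, like the helper name, is an assumption and not documented above:

```python
def filter_matches(matches: list[dict], threshold: int = 80, limit: int = 20) -> list[dict]:
    """Mirror search_entity's presumed semantics: drop matches below
    `threshold`, sort by confidence descending, keep at most `limit`."""
    kept = [m for m in matches if m["confidence"] >= threshold]
    kept.sort(key=lambda m: m["confidence"], reverse=True)
    return kept[:limit]
```

The defaults (80 and 20) match the schema's documented defaults.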
Behavior: 2/5

With no annotations, the description must disclose behavioral traits. It mentions 'fuzzy-search' but does not explain what that entails (e.g., matching algorithm, ranking). It also omits the result structure, pagination, and how threshold affects output. This is insufficient for a search tool.

Conciseness: 5/5

The description is extremely concise: a single sentence that front-loads the core purpose. Every word is necessary; no fluff.

Completeness: 2/5

Given that the tool has 4 parameters and no output schema, the description should explain the return format, confidence interpretation, and result behavior. It fails to do so, leaving agents with gaps.

Parameters: 3/5

Schema coverage is 100%, so the description's additional value is limited. It adds context that the search is fuzzy and that country narrows results, which is helpful but not essential. The baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states the verb 'fuzzy-search' and the resource 'sanctioned entity (company, organization)'. It also mentions optional filtering by country, which distinguishes it from sibling tools like search_person that target individuals.

Usage Guidelines: 3/5

The description implies usage for searching companies/organizations, but does not explicitly state when to use this tool versus alternatives like search_person or check_ico. No exclusions or when-not-to-use guidance is provided.

search_person (Grade: A)
Read-only

Fuzzy-search a sanctioned person by name across all loaded lists. Optional date of birth and nationality narrow results. Returns matches with confidence scores (0-100). 100 = exact ID match, 80+ = strong fuzzy match, lower = review needed.

Parameters (JSON Schema):
- dob (optional): YYYY or YYYY-MM-DD; narrows matches.
- name (required): Full name. Cyrillic / Arabic / Chinese tolerated; transliteration applied.
- limit (optional): Max results (default 20).
- threshold (optional): Min confidence to include in results (default 80).
- nationality (optional): Country name or ISO code.
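The confidence bands stated in the description translate directly into code. A minimal sketch of that interpretation, with hypothetical helper and bucket names:

```python
def classify_confidence(score: int) -> str:
    """Bucket a search_person confidence score per the tool description:
    100 = exact ID match, 80+ = strong fuzzy match, lower = review needed."""
    if score == 100:
        return "exact_id_match"
    if score >= 80:
        return "strong_fuzzy_match"
    return "review_needed"
```

In a KYC/AML flow, anything below the strong-match band would typically be routed to a human reviewer rather than auto-cleared.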
Behavior: 4/5

With no annotations, the description bears the full burden. It discloses the fuzzy-search behavior, the confidence score range (0-100, with thresholds at 80+ and 100), and the scope (all loaded lists). It omits traits like idempotency, authentication, and rate limits, but these are less critical for a search tool.

Conciseness: 5/5

The description consists of three concise sentences without any fluff. The main purpose is front-loaded, and each sentence adds unique information (search scope, optional filters, output explanation).

Completeness: 4/5

Given the tool's complexity (5 parameters, no output schema), the description covers the core aspects: search behavior, optional filters, and output confidence scores. It does not explicitly mention the limit/threshold parameters, but the schema covers them. Minor gaps like pagination are acceptable for a search tool.

Parameters: 4/5

Since schema coverage is 100%, the baseline is 3. The description adds value by explaining how date of birth and nationality narrow results, and by defining the confidence score semantics. It does not repeat every parameter detail but provides useful context beyond the schema.

Purpose: 5/5

The description clearly states the tool's purpose: fuzzy-search a sanctioned person by name across all loaded lists. It specifies the verb (search), resource (sanctioned person), and scope (all lists), distinguishing it from sibling tools like search_entity, which targets companies and organizations.

Usage Guidelines: 3/5

The description implies usage for searching sanctioned persons but does not explicitly state when to use this tool versus alternatives like search_entity or check_ico. It lacks guidance on when not to use it and does not mention preconditions.
