sanctions
Server Details
Sanctions screening (EU + OFAC) for KYC/AML — fuzzy person/entity match, IČO check
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: martinhavel/cz-agents-mcp
- GitHub Stars: 0
- Server Listing: cz-agents-mcp
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 5 of 5 tools scored.
Each tool has a clear, distinct purpose: check_ico is for Czech IČO lookup, search_entity and search_person are for name-based searches with different targets, get_listing retrieves a specific record, and list_recent_updates is for monitoring changes. No significant overlap.
All tool names follow a consistent verb_noun pattern in snake_case (check_ico, get_listing, list_recent_updates, search_entity, search_person), making the set predictable and easy to navigate.
With 5 tools covering the core operations of sanctions checking (search, retrieve, monitor), the count is well-scoped and each tool serves a necessary, non-redundant function.
The tool set provides complete coverage for common sanctions workflows: direct ID lookup (check_ico, get_listing), name-based search (search_entity, search_person), and monitoring updates (list_recent_updates). No obvious gaps for typical use cases.
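The workflow described above can be sketched as plain control flow. This is a hypothetical orchestration, not the server's documented API: the tool calls are injected as callables so the logic can be shown without a live MCP connection, and the response shapes (a truthy match from check_ico, a list of matches from search_entity) are assumptions.

```python
def screen_company(ico, name, check_ico, search_entity):
    """One screening pass: exact IČO lookup first, fuzzy name search as fallback.

    `check_ico` and `search_entity` are injected callables standing in for
    the server's tools of the same names; their return shapes are assumed.
    """
    hit = check_ico(ico=ico, name=name)  # direct exact-ID lookup
    if hit:
        return {"listed": True, "via": "check_ico", "match": hit}
    matches = search_entity(name=name)   # name-based fuzzy fallback
    if matches:
        return {"listed": True, "via": "search_entity", "match": matches[0]}
    return {"listed": False, "via": None, "match": None}
```

With stubs in place of real tool calls, the fallback ordering is easy to verify before wiring in an actual client.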
Available Tools
5 tools

check_ico (Grade A, Read-only)
Check whether a Czech IČO (or any company by IČO) appears on sanctions lists. Direct exact-ID lookup; pass name to also fuzzy-match if no direct hit.
| Name | Required | Description | Default |
|---|---|---|---|
| ico | Yes | Czech IČO (7-8 digits) or comparable national company ID. | |
| name | No | Optional company name for fuzzy fallback if IČO not directly listed. | |
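The schema only says "7-8 digits", but a Czech IČO carries a weighted mod-11 check digit, so a client can reject obvious typos before calling the tool. This is a client-side pre-check sketch under one common statement of that rule; whether the server performs the same validation is unknown.

```python
def is_valid_ico(ico: str) -> bool:
    """Sanity-check a Czech IČO via the standard weighted mod-11 check digit.

    Inputs shorter than 8 digits are zero-padded, since registries often
    print IČO without leading zeros.
    """
    ico = ico.zfill(8)
    if len(ico) != 8 or not ico.isdigit():
        return False
    # Weights 8..2 over the first seven digits; the check digit is
    # (11 - weighted_sum mod 11) mod 10.
    weighted = sum(int(d) * w for d, w in zip(ico[:7], range(8, 1, -1)))
    return int(ico[7]) == (11 - weighted % 11) % 10
```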
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It explains the two lookup modes (exact and fuzzy) but does not describe the response format (e.g., boolean, list), error handling, or what constitutes 'appears on sanctions lists'. This is a gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two crisp sentences, front-loaded with core purpose, second sentence adding optional behavior. No waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with no output schema or annotations, the description covers the basic operation but lacks detail on return values and edge cases, making it somewhat incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with clear params. The tool description essentially restates the parameter descriptions without adding new meaning. Baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool checks presence on sanctions lists by IČO, with optional fuzzy-match by name. This is distinct from sibling tools (get_listing, list_recent_updates, search_entity, search_person) which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explains when to use the name parameter (fuzzy fallback if direct lookup fails), but does not explicitly compare to alternatives like search_entity or state when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_listing (Grade A, Read-only)
Retrieve the full record for a single sanctions listing by its ID (format: ${source}:${source_list_id}, e.g. "ofac:12345" or "eu:EU.123.789").
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Internal listing ID, e.g. "ofac:12345". | |
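The description pins down the ID format as ${source}:${source_list_id}, which suggests small compose/split helpers on the client side. A sketch, assuming only that format; the helper names are hypothetical:

```python
def make_listing_id(source: str, source_list_id: str) -> str:
    """Compose a get_listing ID in the documented ${source}:${source_list_id} form."""
    return f"{source}:{source_list_id}"

def split_listing_id(listing_id: str):
    """Split an ID like "eu:EU.123.789" back into (source, source_list_id).

    Splits on the first colon only, since the list-side ID may itself
    contain dots or other punctuation.
    """
    source, _, source_list_id = listing_id.partition(":")
    if not source or not source_list_id:
        raise ValueError(f"malformed listing ID: {listing_id!r}")
    return source, source_list_id
```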
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description states 'retrieve' implying read-only behavior. No mention of side effects, rate limits, or access requirements. Adequate for a simple lookup.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence containing purpose, format, and examples. No extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has only one parameter and no output schema. Description adequately explains ID format and examples. Could clarify what 'full record' includes, but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers id parameter 100%. Description adds ID format and concrete examples (e.g., 'ofac:12345') beyond schema's generic description, providing practical guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'Retrieve' and resource 'full record for a single sanctions listing by its ID'. Provides ID format and examples. Distinguishes from sibling search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use when the exact ID is known, but does not explicitly state when to use it versus alternatives like search_entity. No guidance on when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_recent_updates (Grade A, Read-only)
List sanctions added/removed/modified since a given date. Use for daily monitoring against a watchlist.
| Name | Required | Description | Default |
|---|---|---|---|
| since | Yes | ISO date or datetime, e.g. "2026-04-01" or "2026-04-01T00:00:00Z". | |
| source | No | Optional source filter. | |
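For the daily-monitoring use the description names, the `since` value would typically be a rolling cutoff. A minimal sketch that produces the ISO datetime form the schema shows ("2026-04-01T00:00:00Z"), using only the standard library; the helper name is an assumption:

```python
from datetime import datetime, timedelta, timezone

def since_for_daily_run(now=None) -> str:
    """Build a `since` value covering the last 24 hours, in the ISO
    datetime form list_recent_updates documents."""
    now = now or datetime.now(timezone.utc)
    cutoff = (now - timedelta(days=1)).replace(microsecond=0)
    return cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")
```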
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description indicates read-only nature (list of changes) but does not address potential behaviors like pagination, rate limits, or authentication requirements. No annotations provided to compensate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no unnecessary words, front-loaded with the main action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple list tool with two well-documented parameters; could optionally describe return format or pagination but not essential.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add meaningful information beyond what the schema already provides for the two parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'List' and resource 'sanctions added/removed/modified' and clearly distinguishes from sibling tools like search_entity or check_ico.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use for daily monitoring against a watchlist' which gives clear context, though it does not mention when not to use or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_entity (Grade B, Read-only)
Fuzzy-search a sanctioned entity (company, organization) by name. Optional country narrows results.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Company / organization name. | |
| limit | No | Max results (default 20). | |
| country | No | Country filter (name or ISO code). | |
| threshold | No | Min confidence (default 80). | |
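The threshold and limit parameters imply server-side filtering, but a client may also want to re-apply the same policy to cached or merged results. A sketch mirroring the documented defaults (threshold 80, limit 20); the `confidence` key in each match dict is an assumed response shape, not documented output:

```python
def filter_matches(matches, threshold=80, limit=20):
    """Client-side mirror of search_entity's threshold/limit semantics:
    keep matches at or above the confidence floor, best first, capped
    at `limit`. Each match is assumed to be a dict with a `confidence` key."""
    kept = [m for m in matches if m.get("confidence", 0) >= threshold]
    kept.sort(key=lambda m: m["confidence"], reverse=True)
    return kept[:limit]
```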
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It mentions 'fuzzy-search' but does not explain what that entails (e.g., matching algorithm, ranking). It also omits details on result structure, pagination, or how threshold affects output. This is insufficient for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—a single sentence that front-loads the core purpose. Every word is necessary; no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters and no output schema, the description should explain return format, confidence interpretation, and result behavior. It fails to do so, leaving agents with gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description's additional value is limited. It adds context that search is fuzzy and country narrows results, which is helpful but not essential. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'fuzzy-search' and the resource 'sanctioned entity (company, organization)'. It also mentions optional filtering by country, which distinguishes it from sibling tools like search_person that likely search for individuals.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching companies/organizations, but does not explicitly state when to use this tool versus alternatives like search_person or check_ico. No exclusions or when-not-to-use guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_person (Grade A, Read-only)
Fuzzy-search a sanctioned person by name across all loaded lists. Optional date of birth and nationality narrow results. Returns matches with confidence scores (0-100). 100 = exact ID match, 80+ = strong fuzzy match, lower = review needed.
| Name | Required | Description | Default |
|---|---|---|---|
| dob | No | YYYY or YYYY-MM-DD. Optional, narrows matches. | |
| name | Yes | Full name. Cyrillic / Arabic / Chinese tolerated; transliteration applied. | |
| limit | No | Max results (default 20). | |
| threshold | No | Min confidence to include in results (default 80). | |
| nationality | No | Country name or ISO code. Optional. | |
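The description defines explicit confidence bands (100 = exact ID match, 80+ = strong fuzzy match, lower = review needed), which maps directly onto a small triage helper; the function name and band labels are illustrative choices, not server output:

```python
def triage_confidence(score: int) -> str:
    """Map a search_person confidence score (0-100) onto the bands the
    tool description defines: 100 = exact ID match, 80+ = strong fuzzy
    match, anything lower needs human review."""
    if score == 100:
        return "exact"
    if score >= 80:
        return "strong"
    return "review"
```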
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully bears the burden. It discloses the fuzzy-search behavior, the confidence score semantics (0-100, with 100 = exact and 80+ = strong), and the scope (all loaded lists). It omits traits like idempotency, authentication, and rate limits, though these matter less for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three concise sentences without any fluff. The main purpose is front-loaded, and each sentence adds unique information (search scope, optional filters, output explanation).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, no output schema), the description covers the core aspects: search behavior, optional filters, and output confidence scores. It does not explicitly mention the limit/threshold parameters, but the schema covers them. Minor gaps such as pagination are acceptable for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Since schema coverage is 100%, the baseline is 3. The description adds value by explaining how parameters like date of birth and nationality narrow results, and by defining the confidence score semantics. It does not repeat all parameter details but provides useful context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: fuzzy-search a sanctioned person by name across all loaded lists. It specifies the verb (search), resource (sanctioned person), and scope (all lists), distinguishing it from sibling tools like search_entity which likely search broader entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching sanctioned persons but does not explicitly state when to use this tool versus alternatives like search_entity or check_ico. It lacks guidance on when not to use it or mention of specific preconditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.