ares

Server Details

Czech Business Register (ARES): company lookup by IČO, full-text, transparent accounts

Status: Healthy
Transport: Streamable HTTP
Repository: martinhavel/cz-agents-mcp
GitHub Stars: 0
Server Listing: cz-agents-mcp

Tool Descriptions (Grade A)

Average 3.9/5 across 7 of 7 tools scored. Lowest: 3.2/5.

Server Coherence (Grade A)
Disambiguation: 5/5

Each tool has a distinct and clear purpose with no overlap. For example, check_vat_payer verifies VAT registration, get_bank_accounts retrieves bank details, get_history provides historical records, get_statutaries lists statutory bodies, lookup_by_ico fetches company records by ID, search_companies performs name-based searches, and validate_dic validates VAT IDs. The descriptions explicitly differentiate their functions, eliminating any ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case, such as check_vat_payer, get_bank_accounts, lookup_by_ico, and validate_dic. This uniformity makes the tool set predictable and easy to understand, with no deviations in naming conventions across the seven tools.

Tool Count: 5/5

With 7 tools, the count is well-scoped for the server's purpose of Czech company data and compliance checks. Each tool serves a specific function, such as retrieval, validation, or search, without redundancy. This number is typical for a focused domain like business registry access, ensuring comprehensive coverage without being overwhelming.

Completeness: 5/5

The tool set provides complete coverage for accessing and verifying Czech company data, including key operations like lookup by ID, search by name, VAT validation, bank account retrieval, historical records, and statutory body information. There are no obvious gaps; it supports core workflows such as due diligence, compliance, and invoice verification effectively.

Available Tools

7 tools
check_vat_payer (Grade A)

Check whether a Czech company is a registered VAT payer (plátce DPH). If yes, returns DIČ, financial office, and any transparent bank accounts (payment details).

Parameters (JSON Schema)
ico (required): Czech IČO (7-8 digits).
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the tool's behavior by stating it checks VAT payer status and returns specific data if yes, but it does not mention error handling, rate limits, authentication needs, or what happens if the company is not a VAT payer. This leaves gaps in behavioral context for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by return details, with no wasted words. It is appropriately sized for a single-parameter tool, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and a simple input schema, the description is moderately complete. It covers the purpose and return data but lacks details on behavioral aspects like errors or edge cases. For a tool with this complexity, it is adequate but has clear gaps in providing full context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'ico' parameter documented as 'Czech IČO (7-8 digits).' The description does not add any meaning beyond this, such as format examples or validation rules. Since schema coverage is high, the baseline score of 3 is appropriate, as the description does not compensate but also does not detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check whether a Czech company is a registered VAT payer') and the resource ('Czech company'), distinguishing it from siblings like 'lookup_by_ico' or 'validate_dic' by focusing on VAT payer status and associated details. It specifies the return data (DIČ, financial office, transparent bank accounts), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for verifying VAT payer status, but it does not explicitly state when to use this tool versus alternatives like 'validate_dic' (which might check DIČ validity) or 'lookup_by_ico' (which could provide general company info). No exclusions or prerequisites are mentioned, leaving some ambiguity in context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_bank_accounts (Grade A)

Get transparent bank accounts published for this company (only available for VAT-registered subjects). Useful to verify payment details on an invoice match the company.

Parameters (JSON Schema)
ico (required): Czech IČO (7-8 digits).
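The invoice-verification use case named in the description could be wired up like this. Note that `get_bank_accounts` here is a hypothetical callable standing in for the MCP tool call and is assumed to return a list of account-number strings; the normalization is illustrative only:

```python
def invoice_account_matches(get_bank_accounts, ico: str, invoice_account: str) -> bool:
    """Check whether the account printed on an invoice appears among the
    company's published transparent accounts (hypothetical helper)."""
    def norm(acc: str) -> str:
        # Ignore spacing/hyphen differences in how account numbers are written
        return acc.replace(" ", "").replace("-", "")
    accounts = get_bank_accounts(ico=ico)
    return norm(invoice_account) in {norm(a) for a in accounts}
```

Because the tool only returns data for VAT-registered subjects, a real client would pair this with check_vat_payer first.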
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the VAT-registered requirement constraint and the verification use case, but doesn't mention potential limitations like rate limits, authentication needs, error conditions, or what format the bank account data returns. Adequate but with clear behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states purpose and constraint, the second provides practical usage context. Every word earns its place, and the description is appropriately sized for a single-parameter tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple lookup tool with one parameter and no output schema, the description provides good contextual completeness - purpose, constraint, and use case are clear. However, without annotations or output schema, it could benefit from mentioning what the return data looks like or any common failure modes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the single 'ico' parameter. The description doesn't add any parameter-specific information beyond what the schema provides, maintaining the baseline score when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get transparent bank accounts'), resource ('for this company'), and scope ('only available for VAT-registered subjects'). It distinguishes this tool from siblings by focusing on bank account retrieval rather than VAT status checking, history retrieval, or company searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('Useful to verify payment details on an invoice match the company') and includes a critical exclusion ('only available for VAT-registered subjects'), providing clear context for appropriate usage versus alternatives like check_vat_payer or lookup_by_ico.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_history (Grade B)

Get historical record of a company (previous names, registered address changes, trade license history). Useful for due diligence.

Parameters (JSON Schema)
ico (required): Czech IČO (7-8 digits).
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool retrieves historical data, implying it's a read-only operation, but it doesn't specify any behavioral traits such as authentication requirements, rate limits, error handling, or what the output format looks like (e.g., structured data or raw text). This leaves significant gaps for an agent to understand how to invoke it effectively.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, consisting of two sentences: one stating the purpose and one providing usage context. There's no wasted text, and it efficiently conveys key information without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is moderately complete. It covers the purpose and hints at usage but lacks details on behavioral aspects and output. For a simple read tool, this is adequate but leaves room for improvement in guiding the agent fully.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'ico' parameter clearly documented as 'Czech IČO (7-8 digits).' The description doesn't add any additional meaning beyond this, such as examples or constraints not in the schema. With high schema coverage, the baseline score of 3 is appropriate, as the schema handles the parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('historical record of a company'), and it elaborates on what the historical record includes (previous names, registered address changes, trade license history). However, it doesn't explicitly distinguish this tool from its siblings (e.g., 'lookup_by_ico' or 'search_companies'), which could provide overlapping or related information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance by stating it's 'Useful for due diligence,' which suggests a context for when to use this tool. However, it doesn't offer explicit alternatives (e.g., when to use this vs. 'lookup_by_ico' or 'search_companies') or clear exclusions, leaving some ambiguity in tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_statutaries (Grade A)

Get current statutory body (jednatelé, představenstvo, etc.) of a Czech company — who can legally act on its behalf. Returns active members only (with valid zápis, not yet removed). Essential for due diligence and compliance checks.

Parameters (JSON Schema)
ico (required): Czech IČO (7-8 digits).
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it returns 'active members only (with valid zápis, not yet removed)', clarifying the scope and filtering logic. It also implies the tool is read-only (no destructive actions mentioned) and specifies the domain (Czech companies), though it doesn't cover aspects like rate limits, error handling, or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, the second adds critical behavioral details (active members only), and the third provides usage context. Every sentence earns its place with no redundant or vague language, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (1 parameter, no output schema, no annotations), the description is largely complete. It covers purpose, behavioral scope (active members), and usage context. However, without an output schema, it doesn't describe the return format (e.g., structure of statutory body data), which is a minor gap for a tool focused on data retrieval.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (the 'ico' parameter is documented as 'Czech IČO (7-8 digits)'), so the baseline is 3. The description doesn't add any parameter-specific information beyond what the schema provides, such as examples or validation rules, but it contextually reinforces that the tool operates on Czech companies, which aligns with the IČO parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get current statutory body') and resource ('Czech company'), explaining what statutory bodies are ('jednatelé, představenstvo, etc.') and their legal significance ('who can legally act on its behalf'). It distinguishes from siblings by focusing on active statutory members rather than VAT status, bank accounts, history, or general company lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Essential for due diligence and compliance checks'), implying it's for legal/regulatory verification. However, it doesn't explicitly state when not to use it or name specific alternatives among the sibling tools, such as when needing historical data (get_history) or VAT information (check_vat_payer).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_by_ico (Grade A)

Get a single Czech company record by its IČO (8-digit Business ID). Returns official name, registered address, legal form, VAT ID (DIČ), founding date, and trade license activities. Returns null if IČO is not found in ARES.

Parameters (JSON Schema)
ico (required): Czech IČO — 7 or 8 digits. Examples: "27074358", "61388581". Auto-validated with MOD11 checksum.
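The MOD11 auto-validation mentioned in the schema is the standard Czech IČO check-digit rule: a weighted sum of the first seven digits with weights 8 down to 2, with 7-digit IČOs zero-padded to eight. A minimal sketch of that rule (an illustration, not the server's actual implementation):

```python
def validate_ico(ico: str) -> bool:
    """Check a Czech IČO against the MOD11 check-digit rule."""
    ico = ico.strip().zfill(8)  # 7-digit IČOs are zero-padded to 8
    if not (ico.isdigit() and len(ico) == 8):
        return False
    # Weighted sum of the first 7 digits with weights 8, 7, ..., 2
    total = sum(int(d) * w for d, w in zip(ico[:7], (8, 7, 6, 5, 4, 3, 2)))
    # Check digit: (11 - total mod 11) mod 10, so remainder 0 -> 1 and 1 -> 0
    return (11 - total % 11) % 10 == int(ico[7])

print(validate_ico("27074358"), validate_ico("61388581"))  # True True
```

Both example IČOs from the schema satisfy the rule; a mistyped final digit (e.g. "27074359") fails it.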
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It clearly discloses key behavioral traits: it's a read operation (implied by 'Get'), returns structured company data, handles null responses for invalid IČOs, and specifies the data source (ARES). It doesn't mention rate limits, authentication needs, or error handling, but covers the essential behavior adequately for a lookup tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and front-loaded: the first sentence states the core purpose, the second lists returned fields, and the third covers edge case behavior. Every sentence earns its place with no wasted words, making it easy for an agent to quickly understand the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter lookup tool with no output schema, the description provides excellent completeness: it specifies what data is returned, the null case behavior, and the data source. The only minor gap is not explicitly stating the response format (though implied as structured data). Given the tool's simplicity and lack of annotations, this is nearly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the single 'ico' parameter with format examples and validation details. The description adds no additional parameter information beyond what's in the schema. Baseline 3 is appropriate when the schema does all the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get a single Czech company record'), identifies the resource ('by its IČO'), and distinguishes it from siblings by specifying it returns a single record rather than searching or validating. It explicitly mentions the exact data fields returned (name, address, legal form, etc.), which differentiates it from tools like 'check_vat_payer' or 'get_bank_accounts'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: when you need to retrieve a complete company record by its IČO. It implicitly suggests alternatives by mentioning it returns null if not found, implying 'search_companies' might be better for unknown IČOs. However, it doesn't explicitly state when NOT to use it or name specific sibling alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_companies (Grade A)

Full-text search ARES by company name or other filters. Useful when the user knows the name but not the IČO. Returns up to 100 results with IČO, name, and address.

Parameters (JSON Schema)
psc (optional): Filter by postal code (PSČ).
city (optional): Filter by city (nazev obce).
pocet (optional): Max results to return (1-100, default 10).
query (optional): Partial or full company name (obchodní jméno).
start (optional): Pagination offset (default 0).
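The pocet and start parameters describe plain offset pagination with at most 100 results per call. A client-side loop might drive them as follows; `call_search` is a hypothetical stand-in for however your MCP client invokes the search_companies tool:

```python
def fetch_all(call_search, query: str, page_size: int = 100) -> list:
    """Collect every result for a query by paging with pocet/start.

    `call_search` is a hypothetical callable that forwards keyword
    arguments to the search_companies tool and returns a list of hits.
    """
    start, results = 0, []
    while True:
        page = call_search(query=query, pocet=page_size, start=start)
        results.extend(page)
        if len(page) < page_size:  # a short page means no further results
            break
        start += page_size
    return results
```

Stopping on a short page is the usual convention for offset pagination; whether the underlying ARES endpoint also reports a total count is not stated here, so this sketch does not rely on one.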
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It reveals important behavioral traits: the tool returns up to 100 results with specific fields (IČO, name, address), and implies it's a search/read operation. However, it doesn't disclose rate limits, authentication requirements, error conditions, or whether this is a read-only operation (though search implies read).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly sized with two sentences that each earn their place. The first sentence states the purpose and scope, the second provides critical behavioral information (result limit and return fields). There's zero wasted language and it's front-loaded with the most important information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters with 100% schema coverage but no annotations and no output schema, the description does an adequate job for a search tool. It explains the core use case and return format, but doesn't address authentication, error handling, or provide examples. For a tool with no output schema, mentioning the return fields is helpful but could be more comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description adds minimal value beyond the schema: it mentions 'company name or other filters', which aligns with the query parameter and hints at filtering capabilities, but doesn't provide additional semantic context about parameter interactions or usage patterns.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Full-text search ARES by company name or other filters'), identifies the resource ('companies'), and distinguishes from siblings by mentioning it's useful when the user knows the name but not the IČO (unlike lookup_by_ico which requires IČO). It provides a precise verb+resource combination with clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('when the user knows the name but not the IČO'), which implicitly suggests an alternative (lookup_by_ico for when IČO is known). However, it doesn't explicitly state when NOT to use it or mention other filtering alternatives beyond the basic use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_dic (Grade A)

Validate a Czech DIČ (VAT ID). Format check: "CZ" + 8-10 digits. For 8-digit tail (legal entities) also runs MOD11 checksum against the embedded IČO.

Parameters (JSON Schema)
dic (required): Czech DIČ — e.g., "CZ26168685". Whitespace and case tolerated.
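The validation the description spells out (format check "CZ" + 8-10 digits, whitespace and case tolerance, MOD11 over the embedded IČO for an 8-digit tail) can be sketched as below. This is an illustration of the stated rules, not the server's code:

```python
import re

def ico_mod11_ok(ico8: str) -> bool:
    """Standard Czech IČO check digit: weights 8..2 over the first 7 digits."""
    total = sum(int(d) * w for d, w in zip(ico8[:7], (8, 7, 6, 5, 4, 3, 2)))
    return (11 - total % 11) % 10 == int(ico8[7])

def validate_dic(dic: str) -> bool:
    """Format check 'CZ' + 8-10 digits; tolerate whitespace and lowercase.
    For an 8-digit tail (legal entities), also verify the embedded IČO."""
    dic = re.sub(r"\s+", "", dic).upper()
    m = re.fullmatch(r"CZ(\d{8,10})", dic)
    if not m:
        return False
    tail = m.group(1)
    return ico_mod11_ok(tail) if len(tail) == 8 else True

print(validate_dic("cz 26168685"))  # True
```

The schema's example "CZ26168685" passes both the format and the checksum; 9- and 10-digit tails get the format check only, matching the description.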
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: format checking, length requirements, checksum validation for 8-digit tails, and tolerance for whitespace and case. However, it doesn't mention error handling, response format, or performance characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and format rules, the second adds checksum details. Every sentence adds value without redundancy, making it front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, validation-focused), no annotations, and no output schema, the description does a good job explaining what the tool does and how it behaves. It could be more complete by mentioning the return format or error cases, but it covers the core functionality adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter 'dic' with format examples. The description adds some context about the validation logic but doesn't provide additional parameter semantics beyond what's in the schema. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Validate a Czech DIČ') and the resource ('VAT ID'), distinguishing it from siblings like 'check_vat_payer' by focusing on format and checksum validation rather than status checking. It provides concrete details about the validation process, including format requirements and checksum rules.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the validation focus but doesn't explicitly state when to use this tool versus alternatives like 'check_vat_payer' or 'lookup_by_ico'. There's no guidance on prerequisites or exclusions, leaving the agent to infer context from the tool's purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
