Server Details

ADIS — Czech VAT-payer reliability (nespolehlivý plátce DPH) via MFČR SOAP

Status: Healthy
Transport: Streamable HTTP
Repository: martinhavel/cz-agents-mcp
GitHub Stars: 0
Server Listing: cz-agents-mcp

Tool Descriptions: A

Average 4.4/5 across 3 of 3 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool serves a clearly distinct purpose: single subject check, bulk check for up to 100 subjects, and full list retrieval. No overlap in functionality, and the descriptions highlight the appropriate use cases.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with snake_case: check_bulk_dph_payer, check_dph_payer, list_unreliable_payers. The verbs 'check' and 'list' clearly indicate the action, and the nouns are specific to the domain.

Tool Count: 5/5

Three tools are well-suited for the narrow domain of Czech VAT payer reliability checks. They cover the essential operations without being too few or excessive.

Completeness: 5/5

The tool set covers the full lifecycle of VAT payer reliability queries: individual check with detailed information, bulk screening for efficiency, and retrieval of the entire unreliable list for local mirroring. No obvious gaps given the domain.

Available Tools

3 tools
check_bulk_dph_payer: A
Read-only

Bulk reliability check for up to 100 Czech subjects in one ADIS request. Lighter than the single-subject check — returns reliability status, accounts, and tax office, but no name/address. Useful for screening invoice-issuer lists or supplier portfolios. Returns one entry per input DIČ; entries with reliability NENALEZEN indicate the subject is not in the VAT registry.

Parameters (JSON Schema):
- dics (optional): List of Czech DIČs (e.g. ["CZ27074358", "CZ12345678"]). At least one of icos/dics is required.
- icos (optional): List of Czech IČOs. Will be converted to DIČ ("CZ${ico}").
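As a sketch of how a client might prepare input for this tool, the helpers below convert IČOs to DIČs (following the "CZ${ico}" rule stated in the schema) and split large subject lists into batches of at most 100, one batch per call. The function names are illustrative assumptions; nothing here contacts ADIS.

```python
# Illustrative client-side helpers for check_bulk_dph_payer input.
# The "CZ" prefix rule and the 100-subject limit come from the tool
# description above; everything else is an assumption.

def ico_to_dic(ico: str) -> str:
    """Convert a Czech IČO (7 or 8 digits) to a DIČ by prefixing "CZ"."""
    if not ico.isdigit() or len(ico) not in (7, 8):
        raise ValueError(f"invalid IČO: {ico!r}")
    return f"CZ{ico}"

def batch_dics(dics: list[str], limit: int = 100) -> list[list[str]]:
    """Split a DIČ list into batches of at most `limit` subjects,
    one batch per check_bulk_dph_payer call."""
    return [dics[i:i + limit] for i in range(0, len(dics), limit)]
```

Batching client-side keeps each request within the documented limit while still screening an arbitrarily large supplier portfolio.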
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and openWorldHint=true. The description adds that it returns reliability status, accounts, and tax office, but no name/address, and that NENALEZEN indicates absence from VAT registry. This discloses behavioral traits beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, all essential. The first states purpose and limit, the second compares to the single-subject tool, the third gives a use case, and the fourth explains result interpretation. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only bulk check tool with complete param schema and no output schema, the description explains input expectations (DIČs, IČOs), output fields, and meaning of NENALEZEN. Lacks mention of rate limits or error handling, but is sufficient for typical use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema fully describes both parameters. The description does not add new parameter-specific details beyond what the schema provides, but it reinforces that the tool expects up to 100 subjects.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it's a bulk reliability check for up to 100 Czech subjects, distinguishing it from the single-subject check_dph_payer and list_unreliable_payers. It specifies the return fields (reliability status, accounts, tax office) and purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Lighter than the single-subject check' and recommends it for 'screening invoice-issuer lists or supplier portfolios.' This provides clear context for when to use bulk vs. single, though it does not mention when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_dph_payer: A
Read-only

Check VAT-payer reliability for a single Czech subject. Returns reliability status (ANO/NE/NENALEZEN), subject type (VAT payer / identified person / VAT group / unreliable person / not found), name, address, published bank accounts (§ 96a ZDPH), and the date the subject became unreliable (when applicable). Returns null when the DIČ is not in the VAT registry.

Parameters (JSON Schema):
- dic (optional): Czech DIČ, e.g. "CZ27074358". Provide either ico or dic.
- ico (optional): Czech IČO — 7 or 8 digits. The client converts to DIČ as "CZ${ico}". Provide either ico or dic.
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses return values (status, type, name, address, bank accounts, unreliability date, null case) beyond the readOnlyHint and openWorldHint annotations. No contradictions with annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently cover purpose, return fields, and the null case. No redundant information; front-loaded with the key action.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description fully explains the return values (status, type, name, address, bank accounts, date, null). This is complete for a single-check tool.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds no extra meaning beyond the schema (e.g., conversion from IČO to DIČ is already in schema). Baseline 3 applies.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks VAT-payer reliability for a single Czech subject, listing specific return fields. This distinguishes it from sibling tools like check_bulk_dph_payer (multiple subjects) and list_unreliable_payers (listing unreliable payers).

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for single-subject checks but lacks explicit guidance on when to use this tool versus its alternatives. It mentions the null return for a missing DIČ, which provides some context, but offers no direct comparison to sibling tools or exclusion criteria.

list_unreliable_payers: A
Read-only

Return the full list of currently unreliable Czech VAT payers from ADIS. WARNING: response can be 50–100 MB (tens of thousands of entries). Intended for daily mirroring into a local database, not for ad-hoc inspection. For "is this specific company unreliable?" use check_dph_payer instead.

Parameters (JSON Schema): none
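The daily-mirroring pattern the description recommends can be sketched as follows: fetch the full list once, index it by DIČ locally, and answer ad-hoc lookups from the index instead of re-downloading 50–100 MB per question. `entries` stands in for the parsed response; the field names are assumptions.

```python
# Sketch of mirroring the unreliable-payer list into a local index.
# A real implementation would persist this in a database and refresh
# it daily; a dict is used here as a stand-in.

def build_index(entries: list[dict]) -> dict[str, dict]:
    """Index unreliable-payer entries by DIČ for O(1) local lookups."""
    return {e["dic"]: e for e in entries}

def locally_unreliable(index: dict[str, dict], dic: str) -> bool:
    """True if the DIČ appears in the mirrored unreliable list."""
    return dic in index
```

For a one-off "is this company unreliable?" question, the description's advice still applies: call check_dph_payer instead of pulling the whole list.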

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and openWorldHint, but the description adds crucial context about the large payload and intended mirroring use case.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently structured with purpose first, then warning, then usage guidance; every sentence adds value.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fully describes the tool's purpose, size implications, intended use, and provides an alternative, despite lacking an output schema.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so schema coverage is 100%; the description adds no parameter details but doesn't need to.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool returns the full list of currently unreliable Czech VAT payers from ADIS, distinguishing it from sibling tools like check_dph_payer.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly warns about the large response size (50–100 MB), states it's for daily mirroring rather than ad-hoc use, and suggests an alternative for specific company checks.
