Server Details
ADIS — Czech VAT-payer reliability (nespolehlivý plátce DPH) via MFČR SOAP
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: martinhavel/cz-agents-mcp
- GitHub Stars: 0
- Server Listing: cz-agents-mcp
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 3 of 3 tools scored.
Each tool serves a clearly distinct purpose: single subject check, bulk check for up to 100 subjects, and full list retrieval. No overlap in functionality, and the descriptions highlight the appropriate use cases.
All tool names follow a consistent verb_noun pattern with snake_case: check_bulk_dph_payer, check_dph_payer, list_unreliable_payers. The verbs 'check' and 'list' clearly indicate the action, and the nouns are specific to the domain.
Three tools are well-suited for the narrow domain of Czech VAT payer reliability checks. They cover the essential operations without being too few or excessive.
The tool set covers the full lifecycle of VAT payer reliability queries: individual check with detailed information, bulk screening for efficiency, and retrieval of the entire unreliable list for local mirroring. No obvious gaps given the domain.
Available Tools
3 tools
check_bulk_dph_payer (Read-only)
Bulk reliability check for up to 100 Czech subjects in one ADIS request. Lighter than the single-subject check — returns reliability status, accounts, and tax office, but no name/address. Useful for screening invoice-issuer lists or supplier portfolios. Returns one entry per input DIČ; entries with reliability NENALEZEN indicate the subject is not in the VAT registry.
| Name | Required | Description | Default |
|---|---|---|---|
| dics | No | List of Czech DIČs (e.g. ["CZ27074358", "CZ12345678"]). At least one of icos/dics is required. | |
| icos | No | List of Czech IČOs. Will be converted to DIČ ("CZ${ico}"). | |
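The IČO-to-DIČ conversion and the 100-subject cap are the two mechanical details a caller has to get right before invoking this tool. A minimal TypeScript sketch of that client-side preparation, assuming the "CZ${ico}" rule from the parameter description (the helper names are illustrative, not part of the server):

```typescript
// Convert a Czech IČO (7 or 8 digits) to a DIČ, mirroring the
// "CZ${ico}" conversion described in the icos parameter.
function icoToDic(ico: string): string {
  if (!/^\d{7,8}$/.test(ico)) {
    throw new Error(`Invalid IČO: ${ico}`);
  }
  return `CZ${ico}`;
}

// Split a supplier portfolio into batches of at most 100 DIČs,
// one batch per check_bulk_dph_payer call (the stated request limit).
function batchDics(dics: string[], limit = 100): string[][] {
  const batches: string[][] = [];
  for (let i = 0; i < dics.length; i += limit) {
    batches.push(dics.slice(i, i + limit));
  }
  return batches;
}
```

A portfolio of 250 suppliers would thus be screened in three requests rather than 250 single-subject calls.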
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and openWorldHint=true. The description adds that it returns reliability status, accounts, and tax office, but no name/address, and that NENALEZEN indicates absence from VAT registry. This discloses behavioral traits beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, all essential. First sentence states purpose and limit. Second compares to single-subject. Third gives use case and result interpretation. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only bulk check tool with complete param schema and no output schema, the description explains input expectations (DIČs, IČOs), output fields, and meaning of NENALEZEN. Lacks mention of rate limits or error handling, but is sufficient for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema fully describes both parameters. The description does not add new parameter-specific details beyond what the schema provides, but it reinforces that the tool expects up to 100 subjects.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it's a bulk reliability check for up to 100 Czech subjects, distinguishing it from the single-subject check_dph_payer and list_unreliable_payers. It specifies the return fields (reliability status, accounts, tax office) and purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Lighter than the single-subject check' and recommends it for 'screening invoice-issuer lists or supplier portfolios.' This provides clear context for when to use bulk vs. single, though it does not mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_dph_payer (Read-only)
Check VAT-payer reliability for a single Czech subject. Returns reliability status (ANO/NE/NENALEZEN), subject type (VAT payer / identified person / VAT group / unreliable person / not found), name, address, published bank accounts (§ 96a ZDPH), and the date the subject became unreliable (when applicable). Returns null when the DIČ is not in the VAT registry.
| Name | Required | Description | Default |
|---|---|---|---|
| dic | No | Czech DIČ, e.g. "CZ27074358". Provide either ico or dic. | |
| ico | No | Czech IČO — 7 or 8 digits. The client converts to DIČ as "CZ${ico}". Provide either ico or dic. | |
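Since the tool publishes no output schema, the result fields below are hypothetical, inferred only from the description; the null-for-unregistered rule, however, is stated explicitly. A sketch of how a caller might interpret the result:

```typescript
// Hypothetical result shape — field names are illustrative, inferred
// from the tool description, not from a published output schema.
interface DphPayerResult {
  reliable: "ANO" | "NE" | "NENALEZEN";
  subjectType: string;       // VAT payer / identified person / VAT group / ...
  name: string;
  address: string;
  bankAccounts: string[];    // accounts published under § 96a ZDPH
  unreliableSince?: string;  // set only when the subject is unreliable
}

// Per the description, null means the DIČ is not in the VAT registry;
// treat that the same as an unreliable result for screening purposes.
function isTrustedSupplier(result: DphPayerResult | null): boolean {
  if (result === null) return false; // not registered for VAT
  return result.reliable === "ANO";  // explicitly reliable
}
```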
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses return values (status, type, name, address, bank accounts, unreliability date, null case) beyond the readOnlyHint and openWorldHint annotations. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences efficiently cover purpose, return fields, and the null case. No redundant information; front-loaded with key action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description fully explains the return values (status, type, name, address, bank accounts, date, null). This is complete for a single-check tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions. The description adds no extra meaning beyond the schema (e.g., conversion from IČO to DIČ is already in schema). Baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks VAT-payer reliability for a single Czech subject, listing specific return fields. This distinguishes it from sibling tools like check_bulk_dph_payer (multiple subjects) and list_unreliable_payers (listing unreliable payers).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for single subject checks but lacks explicit guidance on when to use this vs alternatives. It mentions the null return for missing DIČ, which provides some context, but no direct comparison to siblings or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_unreliable_payers (Read-only)
Return the full list of currently unreliable Czech VAT payers from ADIS. WARNING: response can be 50–100 MB (tens of thousands of entries). Intended for daily mirroring into a local database, not for ad-hoc inspection. For "is this specific company unreliable?" use check_dph_payer instead.
No parameters.
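The description's mirroring recommendation amounts to a local lookup table: fetch the full list once per day, index it by DIČ, and answer ad-hoc checks from the mirror instead of re-pulling the 50–100 MB payload. A sketch of that pattern (entry fields are hypothetical, since no output schema is published):

```typescript
// Hypothetical entry shape for one unreliable payer in the full list.
interface UnreliablePayer {
  dic: string;
  unreliableSince: string;
}

// Build an in-memory mirror keyed by DIČ. In practice the entries would
// come from a daily list_unreliable_payers call and be persisted locally.
function buildMirror(entries: UnreliablePayer[]): Map<string, UnreliablePayer> {
  const mirror = new Map<string, UnreliablePayer>();
  for (const entry of entries) mirror.set(entry.dic, entry);
  return mirror;
}

// Local O(1) lookup instead of a new ADIS call per check.
function isUnreliable(mirror: Map<string, UnreliablePayer>, dic: string): boolean {
  return mirror.has(dic);
}
```

For a single specific company, check_dph_payer remains the lighter option, as the description itself recommends.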
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and openWorldHint, but the description adds crucial context about the large payload and intended mirroring use case.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured with purpose first, then warning, then usage guidance; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully describes the tool's purpose, size implications, intended use, and provides an alternative, despite lacking an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%; the description adds no parameter details but doesn't need to.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool returns the full list of currently unreliable Czech VAT payers from ADIS, distinguishing it from sibling tools like check_dph_payer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly warns about the large response size (50–100 MB), states it's for daily mirroring not ad-hoc use, and suggests an alternative for specific company checks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.