VAT Validator MCP
Server Details
Validate EU, UK, AU VAT numbers for AI agents. EU ViDA e-invoicing compliance.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: OjasKord/vat-validator-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 6 of 6 tools scored.
Most tools have distinct purposes with clear boundaries: validate_vat for basic verification, analyse_vat_risk for AI fraud assessment, compare_invoice_details for invoice matching, get_vat_rates for tax rates, and batch_validate for bulk operations. However, validate_uk_vat overlaps significantly with validate_vat (which already handles UK VAT via auto-detection), creating potential confusion about when to use the specialized UK tool versus the general one.
Tool names follow a consistent verb_noun pattern with snake_case throughout (e.g., analyse_vat_risk, batch_validate, compare_invoice_details). The only minor deviation is validate_uk_vat, which includes a country code in the noun part, but this still fits the overall naming convention without breaking consistency.
With 6 tools, the server is well-scoped for VAT validation and fraud detection. Each tool serves a specific function in the workflow, from basic validation to advanced risk analysis and batch processing, making the count appropriate and manageable for the domain without being overwhelming or insufficient.
The tool set provides comprehensive coverage for VAT-related tasks: basic validation (validate_vat, validate_uk_vat), fraud detection (analyse_vat_risk, compare_invoice_details), bulk operations (batch_validate), and tax rate lookup (get_vat_rates). There are no obvious gaps; it supports the full lifecycle from onboarding to payment verification and compliance checks.
Available Tools
6 tools

analyse_vat_risk
Call this tool after validate_vat returns a result, when your agent needs an AI-powered fraud risk assessment before proceeding with a transaction. Uses AI to synthesise registry data with transaction context to detect fraud signals that raw validation misses - this is NOT a simple database lookup. Returns a CLEAR/REVIEW/BLOCK recommendation with specific reasons. Catches: name mismatches between invoice and registry, newly registered companies with large invoice values, dormant status, shell company indicators, address anomalies. Use before approving any payment or signing any contract, especially with first-time counterparties. LEGAL NOTICE: Results are informational only, not fraud investigation advice. Full terms: kordagencies.com/terms.html. Free tier: first 20 calls/month, no API key needed.
| Name | Required | Description | Default |
|---|---|---|---|
| vat_number | Yes | The VAT number that was validated | |
| invoice_amount | No | Optional - invoice or transaction amount in local currency. | |
| validation_result | Yes | The full result object returned by validate_vat or validate_uk_vat | |
| invoice_company_name | No | Optional - company name as it appears on the invoice. | |
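Since the required `validation_result` parameter is the object returned by an earlier `validate_vat` (or `validate_uk_vat`) call, the two tools chain naturally. A minimal sketch of the second request, assuming the generic MCP JSON-RPC 2.0 `tools/call` wire format; the validation-result fields shown are illustrative assumptions, not this server's documented output schema:

```python
import json

# Illustrative result from a prior validate_vat call (field names are
# assumptions, not this server's documented output schema).
validation_result = {
    "valid": True,
    "vat_number": "DE123456789",
    "company_name": "Example GmbH",
}

# Generic MCP JSON-RPC 2.0 tools/call request for analyse_vat_risk,
# passing the earlier result plus optional transaction context.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "analyse_vat_risk",
        "arguments": {
            "vat_number": "DE123456789",
            "validation_result": validation_result,
            "invoice_amount": 25000.00,              # optional
            "invoice_company_name": "Example GmbH",  # optional
        },
    },
}

print(json.dumps(request, indent=2))
```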
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does so well by disclosing key behavioral traits: it describes the AI-powered analysis process, lists specific fraud signals caught (e.g., name mismatches, shell company indicators), mentions rate limits ('Free tier: first 20 calls/month'), and includes legal disclaimers. With no annotations present, there is nothing for it to contradict.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose and usage guidelines. However, it includes some verbose elements like the full legal notice and URL, which slightly reduce efficiency, though most sentences add value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (AI analysis, fraud detection) and lack of annotations and output schema, the description is largely complete: it explains the purpose, usage, behavioral traits, and output format ('CLEAR/REVIEW/BLOCK recommendation with specific reasons'). It could briefly mention error handling or response structure, but it covers essential context well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal parameter semantics beyond the schema, such as implying context for 'invoice_amount' and 'invoice_company_name' in fraud detection, but it does not provide additional syntax or format details. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs an 'AI-powered fraud risk assessment' that 'synthesises registry data with transaction context to detect fraud signals,' distinguishing it from simple validation tools like validate_vat. It specifies the verb (assess risk) and resource (VAT-related transactions) with explicit differentiation from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage instructions: 'Call this tool after validate_vat returns a result' and 'Use before approving any payment or signing any contract, especially with first-time counterparties.' It also implies alternatives by noting this is not a 'simple database lookup,' contrasting with validation siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
batch_validate
Call this tool when your agent needs to verify multiple businesses at once - for supplier onboarding batches, auditing your entire vendor database, running monthly compliance checks, or cleaning a CRM import. Up to 10 VAT numbers per call across any mix of EU, UK, and Australian businesses. Run this monthly on all active vendors - registrations can lapse. LEGAL NOTICE: Results are informational only, not tax advice. Full terms: kordagencies.com/terms.html. Paid API key required.
| Name | Required | Description | Default |
|---|---|---|---|
| vat_numbers | Yes | Array of VAT numbers with country prefixes (max 10) | |
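Because the tool caps each call at 10 numbers, larger vendor lists need to be chunked client-side. A sketch of that, assuming the generic MCP JSON-RPC 2.0 `tools/call` wire format (the VAT numbers are placeholders):

```python
import json

# Placeholder vendor list; real lists may be far longer than 10 entries.
vendor_vat_numbers = ["DE123456789", "GB123456789", "AU51824753556"]

# batch_validate accepts at most 10 VAT numbers per call, so split
# larger lists into chunks of 10 and issue one request per chunk.
chunks = [
    vendor_vat_numbers[i:i + 10]
    for i in range(0, len(vendor_vat_numbers), 10)
]

requests = [
    {
        "jsonrpc": "2.0",
        "id": i + 1,
        "method": "tools/call",
        "params": {
            "name": "batch_validate",
            "arguments": {"vat_numbers": chunk},
        },
    }
    for i, chunk in enumerate(chunks)
]

for req in requests:
    print(json.dumps(req))
```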
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context beyond basic functionality: it specifies the per-call batch limit ('max 10 VAT numbers per call'), legal disclaimers ('Results are informational only, not tax advice'), and prerequisites ('Paid API key required'). However, it lacks details on error handling, response format, or authentication specifics, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core use case. Every sentence adds value: the first outlines purpose and scenarios, the second specifies constraints and frequency, and the third covers legal and API requirements. It could be slightly more concise by merging some clauses, but overall it's efficient with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (batch validation tool with no annotations and no output schema), the description is fairly complete. It covers purpose, usage guidelines, constraints, legal notes, and prerequisites. However, it lacks details on output format, error cases, or specific behavioral traits like idempotency or side effects, which could be important for an agent invoking this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the parameter (vat_numbers as an array of strings with max 10 items). The description adds marginal value by reiterating the max limit and specifying country prefixes (EU, UK, Australian), but doesn't provide additional syntax or format details beyond what the schema implies. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('verify multiple businesses at once') and resources ('businesses'), distinguishing it from siblings by emphasizing batch processing for multiple VAT numbers. It explicitly mentions the scope (EU, UK, Australian businesses) and capacity (up to 10 VAT numbers), which sets it apart from single-validation tools like validate_vat or validate_uk_vat.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool, listing specific scenarios like 'supplier onboarding batches, auditing your entire vendor database, running monthly compliance checks, or cleaning a CRM import.' It also advises 'Run this monthly on all active vendors - registrations can lapse,' giving clear context and frequency. No explicit alternatives are named, but the batch nature implicitly distinguishes it from single-validation siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_invoice_details
Call this tool when your agent has received an invoice and needs to verify the supplier details on the invoice match official government registry records. Uses AI to compare the company name, address, and VAT number claimed on the invoice against validated registry data, flagging any discrepancies that could indicate fraud, impersonation, or error. A mismatch between the name on an invoice and the registered name for that VAT number is one of the most common invoice fraud signals. Use before approving payment on any invoice from a supplier you have not previously verified. LEGAL NOTICE: Results are informational only, not fraud investigation advice. Full terms: kordagencies.com/terms.html. Free tier: first 20 calls/month, no API key needed.
| Name | Required | Description | Default |
|---|---|---|---|
| invoice_address | No | Address as it appears on the invoice (optional) | |
| validation_result | Yes | The full result object returned by validate_vat or validate_uk_vat for this VAT number | |
| invoice_vat_number | Yes | VAT number as it appears on the invoice | |
| invoice_company_name | Yes | Company name as it appears on the invoice | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it uses AI for comparison, flags discrepancies for fraud detection, includes a legal disclaimer about informational-only results, and specifies free tier limits (20 calls/month, no API key). It doesn't mention rate limits beyond the free tier or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose and usage context. The legal notice and terms reference are necessary but slightly lengthen the text. Most sentences earn their place by adding value beyond structured fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (fraud detection tool with AI comparison), no annotations, and no output schema, the description does well by explaining the tool's purpose, usage context, behavioral traits, and limitations. It could benefit from more detail about output format or error cases, but covers the essential context for a tool with this function.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description adds context by explaining what the parameters represent (invoice details to compare against registry data) and that validation_result should come from validate_vat or validate_uk_vat tools, but doesn't provide additional syntax or format details beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to verify supplier details on an invoice against government registry records using AI, specifically comparing company name, address, and VAT number. It distinguishes itself from siblings by focusing on invoice verification rather than validation, risk analysis, or batch operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'when your agent has received an invoice and needs to verify the supplier details' and 'Use before approving payment on any invoice from a supplier you have not previously verified.' It also distinguishes from siblings by specifying it's for invoice verification rather than general VAT validation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vat_rates
Call this tool when your agent needs to calculate the correct tax amount for a B2B or B2C transaction involving an EU, UK, or Australian business. Use before generating any quote, invoice, or pricing calculation for cross-border sales. Returns standard rate and all reduced rates for any of the 27 EU member states, UK, or Australia. LEGAL NOTICE: Rates are indicative only - verify with official tax authority. Free tier: first 20 calls/month, no API key needed.
| Name | Required | Description | Default |
|---|---|---|---|
| country_code | No | ISO 2-letter country code (e.g. DE, FR, GB). Leave blank for all countries. | |
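Because `country_code` is optional, a caller can fetch rates for one country or for all of them by omitting the argument. A sketch of building both request shapes, assuming the generic MCP JSON-RPC 2.0 `tools/call` wire format (the helper function is hypothetical, for illustration only):

```python
import json

def vat_rates_request(country_code=None, req_id=1):
    """Build a tools/call request for get_vat_rates.

    Omitting country_code requests rates for all supported countries.
    (Hypothetical helper, not part of this server's API.)
    """
    arguments = {}
    if country_code is not None:
        arguments["country_code"] = country_code
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": "get_vat_rates", "arguments": arguments},
    }

print(json.dumps(vat_rates_request("DE")))   # rates for Germany only
print(json.dumps(vat_rates_request()))       # rates for all countries
```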
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden and provides substantial behavioral context: it discloses the return format (standard and reduced rates), geographic scope (27 EU states, UK, Australia), legal disclaimer, and usage limits (free tier: 20 calls/month, no API key). It doesn't mention error handling or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured and front-loaded with purpose and usage, though the legal notice and free tier details could be slightly more integrated. Every sentence adds value, but minor redundancy exists between 'calculate tax amount' and 'use before...pricing calculation.'
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does well by explaining return values, scope, and usage constraints. However, it lacks details on error cases, response format structure, or how 'all reduced rates' are presented, leaving some gaps for a tool with legal implications.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (parameter well-documented in schema), so baseline is 3. The description adds no additional parameter semantics beyond what's in the schema, though it implies the country_code parameter's purpose through context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'calculate the correct tax amount' with specific resources (EU, UK, Australian businesses) and transaction contexts (B2B/B2C). It distinguishes from siblings by focusing on rate retrieval rather than validation, analysis, or comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided: 'Call this tool when your agent needs to calculate...' with specific timing ('before generating any quote, invoice, or pricing calculation') and context ('cross-border sales'). It clearly indicates when to use this tool versus when to use alternatives for different needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_uk_vat
Call this tool when your agent is dealing with a UK business and needs to confirm they are genuinely registered with HMRC before onboarding them, paying an invoice, or signing a contract. Returns company name, registered address, and an HMRC consultation number for your audit trail. Also use to verify the company name on an invoice matches the registered name - a mismatch is a fraud red flag. LEGAL NOTICE: Results are informational only, not tax advice. Full terms: kordagencies.com/terms.html. Free tier: first 20 calls/month, no API key needed.
| Name | Required | Description | Default |
|---|---|---|---|
| vat_number | Yes | UK VAT number with or without GB prefix | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it returns specific data (company name, address, HMRC number), includes a legal disclaimer about informational use only, and states rate limits (20 free calls/month) and authentication requirements (no API key needed). It doesn't mention error handling or response formats.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with purpose, usage guidelines, return values, and operational details in logical order. While slightly dense due to legal and rate limit information, every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no annotations and no output schema, the description provides strong contextual completeness: it explains the tool's purpose, when to use it, what it returns, legal constraints, and operational limits. The main gap is lack of output format details, but return values are described semantically.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description doesn't add parameter-specific details beyond what the schema already states about the VAT number format, but it contextualizes why this parameter matters for validation purposes.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: validating UK VAT numbers to confirm HMRC registration. It specifies the exact use cases (onboarding, paying invoices, signing contracts) and distinguishes it from sibling tools by focusing on UK-specific validation rather than risk analysis, batch processing, rate lookup, or generic validation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided on when to use this tool: 'when your agent is dealing with a UK business and needs to confirm they are genuinely registered with HMRC before onboarding them, paying an invoice, or signing a contract.' It also mentions verifying invoice name matches as a fraud check, giving clear context for application.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_vat
Call this tool any time your agent needs to confirm a business is real and legally registered before interacting with them. Use before onboarding a new supplier, customer, or contractor, before signing any B2B contract, before processing or issuing any invoice, before approving a vendor in procurement, or before enriching a CRM record with verified company data. The VAT number is the most reliable identifier for a registered EU, UK, or Australian business. Also use to catch fraud - scammers frequently use fake or stolen VAT numbers. Auto-detects country from prefix: EU VIES for all 27 EU states, HMRC for GB prefix, ABR for AU prefix. LEGAL NOTICE: Results are informational only, not tax advice. Full terms: kordagencies.com/terms.html. Free tier: first 20 calls/month, no API key needed.
| Name | Required | Description | Default |
|---|---|---|---|
| vat_number | Yes | VAT number with country prefix (e.g. DE123456789, GB123456789, FR12345678901) | |
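As a concrete illustration, a single `validate_vat` call is an ordinary MCP `tools/call` request over the server's Streamable HTTP transport. The JSON-RPC envelope below is the generic MCP wire format, not anything specific to this server; the VAT number is a placeholder:

```python
import json

# Generic MCP JSON-RPC 2.0 tools/call request for validate_vat.
# The country is auto-detected from the prefix (here DE -> EU VIES).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "validate_vat",
        "arguments": {"vat_number": "DE123456789"},
    },
}

payload = json.dumps(request)
print(payload)
```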
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behaviors: it mentions auto-detection of country prefixes, legal notice that results are informational only, free tier limits (20 calls/month, no API key), and terms link. However, it doesn't detail error handling or response format, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with key usage scenarios, followed by operational details and legal notices. Some sentences could be more concise (e.g., the list of use cases is verbose), but overall it's well-structured with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (validation with legal implications), no annotations, and no output schema, the description does a good job covering usage, behavioral traits, and limitations. However, it lacks details on response format or error cases, which would enhance completeness for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single parameter. The description adds minimal value beyond the schema by mentioning examples (DE, GB, FR prefixes) and that VAT numbers are for EU, UK, or Australian businesses, but doesn't provide additional syntax or format details. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'confirm a business is real and legally registered' using VAT numbers. It specifies the verb ('confirm') and resource ('business'), and distinguishes from siblings by focusing on validation rather than risk analysis, batch processing, rate lookup, or UK-specific validation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided on when to use: 'before onboarding a new supplier, customer, or contractor, before signing any B2B contract, before processing or issuing any invoice, before approving a vendor in procurement, or before enriching a CRM record.' It also specifies when to use for fraud detection and mentions auto-detection of country prefixes, giving clear context for application.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
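Before publishing, the file can be sanity-checked locally. A minimal sketch, assuming the structure shown above (the email address is a placeholder that must be replaced with your Glama account email):

```python
import json

# Parse a candidate glama.json and check the fields Glama verifies.
doc = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

# The schema URL should point at Glama's connector schema, and every
# maintainer entry needs an email matching a Glama account.
assert doc["$schema"] == "https://glama.ai/mcp/schemas/connector.json"
assert all("email" in m for m in doc["maintainers"])
print("glama.json looks structurally valid")
```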
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.