MCP Europe Tools
Server Details
European data validation tools for AI agents. Validates Portuguese, Spanish, and French fiscal identifiers, IBANs for 18 European countries, and VAT rates for 18 EU countries; returns public holidays for Portugal, Spain, and France; and formats numbers using European locale conventions.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across all 11 tools. Lowest: 2.9/5.
Each tool targets a distinct function: working days calculation, number formatting, country-specific holidays, VAT rates, and various validation tools (IBAN, Portuguese NIF, Spanish fiscal IDs, French SIRET/TVA). There is no overlap, even among the validation tools, as each applies to a different country or system.
Most tools follow a verb_noun pattern in snake_case (e.g., calculate_working_days, get_portugal_holidays, validate_iban). Minor deviations include 'format_number_european' (trailing adjective) and the country-code suffixes (validate_nif_es, validate_tva_fr), but the overall pattern is predictable.
11 tools cover essential European business operations (working days, holidays, VAT, number formatting, IBAN, fiscal identifiers) without being overwhelming. Each tool serves a clear purpose and the count feels right for the server's scope.
The server covers a useful subset of European functionality but has notable gaps: holidays only for France, Portugal, and Spain; VAT rates for 18 countries; validation only for Portugal, Spain, and France. Many other EU countries, and common operations such as currency conversion, are left uncovered.
Available Tools
11 tools

calculate_working_days (Quality: A) · Read-only · Idempotent
Counts the number of working days between two dates (inclusive), excluding Saturdays, Sundays, and all 10 Portuguese national public holidays. Returns { start_date, end_date, working_days: number }. Use when calculating Portuguese invoice payment deadlines (30/60/90 days), legal notice periods, project milestones, SLA response times, or any business process governed by Portuguese working days. Input dates must be in YYYY-MM-DD format.
| Name | Required | Description | Default |
|---|---|---|---|
| end_date | Yes | End date in YYYY-MM-DD format, inclusive. Example: '2026-01-31' | |
| start_date | Yes | Start date in YYYY-MM-DD format, inclusive. Example: '2026-01-01' | |
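To make the counting rule concrete, here is a minimal sketch of the inclusive working-day count the description outlines. The holiday set is a placeholder supplied by the caller, and countWorkingDays is a hypothetical name, not the server's implementation; the server uses the 10 Portuguese national holidays.

```typescript
// Counts days in [startDate, endDate] that are neither Saturday, Sunday,
// nor present in the supplied set of ISO 'YYYY-MM-DD' holiday dates.
function countWorkingDays(
  startDate: string,
  endDate: string,
  holidays: Set<string>,
): number {
  let count = 0;
  const current = new Date(`${startDate}T00:00:00Z`);
  const end = new Date(`${endDate}T00:00:00Z`);
  while (current.getTime() <= end.getTime()) {
    const weekday = current.getUTCDay(); // 0 = Sunday, 6 = Saturday
    const iso = current.toISOString().slice(0, 10);
    if (weekday !== 0 && weekday !== 6 && !holidays.has(iso)) count++;
    current.setUTCDate(current.getUTCDate() + 1);
  }
  return count;
}
```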
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly states what the tool does (counts working days with stated exclusions) and that both endpoints are inclusive, but doesn't address timezone considerations or error handling for invalid dates. The description doesn't contradict annotations since none exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficient and front-loaded: core functionality first, then return shape and concrete use cases, with no redundancy. It's appropriately sized for a straightforward calculation tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a calculation tool with 2 parameters and no output schema, the description covers the purpose, exclusion rules, and return shape, but lacks some context: it doesn't indicate how invalid or reversed date ranges are handled, or whether the holidays match the data returned by get_portugal_holidays.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters with format requirements. The description doesn't add parameter-specific information beyond what's in the schema, such as supported date ranges, boundary conditions, or accepted format variations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('counts') and resource ('working days'), and distinguishes it from siblings by specifying exclusion criteria (weekends and Portuguese public holidays). It's not a tautology of the name, as it adds important qualifiers about what constitutes 'working days'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (calculating working days between dates with Portuguese holiday exclusions), but doesn't explicitly state when not to use it or name alternatives. For example, it doesn't mention that get_portugal_holidays could be used separately or that format_number_european might be needed for formatting results.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
format_number_european (Quality: B) · Read-only · Idempotent
Formats a number using the locale conventions of a specific European country, applying the correct decimal separator and thousands separator. Returns { original: number, formatted: string, locale: string, country_code: string }. Different European countries use different conventions — Portugal and most of continental Europe use '1.234,56' (dot as thousands, comma as decimal), while Ireland uses '1,234.56'. Supports PT, ES, FR, DE, IT, NL, BE, PL, SE, DK, FI, AT, IE, GR, HU, RO. Use when displaying prices, measurements, or any numeric value to end users in a specific European country.
| Name | Required | Description | Default |
|---|---|---|---|
| number | Yes | The numeric value to format. Example: 1234.56 | |
| decimals | No | Number of decimal places. Use 0 for whole numbers, 2 for prices. | 2 |
| country_code | Yes | Two-letter country code for the target locale. Example: 'PT', 'FR', 'DE' | |
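The behavior described above maps naturally onto Intl.NumberFormat. The sketch below is illustrative only: the country-to-locale mapping is an assumption (the server does not publish its mapping), and only a few of the 16 supported codes are shown.

```typescript
// Assumed mapping from country code to a representative locale tag.
const LOCALES: Record<string, string> = {
  PT: "pt-PT", FR: "fr-FR", DE: "de-DE", IE: "en-IE",
};

function formatNumberEuropean(
  value: number,
  countryCode: string,
  decimals = 2, // mirrors the tool's default of 2 decimal places
): string {
  const locale = LOCALES[countryCode];
  if (!locale) throw new Error(`Unsupported country code: ${countryCode}`);
  return new Intl.NumberFormat(locale, {
    minimumFractionDigits: decimals,
    maximumFractionDigits: decimals,
  }).format(value);
}

// formatNumberEuropean(1234.56, "DE") -> "1.234,56"
// formatNumberEuropean(1234.56, "IE") -> "1,234.56"
```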
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool formats numbers and documents the return shape, but doesn't describe behavioral traits like whether it's read-only (likely, but not confirmed) or how invalid inputs and unsupported country codes are handled. For a tool with no annotations, gaps remain in understanding its operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose ('Formats a number using the locale conventions of a specific European country'), and each later sentence adds value: return shape, separator conventions, supported countries, and use cases. Nothing is redundant, and the size suits a formatting tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description covers enough: it states what the tool does, documents the return shape, lists supported countries, and gives separator examples. Error conditions for invalid inputs or unsupported codes are the main omission, but it meets the threshold for a simple formatting operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with clear documentation for all parameters (number, decimals, country_code). Beyond the schema, the description spells out the separator conventions behind the locale and the set of valid country codes, which usefully constrains country_code. With high schema coverage, the baseline score of 3 is appropriate, as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('formats') and resource ('a number'), and while it doesn't explicitly differentiate itself from siblings, the function is distinct from data retrieval or validation tools like get_portugal_holidays or validate_iban. The separator examples ('1.234,56' vs '1,234.56') make the locale conventions concrete.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description names a usage context (displaying prices, measurements, or numeric values to end users in a specific European country) but gives no when-not-to guidance and names no alternatives, though none of the sibling tools overlap with formatting.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_france_holidays (Quality: A) · Read-only · Idempotent
Returns all French national public holidays for a given year as a structured list. Each holiday includes { date: 'YYYY-MM-DD', name: string, name_en: string }. Returns 11 mandatory holidays defined by French law. Easter-dependent holidays (Easter Monday, Ascension Thursday, Whit Monday) are dynamically calculated for the requested year using the Anonymous Gregorian algorithm. Use when calculating French business deadlines, delivery dates, or scheduling tasks that must avoid non-working days in France.
| Name | Required | Description | Default |
|---|---|---|---|
| year | Yes | Calendar year as a 4-digit integer. Example: 2026 | |
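The Anonymous Gregorian algorithm named in the description is a fixed arithmetic procedure, sketched below (easterSunday is an illustrative name). The movable French holidays are then offsets from the Easter Sunday it yields: Easter Monday +1 day, Ascension Thursday +39, Whit Monday +50.

```typescript
// Anonymous Gregorian (Meeus/Jones/Butcher) computus for Easter Sunday.
function easterSunday(year: number): Date {
  const a = year % 19;
  const b = Math.floor(year / 100);
  const c = year % 100;
  const d = Math.floor(b / 4);
  const e = b % 4;
  const f = Math.floor((b + 8) / 25);
  const g = Math.floor((b - f + 1) / 3);
  const h = (19 * a + b - d - g + 15) % 30;
  const i = Math.floor(c / 4);
  const k = c % 4;
  const l = (32 + 2 * e + 2 * i - h - k) % 7;
  const m = Math.floor((a + 11 * h + 22 * l) / 451);
  const month = Math.floor((h + l - 7 * m + 114) / 31); // 3 = March, 4 = April
  const day = ((h + l - 7 * m + 114) % 31) + 1;
  return new Date(Date.UTC(year, month - 1, day));
}

// easterSunday(2025) -> 2025-04-20, so Whit Monday 2025 falls 50 days later.
```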
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already confirm read-only and idempotent behavior. The description adds value by noting that Easter-dependent holidays are calculated dynamically for the given year, which is useful behavioral context beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences, front-loaded with the main purpose. Each sentence adds relevant information without redundancy. Minor improvement could merge the Easter-dependent note into the main description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description fully explains return content: all 11 national holidays, dates in YYYY-MM-DD format, names in French and English, and special handling of Easter-dependent holidays. Complete for the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage with a description for 'year'. The description adds meaning by specifying the output format (YYYY-MM-DD) and that names are returned in both French and English, enriching the parameter's semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with a clear verb+resource: 'Returns all French national public holidays for a given year as a structured list.' It explicitly distinguishes itself from sibling tools like get_portugal_holidays and get_spain_holidays by focusing on France.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios: 'Use when calculating French business deadlines, delivery dates, or scheduling tasks that must avoid non-working days in France.' It does not mention exclusions or alternatives, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_portugal_holidays (Quality: B) · Read-only · Idempotent
Returns all Portuguese national public holidays for a given year as a structured list. Each holiday includes { date: 'YYYY-MM-DD', name: string, name_en: string }. Returns 10 mandatory national holidays defined by Portuguese law. Use when calculating business deadlines, delivery dates, payment due dates, SLA periods, or scheduling tasks that must avoid non-working days in Portugal. Does not include municipal or regional holidays (e.g. Lisbon June 13, Porto June 24) which vary by city.
| Name | Required | Description | Default |
|---|---|---|---|
| year | Yes | Calendar year as a 4-digit integer. Example: 2026 | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the function, the return shape, and the municipal-holiday exclusion, but lacks behavioral details such as data source, update frequency, error handling, and rate limits. For a read-only tool with no annotations, this leaves gaps in understanding its operational characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description directly states the tool's purpose without unnecessary words and is front-loaded with the core functionality, followed by the return shape, use cases, and an explicit exclusion. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is still somewhat incomplete for a data retrieval tool: it shows the list format and holiday fields, but not potential errors or data freshness. The simplicity of the tool (1 parameter) doesn't fully compensate for these missing contextual elements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'year' fully documented in the schema. The description adds no additional parameter semantics beyond implying the year is required for filtering. Baseline 3 is appropriate as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns') and resource ('Portuguese public holidays'), with precise scope ('for a given year'). It distinguishes from sibling tools by focusing on holiday data retrieval rather than formatting, tax rates, or validation functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists concrete use cases (business deadlines, delivery dates, payment due dates, SLA periods) and flags the municipal-holiday exclusion, but it doesn't say when not to use the tool or name alternatives such as 'calculate_working_days'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_spain_holidays (Quality: B) · Read-only · Idempotent
Returns all Spanish national public holidays for a given year as a structured list. Each holiday includes { date: 'YYYY-MM-DD', name: string, name_en: string }. Returns 9 mandatory national holidays defined by Spanish law. Use when calculating business deadlines, delivery dates, or scheduling tasks that must avoid non-working days in Spain. Does not include regional holidays that vary by autonomous community (Catalonia, Madrid, Andalusia, etc.) — only nationally mandated holidays are returned.
| Name | Required | Description | Default |
|---|---|---|---|
| year | Yes | Calendar year as a 4-digit integer. Example: 2026 | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It documents the return shape and the regional-holiday exclusion, but doesn't cover rate limits, error handling, or whether the operation is read-only. For a tool with zero annotation coverage, gaps remain in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the core functionality, then gives the return shape, use cases, and an explicit regional-holiday exclusion, making it easy to parse and free of redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema, no annotations), the description is adequate but not deep. It shows the return fields and scope, but doesn't explain error behavior for invalid years, which matters for a tool with no structured output or annotation support. It meets basic needs but could be more informative.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'year' fully documented in the schema. The description adds no additional meaning beyond what the schema provides, such as format details or constraints. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Returns') and resource ('Spanish national public holidays for a given year'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'get_portugal_holidays' beyond the country specification, but the scope is well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists use cases (business deadlines, delivery dates, scheduling) but gives no guidance on when to prefer alternatives. It doesn't mention 'calculate_working_days', which overlaps in holiday-related contexts, and differentiates itself from 'get_portugal_holidays' only by country scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vat_rate (Quality: C) · Read-only · Idempotent
Returns all VAT (Value Added Tax) rates for a given EU country — standard, reduced, intermediate, and super-reduced rates where applicable, as numeric percentages. Returns { country, standard, reduced?, intermediate?, superreduced? } for supported countries, or { error, available } listing all valid codes if the country is not found. Supports 18 EU member states: PT, ES, FR, DE, IT, NL, BE, PL, SE, DK, FI, AT, IE, GR, HU, RO, CZ, HR. Use when calculating EU cross-border invoice tax, determining correct rate for e-commerce checkout by customer country, generating compliant VAT breakdowns, or any workflow requiring accurate and current EU VAT rates per jurisdiction.
| Name | Required | Description | Default |
|---|---|---|---|
| country_code | Yes | Two-letter ISO 3166-1 alpha-2 country code. Example: 'PT' for Portugal, 'FR' for France, 'DE' for Germany | |
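A typical follow-up computation is sketched here under the response shape quoted in the description; grossFromNet is a hypothetical helper, and the 23% figure is Portugal's standard rate, used for illustration.

```typescript
// Shape of a successful get_vat_rate response, per the tool description.
type VatRates = {
  country: string;
  standard: number;
  reduced?: number;
  intermediate?: number;
  superreduced?: number;
};

// Gross price from a net price using the standard rate, rounded to cents.
function grossFromNet(net: number, rates: VatRates): number {
  return Math.round(net * (1 + rates.standard / 100) * 100) / 100;
}

// grossFromNet(100, { country: "PT", standard: 23 }) -> 123
```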
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It documents the return shape, the error response for unsupported codes, and the supported country list, but doesn't cover whether the operation is read-only or subject to rate limits, leaving some gaps about how the tool behaves in practice.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and efficient: purpose first, then return shape, supported countries, and use cases. It wastes few words and is easy to grasp quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description carries most of the load: it shows the returned rate fields and the error response. It doesn't state effective dates or how current the rates are, which matters for financial data and could lead to misuse.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description doesn't add any parameter semantics beyond what the input schema provides. The schema has 100% coverage, clearly documenting the single required parameter 'country_code' with examples. Since the description doesn't elaborate further, it meets the baseline score of 3, as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns all VAT (Value Added Tax) rates for a given EU country.' It specifies the action ('Returns') and the resource (per-country VAT rates), making it easy to understand what the tool does. However, it doesn't explicitly differentiate itself from sibling tools like 'validate_nif' or 'validate_iban', which serve different purposes, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists use cases (cross-border invoice tax, e-commerce checkout rates, compliant VAT breakdowns) but doesn't suggest when not to use the tool (e.g., for countries outside the supported 18) or point to alternatives among the siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_iban (Quality: C) · Read-only · Idempotent
Validates an IBAN (International Bank Account Number) using the ISO 13616 MOD-97 algorithm. Supports 18 European countries: PT, ES, FR, DE, IT, NL, BE, PL, SE, DK, FI, AT, IE, GR, HU, RO, CZ, HR. Returns { valid: boolean, country: string, iban: string } — country is extracted from the 2-letter prefix. Returns { valid: false, reason: string } for malformed input. Spaces are automatically stripped before validation. Use when validating supplier bank details for SEPA transfers, processing direct debit mandates, verifying payment data in e-commerce checkouts, or any workflow requiring a verified EU bank account number. Validates structure and checksum only — does not confirm account existence.
| Name | Required | Description | Default |
|---|---|---|---|
| iban | Yes | European IBAN with or without spaces. Example: 'PT50 0002 0123 1234 5678 9015 4' or 'PT50000201231234567890154' | |
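For reference, a minimal sketch of the ISO 13616 MOD-97 check the description names: move the first four characters to the end, expand letters to numbers (A=10 ... Z=35), and require the resulting number mod 97 to equal 1. This mirrors the documented behavior (space stripping, structure and checksum only) but is not the server's code.

```typescript
function isValidIban(input: string): boolean {
  const iban = input.replace(/\s+/g, "").toUpperCase();
  if (!/^[A-Z]{2}\d{2}[A-Z0-9]+$/.test(iban)) return false;
  // Rearrange: body first, then country code and check digits.
  const rearranged = iban.slice(4) + iban.slice(0, 4);
  // Expand letters to two-digit numbers and reduce mod 97 incrementally
  // to avoid overflowing a native number.
  let remainder = 0;
  for (const ch of rearranged) {
    const expanded = /\d/.test(ch) ? ch : String(ch.charCodeAt(0) - 55);
    for (const digit of expanded) {
      remainder = (remainder * 10 + Number(digit)) % 97;
    }
  }
  return remainder === 1;
}
```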
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It names the algorithm (ISO 13616 MOD-97), describes the return shapes for valid and malformed input, and states the structure-and-checksum-only limitation; rate limits and read-only status are the remaining gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and efficient: algorithm, supported countries, return shapes, preprocessing, use cases, and an explicit limitation, with every part contributing value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description fills most of the gap for a validation tool: it documents the returned fields for both valid and malformed input and states its limitations. Error semantics beyond the 'reason' string are left implicit.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'iban' parameter documented as a European IBAN accepted with or without spaces, including examples. The tool description adds that spaces are stripped automatically before validation. Since the schema already provides adequate parameter information, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: validating an IBAN using the ISO 13616 MOD-97 algorithm. It specifies the verb ('validates') and resource ('IBAN'), and the 18 supported countries give useful scope. However, it doesn't explicitly differentiate itself from sibling tools like 'validate_nif' (which validates a different identifier type), so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives concrete usage scenarios (supplier bank details for SEPA transfers, direct debit mandates, e-commerce checkouts) but doesn't say when to prefer sibling tools like 'validate_nif', and its only stated non-goal is that it does not confirm account existence.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_nif (Quality: C) · Read-only · Idempotent
Validates a Portuguese NIF (Número de Identificação Fiscal) — the 9-digit tax identification number issued by the Portuguese Tax Authority (AT) to individuals and companies. Applies the official modulo-11 checksum algorithm to verify the check digit. Returns { valid: true, nif: string } for valid NIFs, or { valid: false, reason: string } for invalid format or failed checksum. First-digit rules are enforced: 1–3 for individuals, 5 for corporations, 6 for public entities, 7–8 for other entities, 9 for occasional taxpayers. Use when processing Portuguese invoices (faturas), onboarding suppliers, validating user registrations, or any fiscal compliance workflow. Does not query the AT database — offline format and checksum validation only.
| Name | Required | Description | Default |
|---|---|---|---|
| nif | Yes | 9-digit Portuguese NIF, with or without spaces. Example: '123456789' or '123 456 789' | |
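The modulo-11 check digit works as sketched below: weight the first eight digits 9 down to 2, and compare the complement of the weighted sum against the ninth digit. This is a checksum-only sketch; the first-digit rules the description lists are omitted for brevity.

```typescript
// Checksum-only sketch of Portuguese NIF validation (modulo-11).
function hasValidNifChecksum(input: string): boolean {
  const nif = input.replace(/\s+/g, "");
  if (!/^\d{9}$/.test(nif)) return false;
  // Weighted sum of digits 1-8 with weights 9, 8, ..., 2.
  let sum = 0;
  for (let i = 0; i < 8; i++) sum += Number(nif[i]) * (9 - i);
  const check = 11 - (sum % 11);
  const expected = check >= 10 ? 0 : check; // results of 10 and 11 map to 0
  return expected === Number(nif[8]);
}
```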
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure, and it covers much of it: the modulo-11 checksum algorithm, first-digit rules, both return shapes, and the offline-only limitation. Performance characteristics and error conditions beyond the 'reason' string remain undescribed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with the core functionality, then checksum details, return shapes, first-digit rules, use cases, and the offline limitation. Every sentence earns its place in a complete statement of purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a validation tool with no annotations and no output schema, the description covers what validation checks are performed (format, first digit, modulo-11 check digit) and what the return value looks like for both outcomes, so an agent should not need to guess about the tool's behavior or output format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, with the single parameter 'nif' documented as a 9-digit number accepted with or without spaces, including examples. The tool description adds the first-digit rules and the checksum algorithm, which enhance understanding of the parameter's format and constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('validates') and resource ('Portuguese NIF'), making it immediately understandable. However, it doesn't distinguish this tool from its sibling 'validate_iban', which performs a similar validation function but for different data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists concrete use cases (processing Portuguese invoices, onboarding suppliers, validating user registrations) and states the offline-only limitation, but it doesn't contrast itself with sibling validators such as 'validate_nif_es' or 'validate_iban', and gives no explicit when-not-to guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_nif_es (Quality: B) · Read-only · Idempotent
Validates Spanish tax identification numbers — NIF (DNI, 8 digits + check letter, for Spanish citizens), NIE (Número de Identidad de Extranjero, starts with X/Y/Z, for foreign residents), and CIF (Código de Identificación Fiscal, letter + 7 digits + control, for companies). Automatically detects the document type. Returns { valid: boolean, type: 'NIF'|'NIE'|'CIF', id: string }. Use when processing Spanish invoices, e-commerce orders, supplier registrations, or any document requiring a verified Spanish fiscal identifier.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Spanish NIF, NIE or CIF with or without spaces. Examples: '12345678Z' (NIF), 'X1234567L' (NIE), 'B12345678' (CIF) | |
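For the DNI and NIE cases, the check letter is the number modulo 23 indexed into a fixed alphabet; NIE prefixes X/Y/Z map to 0/1/2 first. A sketch of those two cases follows (isValidDniOrNie is an illustrative name; CIF control validation is more involved and omitted here):

```typescript
const DNI_LETTERS = "TRWAGMYFPDXBNJZSQVHLCKE";

// Validates the check letter of a Spanish DNI or NIE.
function isValidDniOrNie(input: string): boolean {
  const id = input.replace(/\s+/g, "").toUpperCase();
  // NIE: replace the leading X/Y/Z with 0/1/2, then treat it as a DNI.
  const normalized = id.replace(/^[XYZ]/, (p) => String("XYZ".indexOf(p)));
  if (!/^\d{8}[A-Z]$/.test(normalized)) return false;
  const expected = DNI_LETTERS[Number(normalized.slice(0, 8)) % 23];
  return normalized[8] === expected;
}

// Both schema examples check out: '12345678Z' (DNI) and 'X1234567L' (NIE).
```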
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It explains the three supported formats, automatic type detection, and the return shape, but not how failures are reported or any operational limits, leaving some gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core function and avoids redundancy; the per-format breakdown (NIF, NIE, CIF) is dense but every clause carries information, and the use cases close it out efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (validating three ID types) and the lack of annotations or output schema, the description does reasonable work: it gives per-format structure, automatic type detection, and the success return shape. What's missing is the failure behavior for invalid formats and edge cases, so some guessing remains.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the parameter 'id' documented with per-type examples ('12345678Z', 'X1234567L', 'B12345678'). The tool description adds the structural rules for each format (digit counts, prefixes, check characters), going beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('validates') and resources (Spanish NIF, NIE, CIF). It distinguishes from sibling tools like 'validate_iban' by specifying the type of Spanish identification numbers being validated, making it unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description names usage scenarios (Spanish invoices, e-commerce orders, supplier registrations) but offers no guidance on when to use sibling validators instead, such as 'validate_nif' for Portuguese identifiers, and no when-not-to-use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_siret (Quality: A) · Read-only · Idempotent
Validates a French SIRET (Système d'Identification du Répertoire des Établissements) number using the official Luhn algorithm. SIRET is a 14-digit number — the first 9 digits are the SIREN (company identifier) and the last 5 digits identify the specific establishment. Returns { valid: boolean, siren: string, establishment: string, siret: string }. Use when processing French invoices (factures), validating supplier registrations, or any B2B transaction requiring a verified French business establishment identifier. Handles the La Poste special case automatically.
| Name | Required | Description | Default |
|---|---|---|---|
| siret | Yes | 14-digit French SIRET, with or without spaces/dashes. Example: '732 829 320 00074' or '73282932000074' | |
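The Luhn check over the 14 digits can be sketched as follows; note this sketch deliberately omits the La Poste special case the description says the tool handles.

```typescript
// Luhn checksum over a 14-digit SIRET: double every second digit from the
// right, subtract 9 from two-digit results, and require sum % 10 === 0.
function isValidSiret(input: string): boolean {
  const siret = input.replace(/[\s-]/g, "");
  if (!/^\d{14}$/.test(siret)) return false;
  let sum = 0;
  for (let i = 0; i < 14; i++) {
    let d = Number(siret[13 - i]);
    if (i % 2 === 1) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return sum % 10 === 0;
}

// The schema example '73282932000074' passes this check.
```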
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable context beyond annotations: specifies Luhn algorithm, format handling, and return fields. No contradiction with readOnlyHint and idempotentHint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise and front-loaded with purpose and algorithm, then structure, return values, and usage context. No unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description clearly states what is returned. Parameter details, algorithm, and use cases are sufficient for correct agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers parameter fully (100% coverage), and description adds automatic space/dash removal and an example, enhancing clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool validates French SIRET numbers using Luhn algorithm, explains what SIRET is, and distinguishes it from sibling tools like validate_iban and validate_nif.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases (French invoices, supplier registrations, B2B transactions) but does not explicitly state when not to use or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_tva_fr (Quality: A) · Read-only · Idempotent
Validates a French TVA intracom (VAT) number — the EU VAT identifier for French companies. Format is 'FR' + 2 alphanumeric key characters + 9-digit SIREN. Returns { valid: boolean, key: string, siren: string, tva: string }. When the key is numeric, validates using the official formula: key = (12 + 3 × (SIREN mod 97)) mod 97. Use when validating French supplier VAT numbers, processing cross-border EU invoices, or any intra-EU transaction requiring a verified French VAT identifier.
| Name | Required | Description | Default |
|---|---|---|---|
| tva | Yes | French TVA intracom number with or without spaces. Example: 'FR 40 303 265 045' or 'FR40303265045' | |
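The numeric-key formula is easy to verify with the schema's own example: 303265045 mod 97 = 74, so key = (12 + 3 × 74) mod 97 = 234 mod 97 = 40, matching the 'FR40' prefix. A sketch of the numeric-key case only (isValidTvaFrNumericKey is an illustrative name):

```typescript
// Validates a French TVA intracom number when the 2-character key is numeric.
function isValidTvaFrNumericKey(input: string): boolean {
  const tva = input.replace(/\s+/g, "").toUpperCase();
  const match = /^FR(\d{2})(\d{9})$/.exec(tva);
  if (!match) return false; // alphanumeric keys need separate handling
  const [, key, siren] = match;
  const expected = (12 + 3 * (Number(siren) % 97)) % 97;
  return Number(key) === expected;
}
```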
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly and idempotent. The description adds value by specifying the expected format, the numeric-key validation formula, and the return content (validity, key, SIREN). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise sentences with no fluff: front-loaded with the main purpose, then format, returns, the validation rule, and usage. Each sentence adds essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple with one parameter. Description covers input format, preprocessing, and return values. Missing details on error handling or edge cases, but still sufficient for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a good parameter description and examples. The tool description adds extra detail on the number format (2 alphanumeric + 9 digits), which goes beyond the schema description. Together, they provide clear parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool validates a French TVA number, explains its structure (FR + 2 alphanumeric + 9 digits), and distinguishes it from sibling validation tools (IBAN, NIF, SIRET) by specifying the French VAT context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases: processing French invoices, validating supplier VAT numbers, cross-border EU transactions. Does not mention when not to use, but the specificity inherently excludes non-French VAT scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.