ENTIA Entity Verification MCP
Server Details
Spain's deepest business intelligence for AI agents. Verify companies via BORME, VIES, GLEIF and Wikidata. Access 52M+ records across 34 countries: entity lookup, VAT validation, BORME filings, zone economic profiles, and competitor search.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.5/5 across 8 of 8 tools scored.
Multiple tools have overlapping purposes: entity_lookup and borme_lookup both return BORME corporate history, and search_entities and get_competitors both list businesses by sector and location with nearly identical outputs. An agent could easily pick the wrong one.
Tool names are inconsistent: some start with verbs (get_competitors, search_entities, verify_vat), others are noun+noun (entity_lookup, zone_profile) or adjective+noun (ai_ready_profile). There is no clear verb_noun or noun_verb pattern, mixing styles arbitrarily.
With 8 tools, the count is reasonable for a domain focused on entity verification and business data. It feels slightly padded due to redundant tools (borme_lookup vs entity_lookup, get_competitors vs search_entities), but overall the scope is well-served.
The tool surface covers core operations (lookup, search, VAT verification, profile generation) but has notable gaps: no tool for updating or creating data (though not expected), and the redundancy of borme_lookup and get_competitors suggests missing distinct functionality like comparing entities or batch operations.
Available Tools
8 tools

ai_ready_profile: AI-Ready Profile (JSON-LD for LLMs) · A · Read-only · Idempotent
Convert any verified business into Schema.org JSON-LD ready to be cited by LLMs.
Use this when a user asks 'how do I make my business AI-ready?', 'give me JSON-LD for ChatGPT', 'what does an LLM need to recommend my company?', 'help my company appear in AI answers', or wants structured data optimized for LLM ingestion.
Returns the complete Schema.org @graph that ENTIA generates internally for every verified entity: 20+ fields plus 11 additionalProperty entries, including:
Verified legal identity (name, address, phone, geo, VAT)
ENTIA Verification Report (trust score 0-100, source chain, reconciliation)
Socioeconomic context (income, segment, ICE index for the entity's postal code)
Schema.org type mapped to the right business class (Dentist, LegalService, etc)
This is the same JSON-LD ENTIA serves at /v1/identity/{cc}/{sector}/{city}/{slug} and the same JSON-LD that LLMs (ChatGPT, Gemini, Claude, Perplexity) use for citations.
Inject the result inside your website's <head> tag and your business becomes eligible to be referenced by AI agents when users ask about your sector or location.
Free tier: 5 calls/day per IP. Pro: 1,000/month. Scale: 10,000/month.
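As a rough sketch of the output shape described above, here is a hypothetical minimal @graph node and the kind of embed tag embed_html=true is described as returning (all field values are invented; real responses carry 20+ fields plus the verification report):

```python
import json

# Hypothetical minimal node; real responses carry 20+ fields plus
# additionalProperty entries and the ENTIA Verification Report.
graph = {
    "@context": "https://schema.org",
    "@graph": [{
        "@type": "Dentist",                      # mapped business class
        "name": "Clinica Ejemplo SL",            # invented example entity
        "vatID": "ESB80988678",
        "address": {"@type": "PostalAddress", "addressLocality": "Madrid"},
    }],
}

# A ready-to-paste tag for your site's <head>, as the embed_html
# parameter describes.
embed = f'<script type="application/ld+json">{json.dumps(graph)}</script>'
```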
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Company name (e.g. 'Telefonica'), Spanish CIF (e.g. A28015865), EU VAT (e.g. ESA28015865), or LEI | |
| country | No | ISO country code (ES, GB, FR, DE...). Auto-detected when possible. | |
| embed_html | No | If true, also returns a ready-to-paste <script type='application/ld+json'> tag for direct injection in your website's <head>. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, idempotentHint, and non-destructive behavior. The description adds significant context beyond annotations by detailing the output structure (20+ fields, verification report, socioeconomic data) and noting it's the same JSON-LD served at a specific endpoint. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is fairly detailed but well-structured, starting with a concise purpose statement followed by bullet points. Every sentence adds value, though it is slightly verbose by the strictest standard. Still, it respects the principle of front-loading.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (implied) and only 3 parameters with full schema coverage, the description adequately covers what the tool does, what it returns, and how to use it. The output details (fields, verification report) are well explained, making it complete for agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for each parameter. The description enhances the query parameter by listing specific accepted formats (e.g., CIF, VAT, LEI) beyond the schema's example, adding practical value for the agent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts any verified business into Schema.org JSON-LD for LLMs, using a specific verb and resource. It distinguishes itself from sibling tools (like borme_lookup, entity_lookup) by focusing on AI-readiness and structured data output.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists user queries that trigger this tool (e.g., 'how do I make my business AI-ready?'), providing clear when-to-use guidance. While it does not explicitly mention when not to use, the context signals and sibling names imply it's complementary to other tools, so a 4 is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
borme_lookup: BORME Lookup · A · Read-only · Idempotent
Access the complete corporate history of any Spanish company from the BORME (Boletín Oficial del Registro Mercantil). Use when a user asks 'who are the directors of this company?', 'when was this company founded?', 'has this company had any capital changes?', 'what is this company's corporate purpose?', or needs official mercantile records for due diligence.
40.3 million official mercantile acts from 3.4 million unique Spanish companies, covering 2009-2026.
Returns: company name, CIF, province, act types (constitución, nombramientos, ceses, ampliaciones de capital, reducciones, disoluciones, fusiones, escisiones), objeto social, CNAE code, and BORME publication dates.
Source: Registro Mercantil de España — official government gazette data extracted from BORME PDFs. Legally authoritative.
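At the protocol level, invoking this tool is an ordinary MCP tools/call request; a sketch of the JSON-RPC payload (the argument values are invented):

```python
# Hypothetical MCP tools/call payload for borme_lookup. 'q', 'cif',
# 'name', and 'company' are accepted as aliases for 'query'.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "borme_lookup",
        "arguments": {"query": "B80988678", "limit": 10},  # limit range: 1-50
    },
}
```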
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Alias for query — pass here if your client serializes 'q' | |
| cif | No | Alias for query — pass CIF here if your client serializes 'cif' instead of 'query' | |
| name | No | Alias for query — pass company name here if your client serializes 'name' | |
| limit | No | Max mercantile acts to return (1-50) | |
| query | No | Spanish CIF (e.g. B80988678) or company name | |
| company | No | Alias for query — pass company name here if your client serializes 'company' |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context: data range (2009-2026), scale (40.3M acts, 3.4M companies), return fields, and authoritative source. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (~100 words), front-loaded with purpose and usage examples, followed by stats, returns, and source. Every sentence adds value, no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given an output schema exists, the description sufficiently covers purpose, usage scenarios, return fields, data scope, and source. It provides complete context for understanding and invoking the tool without gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with parameter descriptions already explaining that q, cif, name, company are aliases for query. The description does not add new parameter semantics beyond restating query methods. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Access' and specific resource 'complete corporate history of any Spanish company from the BORME'. It provides concrete example queries, distinguishing it from sibling tools like entity_lookup or search_entities by focusing on official mercantile records.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists example user queries ('who are the directors?', 'when was it founded?') as triggers, giving clear context for use. However, it does not explicitly name sibling tools or state when not to use this tool, though the specificity implies it is for official Spanish company records.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_lookup: Entity Lookup · A · Read-only · Idempotent
Retrieve the verified identity profile of any business across 34 countries (5.5M entities). Use this when a user asks 'is this company legit?', 'who is behind this company?', 'check this CIF/VAT/LEI', or needs due diligence on any business.
Cross-references 5 authoritative sources in parallel:
ENTIA Registry (5.5M verified entities)
BORME / Registro Mercantil (40.3M official acts, 2009-2026)
VIES (EU VAT validation, 27 member states)
GLEIF (LEI global identifiers)
Wikidata (structured knowledge graph)
Returns: legal name, address, trust score (0-100), verification status, data coverage percentage, full source chain with timestamps, and BORME corporate history.
Accepts: CIF (B80988678), EU VAT with prefix (ESB80988678, FR12345678901), LEI (20-char alphanumeric), or company name.
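The accepted formats can be told apart with simple pattern checks; a rough client-side classifier (a sketch only; real validation, including CIF check digits, happens server-side):

```python
import re

def classify_identifier(value: str) -> str:
    """Classify an input per the formats listed above (sketch only)."""
    v = value.strip().upper().replace(" ", "")
    if re.fullmatch(r"[A-Z]\d{8}", v):
        return "cif"        # Spanish CIF, e.g. B80988678
    if re.fullmatch(r"[A-Z]{2}(?=[A-Z0-9]*\d)[A-Z0-9]{8,12}", v):
        return "vat"        # EU VAT with country prefix, e.g. ESB80988678
    if re.fullmatch(r"[A-Z0-9]{20}", v):
        return "lei"        # 20-char alphanumeric LEI
    return "name"           # fall back to company-name search
```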
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Alias for query — pass here if your client serializes 'q' instead of 'query' | |
| name | No | Alias for query — pass company name here if your client serializes 'name' | |
| query | No | CIF (e.g. B80988678), EU VAT (e.g. ESB80988678, FR12345678901), LEI (20 alphanumeric chars), or company name | |
| country | No | ISO country code (ES, GB, FR...). Auto-detected from VAT prefix if not provided |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, openWorldHint, idempotentHint, and non-destructive. The description adds behavioral details: cross-references 5 authoritative sources, returns trust score and source chain, and accepts multiple identifier formats. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately lengthy but well-structured: purpose first, then usage scenarios, then sources, then return fields, then accepted formats. Every sentence adds value, though some redundancy could be trimmed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (34 countries, 5 sources, multiple identifier formats), the description covers inputs, processing logic, outputs, and use cases. An output schema exists (not shown but indicated), so return value details are covered. Complete for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for each parameter. The description adds extra meaning with examples (CIF, VAT, LEI), explains auto-detection of country from VAT prefix, and clarifies alias parameters (q, name). This goes beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves verified identity profiles for businesses across 34 countries, with specific use cases like 'is this company legit?'. It distinguishes itself from siblings like search_entities, borme_lookup, and verify_vat by focusing on multi-source entity verification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use scenarios with user queries, but does not explicitly mention when not to use it or compare with alternatives. However, the context signals list sibling tools, and the description implies unique capability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_competitors: Get Competitors · A · Read-only · Idempotent
Find all verified businesses competing in the same sector and city. Use when a user asks 'who are my competitors?', 'how many dentists are in Valencia?', 'list all law firms in Bilbao', 'what's the market density for gyms in Madrid?', or needs competitive intelligence for any sector and location.
Searches ENTIA's verified registry (5.5M entities, 26 sectors in Spain) for businesses matching the sector + city combination.
Returns for each competitor: legal name, full address, phone, website, rating, and canonical Entia Home URL for the full verified profile.
Useful for: competitive analysis, market entry assessment, franchise territory planning, zone saturation mapping, and local SEO benchmarking.
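Given the overlap with search_entities noted in the quality summary, the practical difference shows up in the required arguments; a sketch with invented values:

```python
# get_competitors: sector and city are required; no free-text query.
competitors_args = {"sector": "dental", "city": "Valencia", "limit": 30}

# search_entities: free-text 'q' is required; sector and city are
# optional filters, and the limit cap is 50 rather than 30.
search_args = {"q": "clinica dental", "sector": "dental",
               "city": "Valencia", "limit": 50}
```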
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | City name (e.g. Madrid, Barcelona, Valencia, Sevilla, Bilbao, Malaga) | |
| limit | No | Max results (1-30) | |
| sector | Yes | Business sector: dental, legal, estetica, psicologia, talleres, veterinarios, reformas, inmobiliarias, asesorias, gimnasios, medicos, arquitectos, farmacias, opticas, fisioterapia, podologia, logopedia, nutricion, enfermeria... | |
| country | No | ISO country code | ES |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true. The description adds context about the registry size and scope (5.5M entities, 26 sectors in Spain) and what fields are returned, which is helpful beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear opening statement, followed by usage examples, data source details, returned fields, and use cases. No redundant sentences; every line contributes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description provides complete context: what the tool does, when to use it, data source, and return fields. No gaps identified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with detailed descriptions for each parameter. The description reinforces these with examples (e.g., city names, sector list) and usage guidance for the limit parameter, adding value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds verified businesses in a sector and city. It uses specific verbs ('find all verified businesses competing') and distinguishes itself from siblings like search_entities by focusing on competitive intelligence.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios with example user queries. It lists applications like competitive analysis and market entry. However, it does not explicitly state when not to use it or how it differs from sibling tools like entity_lookup.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_showcase: Showcase (Curated Entity Examples) · A · Read-only · Idempotent
Curated examples of fully-enriched ENTIA entity profiles (FREE — does NOT count against quota).
Returns up to 50 prominent businesses (IBEX 35 + EU/LATAM giants like Mercadona, El Corte Inglés, SAP, LVMH, Volkswagen) with the full ENTIA stack:
Node 1: WebPage canonical metadata
Node 2: Organization with VAT, address, Wikidata Q-ID, sameAs
Node 3: Verification Report (BORME, VIES, source chain)
Node 4: Territorial Profile (income, unemployment, FTTH, ICE, property €/m² — Spain only)
Recent BORME acts (top 10) + total count
Top 5 sector competitors
Use this BEFORE specific lookups to see the data depth available, sample JSON-LD shape, or to build demos. This tool is FREE and bypasses the 5-req/day free-tier limit.
Categories: 'ibex' = IBEX 35 only · 'tech' / 'banking' / 'energy' / 'retail' = sector filter · 'spain' / 'eu' = country · 'all' = no filter.
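A small helper can enforce the documented category values and limit bounds before calling the tool; a sketch (the helper name is invented):

```python
# Documented filter values; 'all' is the default.
CATEGORIES = {"all", "ibex", "tech", "banking", "energy", "retail", "spain", "eu"}

def showcase_args(category: str = "all", limit: int = 10) -> dict:
    """Build get_showcase arguments within the documented bounds (sketch)."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category!r}")
    return {"category": category, "limit": max(1, min(limit, 50))}
```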
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max entities to return (default 10, max 50) | |
| category | No | Filter: 'all', 'ibex', 'tech', 'banking', 'energy', 'retail', 'spain', 'eu' | all |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, idempotentHint, and destructiveHint=false. The description adds behavioral details like free quota bypass, max 50 entities, and that it does not count against quota. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise yet comprehensive, with clear structure using bullet points for categories. Every sentence adds value, and the most important information (purpose, free, usage guidance) is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description appropriately focuses on usage and capability. It covers purpose, data depth, free status, and category filters, making it fully informative for an agent to decide when to use this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are already documented. The description adds meaningful context for the category parameter with examples and explains the default and max for limit, going beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns curated examples of fully-enriched ENTIA entity profiles, with specific examples of entities and data nodes. It distinguishes itself from siblings by being free and bypassing quota limits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises using this tool before specific lookups for data depth, sample JSON-LD shape, or demos. Also clarifies it's free and bypasses the free-tier limit, with clear category options.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_entities: Search Entities · A · Read-only · Idempotent
Search the full verified business registry of Spain and 33 other countries. Use when a user asks 'find me a dentist in Madrid', 'list law firms in Barcelona', 'what veterinary clinics are in Sevilla', or any query that needs a list of real, verified businesses by sector and location.
Covers 5.5M entities. Spain has the deepest coverage: 1.4M entities across 26 professional sectors including dental, legal, medical, veterinary, psychology, real estate, automotive repair, aesthetics, accounting, gyms, and more.
Every result includes: verified legal name, full address, phone, website, professional sector, and canonical Entia Home URL. This is official registry data — not scraped from Google Maps or estimated.
| Name | Required | Description | Default |
|---|---|---|---|
| q | Yes | Company name or partial name to search | |
| city | No | City name (e.g. Madrid, Barcelona, Valencia, Sevilla, London) | |
| limit | No | Max results (1-50) | |
| sector | No | Business sector filter: dental, legal, estetica, psicologia, talleres, veterinarios, reformas, inmobiliarias, asesorias, gimnasios, medicos, arquitectos, farmacias, opticas, fisioterapia, podologia, logopedia, nutricion, enfermeria... | |
| country | No | ISO country code | ES |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds context beyond annotations: data is official registry, covers 5.5M entities, and includes specific sectors. Discloses data source and depth, though doesn't mention rate limits or auth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear opening, examples, statistics, and result details. Every sentence serves a purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage, data source, result contents, and coverage. With an output schema present, no need to explain return format further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description enhances by providing example cities and explaining sectors in more detail, adding value beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool searches the verified business registry of Spain and 33 other countries. Provides specific examples like 'find me a dentist in Madrid' and distinguishes from general Google Maps data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes when to use: when a user needs a list of real, verified businesses by sector and location. Offers concrete query examples. Does not explicitly exclude cases but covers common scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_vat: Verify EU VAT · A · Read-only · Idempotent
Validate any European VAT number in real-time against the official EU VIES system. Use when a user asks 'is this VAT number valid?', 'verify this company's tax ID', 'check if this EU company is registered', or needs VAT validation for invoicing, compliance, or KYC.
Real-time query to the European Commission's VIES database — the same system used by tax authorities across all 27 EU member states.
Returns: valid/invalid status, official registered company name, and registered address as recorded by the national tax authority. This is the definitive answer — not an estimate or cache.
Note on disclosure: Spain (AEAT), Germany, and a few other EU member states do NOT disclose company name/address via VIES for privacy reasons. In those cases status=valid confirms the VAT is registered, but name and address return disclosure_restricted with a pointer to the authoritative registry (BORME for Spain).
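Callers should branch on the disclosure case described above; a sketch, assuming hypothetical response field names ('valid', 'name'):

```python
def summarize_vies(result: dict) -> str:
    """Interpret a verify_vat result (field names are assumptions)."""
    if not result.get("valid"):
        return "invalid: VAT not registered in VIES"
    if result.get("name") == "disclosure_restricted":
        # Spain (AEAT), Germany, etc. withhold name/address via VIES;
        # fall back to the authoritative registry (BORME for Spain).
        return "valid, identity withheld: check national registry"
    return f"valid: {result.get('name')}"
```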
| Name | Required | Description | Default |
|---|---|---|---|
| cif | No | Alias for vat_id — pass Spanish CIF here (with or without ES prefix) | |
| vat | No | Alias for vat_id — pass VAT here if your client serializes 'vat' | |
| vat_id | No | EU VAT number with country prefix (e.g. ESB80988678, FR12345678901, DE123456789) |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, idempotentHint, etc. The description adds significant behavioral context: real-time query to official database, definitive answer (not cached), and limitations on disclosure for certain countries.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with a clear opening, usage scenarios, source note, and limitations. Every sentence serves a purpose; no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description covers all essential aspects: source, real-time nature, definitive answer, and disclosure restrictions. Adequate for a tool with 3 optional parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions. The description adds value by explaining the aliases (cif, vat) and the expected format for vat_id (country prefix).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('validate') and resource ('European VAT number') against the official VIES system. It distinguishes from sibling tools (none are VAT-related) and provides a specific verb+resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists when to use with example user queries and use cases (invoicing, compliance, KYC). Lacks explicit when-not-to-use or alternatives among siblings, but the context makes it clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
zone_profile: Zone Socioeconomic Profile · A · Read-only · Idempotent
Get the complete socioeconomic profile for any Spanish postal code — income, unemployment, population, and business density. Use when a user asks 'what's the economic level of this area?', 'is this a good zone to open a business?', 'what's the average income in 28001?', 'how many businesses are in this postal code?', or needs economic context for real estate, investment, or market analysis.
Covers all 11,241 Spanish postal codes with data from 4 official sources:
AEAT (Agencia Tributaria): gross/average annual income, net monthly salary, tax declarations, social security contributions
SEPE (Servicio Público de Empleo): registered unemployment, unemployment-to-declarations ratio
INE (Padrón Municipal 2025): population by municipality and sex
INE (DIRCE): business count by CNAE sector
ENTIA computes: Economic Capacity Index (ICE, 1-10) and Economic Segment (BAJO / MEDIO / ALTO / PREMIUM).
This is the most granular open economic data available for Spain at postal code level.
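The ICE score and segment labels above are returned by the tool; a consumer might bucket the 1-10 score itself, for example to style results. The cut-off points in this sketch are illustrative assumptions, not ENTIA's published thresholds — only the four segment names come from the tool description.

```python
def segment_from_ice(ice: int) -> str:
    """Map the Economic Capacity Index (ICE, 1-10) to a segment label.

    The four labels (BAJO/MEDIO/ALTO/PREMIUM) come from the tool
    description; the cut-offs below are illustrative assumptions.
    """
    if not 1 <= ice <= 10:
        raise ValueError("ICE is defined on the 1-10 scale")
    if ice <= 3:
        return "BAJO"
    if ice <= 6:
        return "MEDIO"
    if ice <= 8:
        return "ALTO"
    return "PREMIUM"

print(segment_from_ice(9))  # "PREMIUM" under these assumed cut-offs
```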
| Name | Required | Description | Default |
|---|---|---|---|
| cp | No | Alias for postal_code — pass here if your client serializes 'cp' instead of 'postal_code' | |
| postal_code | No | Spanish postal code (5 digits, e.g. 28001, 08001, 41001) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds value by detailing data sources (AEAT, SEPE, INE), coverage (all 11,241 Spanish postal codes), and computed indices (ICE index, Economic Segment), which are beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is informative but slightly lengthy. It effectively front-loads the purpose and use cases, then lists data sources and computed indices in a structured way. Every sentence adds value, but it could be trimmed slightly without losing substance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (return type unknown but declared), the description covers all necessary aspects: purpose, usage, parameters, data sources, coverage, and computed fields. It is fully self-contained for an agent to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description further clarifies the 'cp' alias for clients that serialize 'cp' instead of 'postal_code' and provides example codes. This adds practical context beyond the schema's basic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a complete socioeconomic profile for Spanish postal codes, listing specific data points (income, unemployment, population, business density) and distinguishing it from siblings which cover different entities (e.g., entity_lookup, borme_lookup). The verb 'Get' plus resource 'socioeconomic profile' is precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit example queries (e.g., 'what's the economic level of this area?', 'is this a good zone to open a business?') and contexts (real estate, investment, market analysis). However, it does not explicitly mention when not to use it or point to alternative sibling tools for other use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
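As a minimal sketch of the claim step, the snippet below writes the `/.well-known/glama.json` file shown above into a web root. The `webroot` path and helper name are hypothetical; only the manifest structure comes from the instructions.

```python
import json
from pathlib import Path

def write_glama_manifest(webroot: Path, email: str) -> Path:
    """Write the /.well-known/glama.json claim file described above.

    `webroot` is the directory your server serves static files from
    (a hypothetical example path below); the email must match the one
    on your Glama account.
    """
    manifest = {
        "$schema": "https://glama.ai/mcp/schemas/connector.json",
        "maintainers": [{"email": email}],
    }
    target = webroot / ".well-known" / "glama.json"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(json.dumps(manifest, indent=2))
    return target

# e.g. write_glama_manifest(Path("/var/www/html"), "your-email@example.com")
```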
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.