Glama

ENTIA Entity Verification

Server Details

Verify any business in 34 countries. Sources: BORME (40M+ acts), VIES, GLEIF, Wikidata. Free tier: 100 requests/day.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: ENTIA-IA/entia-mcp-server
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.5/5 across 6 of 6 tools scored.

Server Coherence: A
Disambiguation: 3/5

There is overlap between entity_lookup and search_entities as both allow searching by name, which could cause confusion. However, other tools are distinct in purpose.

Naming Consistency: 3/5

Most tools use verb_noun pattern (get_entia_home, get_platform_stats, run_risk_audit, search_entities), but entity_lookup uses noun_verb and lookup_by_domain uses a longer phrase, breaking consistency.

Tool Count: 5/5

With 6 tools covering entity lookup, search, Entia Home retrieval, platform stats, domain lookup, and risk audit, the count is well-scoped for an entity verification server.

Completeness: 4/5

The set covers key verification operations, but the domain lookup tool is not yet functional, and there is no batch or comparison tool. Still, core workflows are supported.

Available Tools

6 tools
entity_lookup (A)

Look up any business entity by name, CIF/NIF, EU VAT ID, or LEI code. Returns identity data, trust score (VERIFIED/PARTIAL/UNVERIFIED), and cross-verification against BORME, VIES, GLEIF, and OFAC. Covers 5.5M+ registered entities across 34 countries. Enrichment depth varies: ES has full socioeconomic data, GB/FR have name+address only. Check the data_coverage field in the response to see exactly what is populated. No API key required.

Parameters (JSON Schema)
q (required): Company name, CIF/NIF (e.g. B82846825), EU VAT ID (e.g. ESB82846825), or LEI code (20 chars). The API auto-detects the input type.
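Because all identifier types funnel through the single `q` parameter, a caller only needs to vary one value. The sketch below builds the request envelope, assuming the standard MCP JSON-RPC `tools/call` shape; the `entity_lookup_call` helper name is illustrative, not part of the server.

```python
import json

def entity_lookup_call(q: str, call_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call request for entity_lookup.

    The server auto-detects whether `q` is a company name, a CIF/NIF,
    an EU VAT ID, or a 20-character LEI code.
    """
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": "entity_lookup", "arguments": {"q": q}},
    }

# All three forms target the same parameter; the API decides the type.
for q in ("B82846825", "ESB82846825", "Clinica Dental Sonrisa"):
    print(json.dumps(entity_lookup_call(q)))
```

After sending, remember to read the `data_coverage` field in the response before relying on enrichment fields, since depth varies by country.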
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses behavior: returns identity data, trust score (VERIFIED/PARTIAL/UNVERIFIED), cross-verification against BORME, VIES, GLEIF, OFAC. It transparently notes enrichment depth varies by country and advises checking the data_coverage field. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a handful of short sentences covering purpose, return data, coverage, limitations, and guidance. No wasted words; it is front-loaded with the core action, and every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple identifier types, cross-references, country-specific depth) and no output schema, the description explains return fields, coverage scope, and how to interpret results. It is complete enough for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'q' has full schema coverage, but the description adds significant value: explains accepted input types (name, CIF/NIF, EU VAT ID, LEI), provides examples, and states auto-detection. This goes beyond the schema's description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Look up any business entity by name, CIF/NIF, EU VAT ID, or LEI code.' It specifies the resource and multiple lookup methods, distinguishing it from siblings like search_entities (broader search) and lookup_by_domain (domain-based). The verb 'look up' is specific and actionable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly tells the agent when to use this tool (when you have a specific entity name or identifier). It provides coverage scope ('34 countries') and examples. However, it does not explicitly contrast with siblings like search_entities or lookup_by_domain, nor does it state when not to use it. The context makes it clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_entia_home (A)

Retrieve the full Schema.org JSON-LD @graph for a registered entity's Entia Home page. Returns up to 4 nodes: (1) WebPage canonical metadata, (2) Entity identity with address, geo, identifiers, and official sources, (3) Verification Report with HMAC signature and per-source confidence levels, (4) Territorial socioeconomic profile (ES only: INE/SEPE/Hacienda). Not all entities have an Entia Home — only ~500K published pages exist. Use search_entities first if you do not know the exact path. No API key required.

Parameters (JSON Schema)
city (required): City slug, lowercase with hyphens (e.g. "madrid", "barcelona", "london")
slug (required): URL-safe business name slug (e.g. "clinica-dental-sonrisa")
sector (required): Industry slug (e.g. "dental", "legal", "talleres", "estetica", "inmobiliarias")
country (required): ISO 3166-1 alpha-2 country code, lowercase (e.g. "es", "gb", "fr")
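All four parameters are lowercase slugs that together form the page path. A minimal client-side sketch of assembling them is shown below; the exact server-side slug rules are an assumption, and `slugify`/`get_entia_home_args` are hypothetical helper names.

```python
import re
import unicodedata

def slugify(text: str) -> str:
    """Lowercase-hyphen slug (assumed convention), e.g.
    "Clínica Dental Sonrisa" -> "clinica-dental-sonrisa"."""
    # Strip accents, then collapse any non-alphanumeric run into a hyphen.
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def get_entia_home_args(country: str, sector: str, city: str, name: str) -> dict:
    """Arguments dict for get_entia_home; all four fields are required."""
    return {
        "country": country.lower(),
        "sector": sector.lower(),
        "city": slugify(city),
        "slug": slugify(name),
    }

print(get_entia_home_args("ES", "dental", "Madrid", "Clínica Dental Sonrisa"))
```

If the guessed slugs return nothing, the description's advice applies: call search_entities first to recover the exact path.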
Behavior: 4/5

With no annotations, the description carries the full burden of behavioral disclosure. It explains that the tool returns up to 4 nodes with specific content, mentions ES-only data, and states no API key is required. However, it does not specify what is returned when the entity does not exist (e.g., an empty result or an error), which is a minor gap.

Conciseness: 4/5

The description is a single dense paragraph that efficiently packs purpose, output details, limitations, and usage guidance. It is not overly long, though it could be slightly more structured with bullet points for the four nodes.

Completeness: 4/5

Given the input schema fully covers parameters and there is no output schema, the description adequately explains the output structure (four nodes) and their contents, plus constraints. It lacks details on error handling or exact return format, but overall is complete for a retrieval tool.

Parameters: 3/5

Schema description coverage is 100%, with each parameter already having a clear description in the schema. The description adds context by tying the four parameters together as a path and listing them, but does not provide significant additional meaning beyond the schema.

Purpose: 5/5

The description clearly states it retrieves the full Schema.org JSON-LD @graph for a registered entity's Entia Home page. It lists the four specific nodes returned, which aligns with the required path parameters. It also distinguishes from the sibling `search_entities` tool by advising to use that if the exact path is unknown.

Usage Guidelines: 5/5

The description explicitly tells when to use this tool: when you have the exact path (country, sector, city, slug). It provides guidance on when not to use it: if the path is unknown, use `search_entities` first. It also notes that only ~500K published pages exist, setting correct expectations.

get_platform_stats (A)

Get real-time ENTIA platform statistics: total registered entities, country coverage, active data sources, and published Entia Homes. Note: total_entities is the full registry; only ~79K pass the Quality Gate for full publication. Cached 1h server-side. No API key required.

Parameters (JSON Schema)
No parameters
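Since the server caches this zero-argument call for an hour, a client gains nothing by re-fetching sooner. A minimal client-side memoization sketch (helper names are illustrative; `fetch` stands in for whatever transport performs the actual tools/call):

```python
import time

_CACHE: dict = {}
CACHE_TTL = 3600  # server caches stats for 1h, so earlier re-fetches are wasted work

def get_platform_stats(fetch, now=time.time):
    """Memoize the stats call for the server's cache window.

    `fetch` is a zero-argument callable returning the decoded stats payload.
    """
    entry = _CACHE.get("stats")
    if entry and now() - entry[0] < CACHE_TTL:
        return entry[1]  # still fresh: reuse the cached payload
    result = fetch()
    _CACHE["stats"] = (now(), result)
    return result
```

Injecting `now` makes the TTL logic testable without waiting an hour; that is a design choice of this sketch, not of the server.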

Behavior: 5/5

No annotations provided, but the description fully covers behavioral traits: 1-hour server-side caching, no API key required, and clarification that total_entities includes unqualified entries. No contradictions.

Conciseness: 5/5

Two clear, front-loaded sentences with no superfluous information. Every sentence adds value.

Completeness: 5/5

Given zero parameters and no output schema, the description adequately covers the tool's purpose, caching, authentication, and data interpretation. Complete for a straightforward stats tool.

Parameters: 4/5

No parameters exist (empty input schema, 100% coverage baseline). The description adds meaning by listing the specific statistics returned, which compensates for the absence of params.

Purpose: 5/5

The description clearly states the tool retrieves ENTIA platform statistics and lists specific metrics (total entities, country coverage, etc.). It distinguishes from sibling tools like entity_lookup and search_entities, which focus on individual lookups.

Usage Guidelines: 3/5

The description explains the tool provides real-time stats (with cache) but does not explicitly state when to use it versus siblings or offer exclusions. Usage context is implied but never made explicit.

lookup_by_domain (A)

Identify the business entity associated with a website domain. STATUS: Coming in v1.1 — currently returns 501. Workaround: use entity_lookup with company name, or search_entities with the domain.

Parameters (JSON Schema)
domain (required): Domain name to look up (e.g. "example.com" or "www.example.com"). The API normalizes the domain automatically.
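Until v1.1 ships, a client that receives the 501 can fall back on the documented workaround. The sketch below routes a domain through search_entities instead; the normalization and the `resolve_domain` helper name are assumptions of this sketch (the API is said to normalize server-side anyway).

```python
def resolve_domain(domain: str) -> dict:
    """lookup_by_domain currently returns 501, so apply the documented
    workaround: normalize the domain and pass it to search_entities."""
    host = domain.lower().removeprefix("http://").removeprefix("https://")
    host = host.removeprefix("www.").rstrip("/")
    return {"name": "search_entities", "arguments": {"q": host}}

print(resolve_domain("https://www.example.com/"))
```

Once lookup_by_domain becomes functional, swapping `"search_entities"` back to `"lookup_by_domain"` is the only change needed.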
Behavior: 4/5

Since no annotations are provided, the description carries the full burden of behavioral disclosure. It reveals the tool is not yet functional (returns 501) and automatically normalizes domains. It does not describe the expected return value when functional, but given the current status, this is adequate.

Conciseness: 5/5

The description is two sentences: the first states the core purpose, the second conveys status and workarounds. No redundant or extraneous content; every sentence adds value.

Completeness: 4/5

Given the tool's single parameter, lack of output schema, and non-functional status, the description informs about purpose, state, and alternatives. It could mention the expected return type upon implementation, but is otherwise complete for current usage.

Parameters: 3/5

The input schema has 100% coverage, with the single parameter 'domain' described in detail (examples, normalization). The description adds context ('business entity') but does not provide additional parameter semantics beyond the schema, so the baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states the tool's purpose: 'Identify the business entity associated with a website domain.' It distinguishes itself from siblings like entity_lookup and search_entities by explicitly mentioning them as alternatives, showing differentiation.

Usage Guidelines: 5/5

The description explicitly notes that the tool is 'Coming in v1.1 — currently returns 501,' advising against current use. It provides concrete workarounds: 'use entity_lookup with company name, or search_entities with the domain.' This covers when-not-to-use and alternatives.

run_risk_audit (A)

Run an AI-readiness and digital risk audit on any domain. Checks SSL, DNS, structured data presence, and LLM visibility signals. Returns a risk score 0-100 (lower is better, >60 means action recommended) with specific gaps identified. Slow operation (up to 30s). API key required. Rate limit: 5/min.

Parameters (JSON Schema)
name (optional): Optional business name for context
domain (required): Domain to audit (e.g. "clinicadental.es", "example.com")
sector_id (optional): Optional sector hint for more accurate scoring. Examples: dental, legal, talleres, estetica, inmobiliarias
Behavior: 4/5

Since no annotations are provided, the description carries the full disclosure burden. It discloses that the operation is slow, requires authentication, and has rate limits. It also describes the output (risk score, gaps). However, it does not explicitly state that the operation is read-only and non-destructive, though the term 'audit' implies it. Missing idempotency details.

Conciseness: 5/5

The description is concise: four sentences with no fluff. It front-loads the core purpose, then details checks, then output interpretation, then operational warnings. Every sentence adds value.

Completeness: 5/5

For a tool with three parameters and no output schema, the description is highly complete. It explains what the tool checks, what the score means (with an actionable threshold), and operational constraints. It covers input, output, and behavior without leaving major gaps.

Parameters: 4/5

The input schema has 100% description coverage, so the schema already describes each parameter. The description adds value by providing examples for domain and sector_id, and by explaining that the audit checks SSL, DNS, etc., which gives context to the parameters. This extra context justifies a score above baseline.

Purpose: 5/5

The description clearly states the tool's function: running an AI-readiness and digital risk audit on a domain. It lists specific checks (SSL, DNS, structured data, LLM visibility) and output (risk score 0-100 with gaps). This distinguishes it from sibling tools like entity_lookup or search_entities, which are search/lookup tools.

Usage Guidelines: 4/5

The description provides important usage context: it is a slow operation (up to 30s), requires an API key, and has a rate limit of 5/min. This helps the agent decide when to call it. However, it does not explicitly mention when not to use it or compare it to siblings, leaving some ambiguity.

search_entities (A)

Search 5.5M+ registered entities across 34 countries by name, keyword, country, or sector. Coverage varies by country: ES ~900K enriched with full contact and socioeconomic data, GB/FR name+address only, GLEIF countries name+LEI only. Check data_coverage in results to understand what fields are populated. Use this to find entities before calling get_entia_home. API key required.

Parameters (JSON Schema)
q (required): Search query — company name or keywords
limit (optional): Max results (default 10, max 50)
sector (optional): Sector filter. Examples: dental, legal, talleres, estetica, inmobiliarias, hosteleria, reformas, veterinarios, asesorias, gimnasios, psicologia, and 24+ more
country (optional): ISO country code filter (e.g. "es", "gb", "fr")
Behavior: 4/5

With no annotations, the description carries the full burden. It discloses varying data coverage by country, the need to check data_coverage in results, and an API key requirement. Missing details like rate limits, pagination, or error handling, but the provided behavioral context is substantial.

Conciseness: 5/5

Four sentences efficiently cover purpose, coverage nuances, usage guidance, and authentication. No redundant or vague statements; every sentence adds value.

Completeness: 4/5

Given no output schema, the description mentions data_coverage but not other result fields. It provides good coverage of search scope and pre-usage guidance (use before get_entia_home). Missing an output structure hint, but the tool's primary purpose is clear enough for selection.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description restates parameters briefly but adds little beyond the schema descriptions (e.g., 'by name, keyword, country, or sector'). The country coverage nuance is helpful but not parameter-specific.

Purpose: 5/5

The description clearly states the tool's action: 'Search 5.5M+ registered entities across 34 countries by name, keyword, country, or sector.' It distinguishes itself from sibling tools by referencing get_entia_home and providing country-specific coverage details.

Usage Guidelines: 4/5

The description advises to 'Use this to find entities before calling get_entia_home,' providing a clear when-to-use context. However, it does not explicitly exclude other siblings like entity_lookup or lookup_by_domain, nor does it specify when not to use this tool.
