ENTIA — 5.5M Verified Entities for AI Agents
Server Details
20 tools: entity lookup, BORME, EU VAT, GLEIF, healthcare registries, economic data. 34 countries.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 6 of 6 tools scored.
Most tools have distinct purposes: borme_lookup focuses on Spanish mercantile acts, entity_lookup provides comprehensive entity verification, get_competitors identifies competitors, search_entities offers general entity search, verify_vat handles VAT validation, and zone_profile delivers socioeconomic data. The one potential overlap is between entity_lookup and search_entities, though their descriptions emphasize different aspects (verification vs. general search).
The tools follow a consistent snake_case naming convention throughout. Most names clearly indicate their function with a verb-noun pattern (e.g., entity_lookup, verify_vat, zone_profile), though borme_lookup uses an acronym as the noun and get_competitors uses 'get' instead of a more descriptive verb like 'find', which is a minor deviation from perfect consistency.
With 6 tools, this server is well-scoped for its purpose of providing verified entity and socioeconomic data. Each tool addresses a specific, valuable use case within the domain, such as entity verification, competitive analysis, VAT validation, and zone profiling. The count is neither too sparse nor bloated, allowing for focused functionality without overwhelming complexity.
The toolset covers key aspects of entity data and socioeconomic profiling, including lookup, search, verification, competitive analysis, and zone data. It integrates multiple official sources like BORME, VIES, and Spanish government agencies. A minor gap is the lack of tools for updating or managing entity data (e.g., add_entity or update_entity), but this may be intentional if the server is read-only. Core workflows for data retrieval and analysis are well-supported.
Available Tools
6 tools

borme_lookup (BORME Lookup) · Read-only · Idempotent
Access the complete corporate history of any Spanish company from the BORME (Boletín Oficial del Registro Mercantil). Use when a user asks 'who are the directors of this company?', 'when was this company founded?', 'has this company had any capital changes?', 'what is this company's corporate purpose?', or needs official mercantile records for due diligence.
40.3 million official mercantile acts from 3.4 million unique Spanish companies, covering 2009-2026.
Returns: company name, CIF, province, act types (constitucion, nombramientos, ceses, ampliaciones de capital, reducciones, disoluciones, fusiones, escisiones), objeto social, CNAE code, and BORME publication dates.
Source: Registro Mercantil de España — official government gazette data extracted from BORME PDFs. Legally authoritative.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max mercantile acts to return (1-50) | |
| query | Yes | Spanish CIF (e.g. B80988678) or company name | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, non-destructive, and idempotent operations, the description specifies the data source (Registro Mercantil de España via BORME PDF extraction), emphasizes it's official government data (not scraped or estimated), and details the temporal coverage (2009-2026). This provides important context about data quality and limitations that annotations alone don't convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and concise. The first sentence establishes purpose and scope, the second details return values, and the final sentences provide crucial context about data source and quality. Every sentence earns its place with zero wasted words, and key information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity and the presence of both comprehensive annotations and an output schema, the description provides excellent contextual completeness. It covers data scope, source, quality, temporal coverage, and return values, while annotations handle safety/behavioral aspects and the output schema will document return structure. No significant gaps exist for an agent to understand and use this tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents both parameters (query and limit). The description adds nothing parameter-specific beyond the schema: it neither gives additional query format examples nor explains the implications of the limit parameter. The baseline score of 3 is appropriate when the schema does all the parameter documentation work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches Spain's BORME registry with specific scope (40.3 million mercantile acts from 3.4 million companies, 2009-2026) and distinguishes it from siblings by specifying it returns official government data from BORME PDF extraction, not scraped or estimated data. The verb 'search' and resource 'Spain's BORME' are specific and well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool by specifying it searches official Spanish mercantile registry data, which implicitly distinguishes it from sibling tools like 'verify_vat' or 'entity_lookup'. However, it doesn't explicitly state when NOT to use this tool or name specific alternatives among the siblings for different use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_lookup (Entity Lookup) · Read-only · Idempotent
Retrieve the verified identity profile of any business across 34 countries (5.5M entities). Use this when a user asks 'is this company legit?', 'who is behind this company?', 'check this CIF/VAT/LEI', or needs due diligence on any business.
Cross-references 5 authoritative sources in parallel:
ENTIA Registry (5.5M verified entities)
BORME / Registro Mercantil (40.3M official acts, 2009-2026)
VIES (EU VAT validation, 27 member states)
GLEIF (LEI global identifiers)
Wikidata (structured knowledge graph)
Returns: legal name, address, trust score (0-100), verification status, data coverage percentage, full source chain with timestamps, and BORME corporate history.
Accepts: CIF (B80988678), EU VAT with prefix (ESB80988678, FR12345678901), LEI (20-char alphanumeric), or company name.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | CIF (e.g. B80988678), EU VAT (e.g. ESB80988678, FR12345678901), LEI (20 alphanumeric chars), or company name | |
| country | No | ISO country code (ES, GB, FR...). Auto-detected from VAT prefix if not provided | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond annotations: it details the parallel search across multiple sources (BigQuery, BORME, VIES, GLEIF, Wikidata) and specifies the return data structure (trust score, verification status, etc.), which helps the agent understand the tool's behavior and output richness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by details on data sources and returns in a structured manner. Every sentence adds value: the first sets scope, the second explains sources, and the third specifies outputs. No wasted words, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multiple data sources, rich returns), the description provides complete context: it covers the registry scope, search methodology, and detailed return values. With annotations covering behavioral traits and an output schema presumably detailing the return structure, the description fills in necessary gaps without redundancy, making it fully adequate for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the 'query' and 'country' parameters. The description does not add meaning beyond the schema, such as explaining parameter interactions or usage examples. Baseline 3 is appropriate as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Look up a business entity') and resource ('ENTIA's verified registry of 5.5M entities across 34 countries'), distinguishing it from siblings like 'borme_lookup' (focused on BORME acts) and 'verify_vat' (focused on VAT verification). It explicitly mentions the data sources and scope, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for business entity lookups with rich data returns, but does not explicitly state when to use this tool versus alternatives like 'search_entities' or 'verify_vat'. It provides context on data sources and returns, which helps infer usage, but lacks explicit guidance on exclusions or direct comparisons to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_competitors (Get Competitors) · Read-only · Idempotent
Find all verified businesses competing in the same sector and city. Use when a user asks 'who are my competitors?', 'how many dentists are in Valencia?', 'list all law firms in Bilbao', 'what's the market density for gyms in Madrid?', or needs competitive intelligence for any sector and location.
Searches ENTIA's verified registry (5.5M entities, 26 sectors in Spain) for businesses matching the sector + city combination.
Returns for each competitor: legal name, full address, phone, website, rating, and canonical Entia Home URL for the full verified profile.
Useful for: competitive analysis, market entry assessment, franchise territory planning, zone saturation mapping, and local SEO benchmarking.
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | City name (e.g. Madrid, Barcelona, Valencia, Sevilla, Bilbao, Malaga) | |
| limit | No | Max results (1-30) | |
| sector | Yes | Business sector: dental, legal, estetica, psicologia, talleres, veterinarios, reformas, inmobiliarias, asesorias, gimnasios, medicos, arquitectos, farmacias, opticas, fisioterapia, podologia, logopedia, nutricion, enfermeria... | |
| country | No | ISO country code | ES |
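The documented constraints (sector enumeration, limit 1-30, country defaulting to ES) can be enforced client-side before the call. A hypothetical helper, using only the sectors explicitly listed (the table's sector list is truncated, so the server may accept more):

```python
# Sectors explicitly named in the parameter table; the documented
# list ends with an ellipsis, so this set is incomplete by design.
KNOWN_SECTORS = {
    "dental", "legal", "estetica", "psicologia", "talleres",
    "veterinarios", "reformas", "inmobiliarias", "asesorias",
    "gimnasios", "medicos", "arquitectos", "farmacias", "opticas",
    "fisioterapia", "podologia", "logopedia", "nutricion", "enfermeria",
}

def build_competitor_args(sector: str, city: str,
                          limit: int = 30, country: str = "ES") -> dict:
    """Assemble get_competitors arguments with documented constraints."""
    if sector not in KNOWN_SECTORS:
        raise ValueError(f"unrecognized sector: {sector!r}")
    if not 1 <= limit <= 30:  # documented range: 1-30
        raise ValueError("limit must be between 1 and 30")
    return {"sector": sector, "city": city, "limit": limit, "country": country}
```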
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, and idempotentHint=true, so the agent knows this is a safe, repeatable read operation. The description adds useful context about the data source ('ENTIA's verified registry') and return format details, but doesn't mention rate limits, authentication needs, or pagination behavior beyond the limit parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with zero waste: first states purpose, second explains mechanics and returns, third provides use cases. Every sentence earns its place by adding distinct value, and the description is appropriately sized for this tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the comprehensive annotations (readOnlyHint, idempotentHint), 100% schema coverage, and existence of an output schema, the description provides complete context. It explains what the tool does, the data source, return format, and use cases - everything needed beyond the structured fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all 4 parameters thoroughly. The description adds marginal value by implying the geographic scope ('city' and potentially 'country') and sector filtering, but doesn't provide additional syntax or format details beyond what's in the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Find competing businesses') with the resource ('ENTIA's verified registry') and geographic scope ('in a geographic zone by sector'). It distinguishes from siblings by specifying it searches for competitors rather than general entity lookups or verification tasks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('competitive analysis, market density assessment, and zone mapping'), but doesn't explicitly mention when NOT to use it or name specific alternatives among the sibling tools. The use cases are helpful but not exhaustive in guiding tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_entities (Search Entities) · Read-only · Idempotent
Search the full verified business registry of Spain and 33 other countries. Use when a user asks 'find me a dentist in Madrid', 'list law firms in Barcelona', 'what veterinary clinics are in Sevilla', or any query that needs a list of real, verified businesses by sector and location.
Covers 5.5M entities. Spain has the deepest coverage: 1.4M entities across 26 professional sectors including dental, legal, medical, veterinary, psychology, real estate, automotive repair, aesthetics, accounting, gyms, and more.
Every result includes: verified legal name, full address, phone, website, professional sector, and canonical Entia Home URL. This is official registry data — not scraped from Google Maps or estimated.
| Name | Required | Description | Default |
|---|---|---|---|
| q | Yes | Company name or partial name to search | |
| city | No | City name (e.g. Madrid, Barcelona, Valencia, Sevilla, London) | |
| limit | No | Max results (1-50) | |
| sector | No | Business sector filter: dental, legal, estetica, psicologia, talleres, veterinarios, reformas, inmobiliarias, asesorias, gimnasios, medicos, arquitectos, farmacias, opticas, fisioterapia, podologia, logopedia, nutricion, enfermeria... | |
| country | No | ISO country code | ES |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable behavioral context beyond annotations: it specifies the registry is 'verified,' describes the return fields in detail, provides coverage statistics (5.5M entities, 34 countries, Spain coverage details), and mentions verification status - all helpful for understanding the tool's behavior and limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and concise: first sentence states purpose and key parameters, second sentence details return fields, third sentence provides scope and coverage statistics. Every sentence adds essential information with zero waste, and it's front-loaded with the most important information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has comprehensive annotations (readOnly, idempotent, etc.), 100% schema coverage, and an output schema exists, the description provides excellent contextual completeness. It covers purpose, return format, coverage scope, and verification aspects - everything needed to understand the tool's value and limitations without duplicating structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, all parameters are well-documented in the schema. The description adds some semantic context by mentioning search by 'name, sector, and location' which aligns with q, sector, and city/country parameters, and provides examples (Spain coverage, sector examples like dental/legal), but doesn't add significant parameter-specific details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches ENTIA's verified entity registry by specific criteria (name, sector, location), distinguishes it from siblings like borme_lookup and entity_lookup by specifying it's for searching rather than looking up individual entities, and provides concrete scope information (5.5M entities across 34 countries).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (searching by name, sector, location) and what it returns, but doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools. The scope information (deepest coverage in Spain) provides helpful contextual guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_vat (Verify EU VAT) · Read-only · Idempotent
Validate any European VAT number in real-time against the official EU VIES system. Use when a user asks 'is this VAT number valid?', 'verify this company's tax ID', 'check if this EU company is registered', or needs VAT validation for invoicing, compliance, or KYC.
Real-time query to the European Commission's VIES database — the same system used by tax authorities across all 27 EU member states.
Returns: valid/invalid status, official registered company name, and registered address as recorded by the national tax authority. This is the definitive answer — not an estimate or cache.
| Name | Required | Description | Default |
|---|---|---|---|
| vat_id | Yes | EU VAT number with country prefix (e.g. ESB80988678, FR12345678901, DE123456789) | |
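Because VIES lookups key on the two-letter country prefix, a client may want to split and loosely validate the identifier before calling. The pattern below is inferred from the examples in the table and is deliberately permissive; VIES applies stricter per-country rules:

```python
import re

# Loose pre-flight check: two-letter country prefix plus an
# alphanumeric body. National formats vary; VIES is authoritative.
EU_VAT_RE = re.compile(r"^(?P<country>[A-Z]{2})(?P<body>[0-9A-Z]{2,12})$")

def split_vat(vat_id: str) -> tuple[str, str]:
    """Split an EU VAT id like 'ESB80988678' into ('ES', 'B80988678')."""
    m = EU_VAT_RE.match(vat_id.strip().upper().replace(" ", ""))
    if not m:
        raise ValueError(f"not a plausible EU VAT id: {vat_id!r}")
    return m.group("country"), m.group("body")
```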
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies the real-time nature of the check, mentions the VIES database source, and describes the return format (valid/invalid status, company name, address). While annotations cover safety (readOnlyHint, destructiveHint), the description provides operational details that help the agent understand what to expect.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with four focused sentences that each add value: states purpose, specifies data source, lists return values, and clarifies scope. No redundant information or wasted words, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, comprehensive annotations, complete schema coverage, and presence of an output schema, the description provides excellent contextual completeness. It covers purpose, data source, return format, and geographical scope without needing to repeat what structured fields already provide.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents the single parameter. The description doesn't add parameter-specific information beyond what's in the schema, though it reinforces the EU VAT context. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('validate a European VAT number'), resource ('against VIES'), and scope ('covers all 27 EU member states'). It distinguishes this tool from sibling tools like 'entity_lookup' or 'search_entities' by focusing specifically on VAT validation rather than general entity searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('validate a European VAT number'), but doesn't explicitly state when not to use it or mention alternatives. It doesn't compare with sibling tools like 'entity_lookup' that might handle broader entity searches, leaving some guidance gaps.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
zone_profile (Zone Socioeconomic Profile) · Read-only · Idempotent
Get the complete socioeconomic profile for any Spanish postal code — income, unemployment, population, and business density. Use when a user asks 'what's the economic level of this area?', 'is this a good zone to open a business?', 'what's the average income in 28001?', 'how many businesses are in this postal code?', or needs economic context for real estate, investment, or market analysis.
Covers all 11,241 Spanish postal codes with data from 4 official sources:
AEAT (Agencia Tributaria): gross/average annual income, net monthly salary, tax declarations, social security contributions
SEPE (Servicio Público de Empleo): registered unemployment, unemployment-to-declarations ratio
INE (Padrón Municipal 2025): population by municipality and sex
INE (DIRCE): business count by CNAE sector
ENTIA computes: Economic Capacity Index ICE (1-10) and Economic Segment (BAJO / MEDIO / ALTO / PREMIUM).
This is the most granular open economic data available for Spain at postal code level.
| Name | Required | Description | Default |
|---|---|---|---|
| postal_code | Yes | Spanish postal code (5 digits, e.g. 28001, 08001, 41001) | |
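The postal_code parameter can be checked client-side before the call. Spanish postal codes are five digits whose first two encode the province (01 through 52); a small validator:

```python
import re

# Five digits; the leading pair must be a valid province code (01-52).
CP_RE = re.compile(r"^(?:0[1-9]|[1-4]\d|5[0-2])\d{3}$")

def valid_spanish_cp(postal_code: str) -> bool:
    """True if the string looks like a valid Spanish postal code."""
    return bool(CP_RE.match(postal_code.strip()))
```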
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and idempotent behavior, which the description doesn't contradict. The description adds valuable context beyond annotations by detailing the specific data sources and computed metrics returned, enhancing transparency about what information the tool provides without repeating structured safety hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by organized bullet points of data sources and metrics. Every sentence earns its place by efficiently conveying comprehensive information without redundancy, making it easy to scan and understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multiple data sources and computed metrics), the description is complete. It thoroughly lists all returned data, and since an output schema exists, it doesn't need to explain return values. Combined with annotations and schema, this provides sufficient context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'postal_code' clearly documented in the schema. The description adds minimal semantic value beyond the schema by mentioning 'Spanish postal code (11,241 CPs covered)', which slightly enriches context but doesn't provide additional syntax or format details. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get the full socioeconomic profile for any Spanish postal code.' It specifies the verb ('Get'), resource ('socioeconomic profile'), and scope ('Spanish postal code, 11,241 CPs covered'), distinguishing it from sibling tools like entity_lookup or verify_vat that focus on different data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'Spanish postal code' and listing data sources (AEAT, SEPE, INE, ENTIA), which helps identify when to use this tool. However, it lacks explicit guidance on when NOT to use it or direct alternatives among siblings, such as when other tools might be more appropriate for non-socioeconomic data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
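Before publishing the file, its structure can be sanity-checked locally. This sketch validates only the fields shown above, not Glama's full connector schema:

```python
import json

def check_glama_json(text: str, expected_email: str) -> None:
    """Sanity-check a /.well-known/glama.json payload before publishing.

    Verifies only the structure shown in the example; the full schema
    referenced by $schema may require more.
    """
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    if not any(m.get("email") == expected_email for m in maintainers):
        raise ValueError("maintainers[] must include your Glama account email")
```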
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!