Glama

Server Details

Connect LLMs with 2,500+ open data datasets from Catalonia (Socrata, CKAN, REST).

Status: Healthy
Transport: Streamable HTTP
Repository: xaviviro/Opendata.cat-MCP-Server
GitHub Stars: 4

Tool Descriptions: A

Average 4.4/5 across 7 of 7 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a distinct purpose with clear boundaries: get_dataset_info retrieves metadata, list_categories enumerates categories, list_dataset_fields shows fields, list_portals lists portals, query_dataset executes queries, related_datasets finds related datasets, and search_datasets performs searches. No overlap or ambiguity exists between these functions.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in snake_case (e.g., get_dataset_info, list_categories, query_dataset). The naming is uniform and predictable across all seven tools.

Tool Count: 5/5

With 7 tools, this server is well-scoped for its purpose of accessing open data catalogs. The count is appropriate, covering discovery, metadata, querying, and relationships without being overwhelming or insufficient.

Completeness: 5/5

The tool surface provides complete coverage for the domain: list_portals and list_categories for discovery, search_datasets for finding datasets, get_dataset_info and list_dataset_fields for metadata, query_dataset for data retrieval, and related_datasets for exploring connections. No obvious gaps exist in the workflow.

Available Tools

7 tools
get_dataset_info: A
Read-only · Idempotent

Returns all metadata for a dataset: fields with type and description, API endpoint, license, available formats, and last update. Call it after search_datasets to get full details on a specific dataset.

Parameters (JSON Schema)
dataset_id (required): Unique dataset identifier in 'portal:id' format. Obtained from the dataset_id field returned by search_datasets. Examples: 'generalitat:gn9e-3qhr', 'barcelona:qualitat-de-laire', 'diba:municipis', 'aoc:pressupostos-2024'.
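The 'portal:id' identifier convention above is easy to validate client-side before calling the tool. A minimal sketch, assuming nothing beyond the documented format (the helper name is hypothetical, not part of the server):

```python
# Hypothetical client-side helper: split a 'portal:id' dataset identifier
# into its portal prefix and portal-local id before calling get_dataset_info.
def parse_dataset_id(dataset_id: str) -> tuple[str, str]:
    portal, sep, local_id = dataset_id.partition(":")
    if not sep or not portal or not local_id:
        raise ValueError(f"expected 'portal:id', got {dataset_id!r}")
    return portal, local_id

print(parse_dataset_id("generalitat:gn9e-3qhr"))  # ('generalitat', 'gn9e-3qhr')
print(parse_dataset_id("barcelona:qualitat-de-laire"))
```

Note that the local id itself may contain hyphens, so only the first colon is treated as the separator.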
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already provide comprehensive behavioral hints (readOnly, openWorld, idempotent, non-destructive), so the bar is lower. The description adds valuable context about what metadata is returned (fields with types, API endpoint, license, formats, last update) and the workflow relationship with search_datasets, which goes beyond what annotations provide. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and enumerates specific metadata returned, the second provides clear usage guidance. Every element serves a purpose with zero wasted words, making it highly scannable and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, read-only operation), rich annotations covering safety and behavior, and 100% schema coverage, the description is largely complete. It specifies what metadata is returned and establishes the workflow context. The main gap is the lack of output schema, but the description compensates by listing return elements. A 5 would require explicit mention of response format or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the single parameter 'dataset_id' with format examples. The description doesn't add any parameter-specific information beyond what's in the schema, but with complete schema coverage, the baseline score of 3 is appropriate as the schema carries the full burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('returns all metadata') and resource ('for a dataset'), listing concrete metadata elements like fields with types, API endpoint, license, formats, and last update. It explicitly distinguishes itself from the sibling tool search_datasets by stating it should be called after that tool for detailed information on a specific dataset.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('call it after search_datasets to get full details on a specific dataset'), clearly positioning it as a follow-up to a sibling tool. It establishes a workflow relationship and specifies the context (after search_datasets, for complete details on a specific dataset).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_categories: A
Read-only · Idempotent

Lists all available dataset categories and topics with per-portal counts. Ideal as a first step to discover what kinds of data exist before searching with search_datasets. Returns the total number of datasets, a per-portal count, and a list of categories with counts. Requires no parameters.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide: it specifies the return format ('returns the total number of datasets, per-portal counts, and a list of categories with counts') and states that no parameters are required. While annotations cover safety and idempotency, the description adds practical usage information about outputs and parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and front-loaded: three sentences that each serve a distinct purpose (what it does, when to use it, what it returns/requires). There is zero wasted language and the structure flows logically from purpose to usage to implementation details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter read-only tool with comprehensive annotations, the description provides excellent context: it explains the purpose, usage guidelines, and return format. The only minor gap is the lack of output schema, but the description compensates by describing the return values. It's nearly complete for this tool's complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema coverage, the baseline would be 4. The description explicitly states that no parameters are required, which reinforces the empty schema and adds clarity about the tool's parameterless nature.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('lists all available dataset categories and topics') and resource (dataset categories and topics), and explicitly distinguishes it from sibling tools by recommending it 'before searching with search_datasets'. It provides a complete picture of what the tool does beyond just the name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('ideal as a first step to discover what kinds of data exist before searching') and names a specific alternative ('search_datasets'). This provides clear guidance on the tool's intended context and when to choose it over siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_dataset_fields: A
Read-only · Idempotent

Lists a dataset's fields with their name, data type, and description. Call it before query_dataset to learn which fields and filters are available.

Parameters (JSON Schema)
dataset_id (required): Unique dataset identifier in 'portal:id' format. Obtained from the dataset_id field returned by search_datasets.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide comprehensive behavioral hints (readOnly, openWorld, idempotent, non-destructive). The description adds valuable context about the tool's role in the workflow (precursor to query_dataset) and what information it returns, which goes beyond what annotations convey. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences that are front-loaded with the core purpose and followed by specific usage guidance. Every word serves a clear purpose with zero redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with comprehensive annotations and full schema coverage, the description provides excellent workflow context and purpose clarification. The only minor gap is lack of output format details (no output schema exists), but the description does specify what information will be returned (name, data type, description).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents the single required parameter dataset_id. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('lists the fields'), resource ('of a dataset'), and output details ('name, data type, and description'). It explicitly distinguishes itself from the sibling tool query_dataset by explaining this should be called first to understand available fields and filters.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'call it before query_dataset to learn which fields and filters are available.' This tells the agent exactly when to use this tool (before query_dataset) and why (to understand available fields and filters), creating clear differentiation from sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_portals: A
Read-only · Idempotent

Lists the 4 indexed Catalan open data portals with each one's dataset count. Returns: Generalitat de Catalunya (Socrata), Ajuntament de Barcelona (CKAN), Diputació de Barcelona (REST/CIDO), and Consorci AOC (CKAN — includes the Tarragona, Girona, and Lleida provincial councils, town councils, and county councils). Requires no parameters.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only, open-world, idempotent, and non-destructive traits, but the description adds valuable context by specifying the exact four portals returned and their dataset counts, which goes beyond the annotations. It doesn't contradict annotations and provides operational details not captured in structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by specific details about the portals and a clear note on parameters. Every sentence adds essential information without redundancy, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema) and rich annotations, the description is mostly complete. It details the portals and their dataset counts, but since there's no output schema, it could benefit from more explicit information on the return format (e.g., structure of the list).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is 4. The description reinforces this by explicitly stating that no parameters are required, which adds clarity and confirms the absence of inputs, aligning perfectly with the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('lists the 4 Catalan open data portals') and resource (Catalan open data portals), distinguishing it from sibling tools like get_dataset_info or search_datasets, which focus on datasets rather than portals. It provides concrete details about the four portals being listed.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states that no parameters are required, which clarifies how to call the tool. However, it doesn't explicitly say when not to use it or name specific alternatives among the sibling tools for different scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_dataset: A
Read-only · Idempotent

Queries real data from a dataset. See the server instructions for featured dataset_ids. For municipal data, use filters: {"NOM_ENS": "Ajuntament de X"} with the aoc:ge-* datasets.

Parameters (JSON Schema)
limit (optional): Number of rows to return. Minimum 1, maximum 100. Default 20. Use offset to paginate.
offset (optional): Number of rows to skip for pagination. Default 0. Combine with limit to navigate large result sets.
search (optional): Free-text search within the dataset's data. Works with Socrata () and CKAN (q). For Diba and CIDO, use specific filters.
filters (optional): Key-value filters where the key is the field name and the value is the value to filter by. Examples: {"municipi": "Barcelona"}, {"any": "2024"}, {"institucioDesenvolupat": "Ajuntament de Tiana"}. Use list_dataset_fields to learn valid field names.
dataset_id (required): Unique dataset identifier in 'portal:id' format. Obtained from the dataset_id field returned by search_datasets.
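These parameters compose into a standard MCP tools/call request. A minimal sketch of building such a payload, assuming only the documented parameter names (the dataset id 'aoc:ge-pressupostos' is a made-up example, and the limit clamping reflects the documented 1–100 range):

```python
def build_query_dataset_call(dataset_id, filters=None, search=None, limit=20, offset=0):
    """Sketch: assemble a JSON-RPC 2.0 'tools/call' payload for query_dataset.

    limit is documented as 1-100 (default 20), so it is clamped defensively.
    """
    limit = max(1, min(100, limit))
    arguments = {"dataset_id": dataset_id, "limit": limit, "offset": offset}
    if filters:
        arguments["filters"] = filters  # e.g. {"NOM_ENS": "Ajuntament de Tiana"}
    if search:
        arguments["search"] = search
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "query_dataset", "arguments": arguments},
    }

# 'aoc:ge-pressupostos' is a hypothetical dataset id for illustration.
payload = build_query_dataset_call(
    "aoc:ge-pressupostos",
    filters={"NOM_ENS": "Ajuntament de Tiana"},
    limit=250,  # out of range: clamped to 100
)
print(payload["params"]["arguments"]["limit"])  # 100
```

Clamping client-side avoids a round trip when an agent requests more rows than the server allows; paginate with offset instead.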
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies that queries go directly to source portals (Socrata SoQL, CKAN datastore, etc.), supports free-text search and pagination, and mentions portal-specific behavior. While annotations cover safety (readOnlyHint: true, destructiveHint: false), the description enriches understanding of how the tool interacts with different data sources.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: states purpose, lists capabilities (filters, search, pagination), and provides usage guidance. Every sentence adds value without redundancy, making it front-loaded and concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, nested objects, no output schema) and rich annotations, the description is mostly complete. It covers key behavioral aspects and usage guidance. A slight gap exists in not explicitly describing the return format (though implied as 'files de dades'), but annotations help mitigate this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description mentions support for field filters, free-text search, and pagination, which aligns with the schema but doesn't add significant new semantic information beyond what's already in parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it executes a query against a dataset and returns rows of real data from the source portal. It specifies the action (query), resource (dataset), and outcome (return data rows), distinguishing it from siblings like get_dataset_info (metadata) or list_dataset_fields (field listing).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: call list_dataset_fields first to learn the filterable fields. It also mentions alternative approaches for specific portals (e.g., for Diba and CIDO, use specific filters instead of search), helping differentiate it from other query-related tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_datasets: A
Read-only · Idempotent

Searches datasets by free text. Check the server instructions first: many datasets can be queried directly with query_dataset without searching. Use search_datasets only when you don't know which dataset you need.

Parameters (JSON Schema)
limit (optional): Maximum number of results to return. Minimum 1, maximum 100. Default 20.
query (required): Search text in Catalan or Spanish. Examples: 'qualitat aire', 'pressupostos municipals', 'transport públic', 'residus', 'educació'.
portal (optional): Restricts results to a single portal.
category (optional): Filters by thematic category. Examples: 'Medi Ambient', 'Educació', 'Salut', 'Economia', 'Territori', 'Seguretat'. Use list_categories to see all available categories.
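The usage guidance below recommends multiple searches with different terms to cover a broad topic. A hedged sketch of that pattern, de-duplicating hits by dataset_id (the search function, the term 'bombers', and all dataset ids here are stand-ins, not real server data):

```python
# Sketch: cover a broad topic by issuing several search_datasets calls with
# different terms, then de-duplicating results by dataset_id.
def search_broad_topic(search_fn, terms, category=None, limit=20):
    seen, results = set(), []
    for term in terms:
        args = {"query": term, "limit": max(1, min(100, limit))}  # documented range 1-100
        if category:
            args["category"] = category
        for hit in search_fn(args):
            if hit["dataset_id"] not in seen:
                seen.add(hit["dataset_id"])
                results.append(hit)
    return results

# Fake search_fn for demonstration; a real client would call the MCP server.
def fake_search(args):
    data = {
        "emergències": [{"dataset_id": "generalitat:a1"}, {"dataset_id": "barcelona:b1"}],
        "bombers": [{"dataset_id": "generalitat:a1"}, {"dataset_id": "aoc:c1"}],
    }
    return data.get(args["query"], [])

hits = search_broad_topic(fake_search, ["emergències", "bombers"])
print([h["dataset_id"] for h in hits])  # ['generalitat:a1', 'barcelona:b1', 'aoc:c1']
```

De-duplication matters because the same dataset often matches several related terms.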
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations: it covers 7 portals, includes Catalan and Spanish synonyms, and emphasizes the need for multiple searches with different terms for broad topics. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: purpose statement, important usage instruction with example, and behavioral detail about language coverage. Every sentence adds value with zero waste, and key guidance is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with rich annotations (readOnlyHint, openWorldHint, idempotentHint) and full schema coverage, the description provides excellent contextual completeness. It explains the multi-portal scope, language support, and strategic usage advice. The only minor gap is lack of output format details, but with no output schema, this is acceptable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds minimal parameter semantics beyond the schema, mentioning text search in Catalan/Spanish and covering 7 portals, but doesn't provide additional syntax or format details. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: searching Catalan open data datasets by free text. It specifies the verb ('search') and resource (Catalan open data datasets), distinguishing it from siblings like get_dataset_info (retrieve a specific dataset) or list_categories (list categories).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided on when and how to use this tool: do multiple searches with different terms to cover a broad topic, with a concrete example for 'emergències'. It also references sibling tools like list_categories for category filtering.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
