Catalunya 2022
Server Details
View the Catalunya 2022 strategic plan: 3 spheres, 12 goals, 91 actions. Trilingual CA/EN/ES.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose: get_document_metadata retrieves the document structure, get_section fetches specific content by slug, list_proposals enumerates actions with filtering, and search_document performs keyword searches. There is no overlap in functionality, making tool selection straightforward for an agent.
All tool names follow a consistent verb_noun pattern (get_document_metadata, get_section, list_proposals, search_document) using snake_case throughout. The naming convention is predictable and readable, with no deviations in style or structure.
With 4 tools, the server is well-scoped for its purpose of navigating and querying a policy document. Each tool serves a unique and essential function, covering metadata retrieval, content access, listing, and searching without being overly sparse or bloated.
The tool set provides complete coverage for exploring the policy document: get_document_metadata gives the overall structure, get_section allows detailed content access, list_proposals enables filtered browsing, and search_document supports keyword queries. There are no obvious gaps for the intended domain of document navigation and information retrieval.
Available Tools
4 tools

get_document_metadata (Read-only, Idempotent)
Get the complete structure of the Catalunya 2022: RESET policy document — 3 spheres, 12 goals, 91 actions created by a 30-expert Catalonia Task Force. Returns the hierarchy with canonical slugs for navigation via get_section.
| Name | Required | Description | Default |
|---|---|---|---|
| locale | No | Content language. Defaults to 'ca' (Catalan), the original language of the document. (ca=Catalan, en=English, es=Spanish) | ca |
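Tools on this server are invoked through the MCP `tools/call` JSON-RPC method that the Streamable HTTP transport carries. A minimal sketch of building such a request follows; the endpoint URL is a hypothetical placeholder (the listing does not publish the server's URL), and a real client would also perform the MCP initialization handshake first.

```python
# Sketch: build an MCP 'tools/call' JSON-RPC 2.0 request for this server's
# get_document_metadata tool. SERVER_URL is a hypothetical placeholder.
SERVER_URL = "https://example.org/mcp"

def make_tools_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Return a JSON-RPC 2.0 request body for the MCP 'tools/call' method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Request the document hierarchy in English; omitting 'locale' falls back
# to the server-side default of 'ca'.
req = make_tools_call("get_document_metadata", {"locale": "en"})
```

A client would POST this body to the server endpoint and read the tool result from the JSON-RPC response.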
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide: it specifies the exact document being retrieved (Catalunya 2022: RESET policy document) and describes the return format (hierarchy with canonical slugs). While annotations already declare this as read-only, non-destructive, idempotent, and closed-world, the description provides concrete details about what data is returned and how it's structured.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and front-loaded: a single sentence efficiently conveys the tool's purpose, the specific document it retrieves, the return format, and the relationship to a sibling tool. Every element serves a clear purpose with zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simple nature (single optional parameter, read-only operation with comprehensive annotations), the description provides complete context. It explains what document is retrieved, what structure is returned, and how the output relates to other tools. The lack of output schema is compensated by the clear description of the return format (hierarchy with canonical slugs).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents the single 'locale' parameter with its enum values and default. The description doesn't add any parameter-specific information beyond what's in the schema, but the baseline score of 3 is appropriate when the schema provides complete parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get the complete structure') and identifies the exact resource ('Catalunya 2022: RESET policy document'), including detailed characteristics (3 spheres, 12 goals, 91 actions, created by 30-expert Catalonia Task Force). It explicitly distinguishes this tool from its sibling 'get_section' by mentioning canonical slugs for navigation to that tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: it specifies that this tool returns the complete document hierarchy, and that the canonical slugs it provides are intended for navigation via 'get_section' (a named sibling tool). This clearly establishes the relationship between this tool and its sibling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_section (Read-only, Idempotent)
Retrieve the full text of any section of the Catalunya 2022 document by its canonical slug. Slugs follow the pattern: 'sphere-1', 'sphere-1/goal-2', 'sphere-1/goal-2/action-2-1'. Static pages: 'introduction', 'executive-summary', 'train-of-prosperity'. Use get_document_metadata to discover all available slugs.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Canonical section slug | |
| locale | No | Content language. Defaults to 'ca' (Catalan), the original language of the document. (ca=Catalan, en=English, es=Spanish) | ca |
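The slug grammar in the description can be captured in a small client-side validator. The pattern below is inferred from the documented examples only; the exact numeric bounds (3 spheres, up to 12 goals, two-part action numbers) are assumptions based on the document structure described elsewhere on this page.

```python
import re

# Slug pattern inferred from the documented examples:
#   'sphere-1', 'sphere-1/goal-2', 'sphere-1/goal-2/action-2-1'
# plus the three static pages. Numeric bounds are assumptions.
SLUG_RE = re.compile(
    r"^(?:introduction|executive-summary|train-of-prosperity"
    r"|sphere-[1-3](?:/goal-\d{1,2}(?:/action-\d{1,2}-\d{1,2})?)?)$"
)

def is_valid_slug(slug: str) -> bool:
    """Cheap pre-check before calling get_section with a slug."""
    return bool(SLUG_RE.match(slug))
```

Validating locally avoids a round trip for obviously malformed slugs; the authoritative slug list still comes from get_document_metadata.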
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds context about slug patterns and static pages, which is useful beyond annotations, but doesn't mention rate limits or authentication needs. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by slug patterns and a clear alternative tool mention. Every sentence adds value without waste, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity, rich annotations, and full schema coverage, the description is mostly complete. It lacks output schema details, but the purpose and usage are well-covered. A minor gap exists in not explicitly stating return format, but it's adequate for this context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters. The description adds meaning by explaining slug patterns and static page examples, but this doesn't significantly enhance the schema's details. Baseline 3 is appropriate as the schema carries the burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Retrieve the full text') and resource ('any section of the Catalunya 2022 document'), specifying it's by canonical slug. It distinguishes from sibling get_document_metadata by mentioning that tool for discovering slugs, making the purpose specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states when to use this tool (by canonical slug) and when to use an alternative (get_document_metadata to discover slugs). It also lists static page slugs as examples, providing clear context for usage versus other siblings like list_proposals or search_document.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_proposals (Read-only, Idempotent)
List all 91 action proposals from the Catalunya 2022 document, optionally filtered by sphere (1-3) or goal (1-12). Returns actionId, goalId, sphereId, title, slug, and url. Use get_section with the returned slug to read full action content.
| Name | Required | Description | Default |
|---|---|---|---|
| goalId | No | Filter by goal (1-12) | |
| locale | No | Content language. Defaults to 'ca' (Catalan), the original language of the document. (ca=Catalan, en=English, es=Spanish) | ca |
| sphereId | No | Filter by sphere (1-3) | |
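The filter semantics described here (each filter is optional; supplied filters are combined) can be mirrored client-side. This is an illustrative sketch over sample records shaped like the documented return fields, not the server's actual implementation.

```python
# Sketch: client-side mirror of list_proposals' optional filters.
# A proposal is kept only when every supplied filter matches; None means
# "no filter on this field". Sample records use the documented fields.
def filter_proposals(proposals, sphere_id=None, goal_id=None):
    return [
        p for p in proposals
        if (sphere_id is None or p["sphereId"] == sphere_id)
        and (goal_id is None or p["goalId"] == goal_id)
    ]

sample = [
    {"actionId": "1-1", "goalId": 1, "sphereId": 1,
     "title": "...", "slug": "sphere-1/goal-1/action-1-1"},
    {"actionId": "5-2", "goalId": 5, "sphereId": 2,
     "title": "...", "slug": "sphere-2/goal-5/action-5-2"},
]

only_sphere_1 = filter_proposals(sample, sphere_id=1)
```

Passing neither filter returns all 91 actions; passing both narrows to a single goal within a sphere.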
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as a safe, read-only, idempotent operation (readOnlyHint=true, destructiveHint=false, idempotentHint=true). The description adds valuable context beyond annotations by specifying the exact number of items (91), the return fields (actionId, goalId, sphereId, title, slug, url), and the relationship with get_section. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states purpose and filtering options, the second explains return values and relationship with another tool. Every sentence adds essential information with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (list operation with full schema coverage and comprehensive annotations), the description is complete. It covers purpose, usage, return fields, and tool relationships. No output schema exists, but the description adequately specifies return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters. The description adds minimal value beyond the schema by mentioning optional filtering by sphere or goal, but doesn't provide additional syntax or format details. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('all 91 action proposals from the Catalunya 2022 document'), specifying the exact dataset scope. It distinguishes from sibling tools by mentioning get_section for full content, implying this tool provides a summary list rather than detailed content or metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('List all 91 action proposals') and when to use an alternative ('Use get_section with the returned slug to read full action content'), providing clear guidance on tool selection. It also mentions optional filtering parameters, indicating usage contexts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_document (Read-only, Idempotent)
Search the Catalunya 2022: RESET policy document by keyword. Returns up to 10 results with canonical slugs (for follow-up with get_section) and text snippets. Handles Catalan/Spanish diacritics automatically (e.g., 'educacio' matches 'educació').
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query (e.g., 'housing', 'educacio', 'digital transformation'). Use terms in the target locale for best results. | |
| scope | No | Filter by section type: 'action' (91 proposals), 'goal' (12 overviews), 'sphere' (3 overviews), or 'static' (introduction, executive summary, train of prosperity) | |
| locale | No | Content language. Defaults to 'ca' (Catalan), the original language of the document. (ca=Catalan, en=English, es=Spanish) | ca |
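The diacritic-insensitive matching described above ('educacio' matches 'educació') is commonly implemented by Unicode-decomposing text and dropping combining marks. The server's actual implementation is unspecified; the sketch below shows one standard approach using Python's stdlib.

```python
import unicodedata

def fold_diacritics(text: str) -> str:
    """Lowercase, decompose (NFD), and strip combining marks so that
    'educació' and 'educacio' compare equal."""
    decomposed = unicodedata.normalize("NFD", text.lower())
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))
```

With this folding applied to both the query and the indexed text, accent-free queries still hit accented Catalan and Spanish terms.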
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies the result limit (up to 10 results), describes the return format (canonical slugs and text snippets), and explains diacritic handling. Annotations cover read-only, non-destructive, and idempotent properties, but the description complements them with practical implementation details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states purpose and key behaviors, the second adds important technical details. Every element (result limit, slug usage, diacritic handling) serves a clear purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with comprehensive annotations and schema coverage, the description provides good context about behavior and output format. The main gap is the lack of an output schema, but the description partially compensates by describing the return structure. It could be more complete with explicit error handling or pagination details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description doesn't add significant parameter-specific semantics beyond what's in the schema, so it meets the baseline of 3 for adequate but not enhanced parameter explanation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search'), target resource ('Catalunya 2022: RESET policy document'), and method ('by keyword'). It distinguishes from siblings by mentioning canonical slugs for follow-up with get_section, differentiating it from get_document_metadata (metadata retrieval) and list_proposals (listing without search).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (keyword search of a specific document) and implicitly suggests alternatives by mentioning get_section for follow-up. However, it doesn't explicitly state when NOT to use it or directly compare with sibling tools like list_proposals for broader listing needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
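Before publishing, the claim file can be sanity-checked locally. The helper below is a hypothetical pre-flight check mirroring the documented requirement that a maintainer email match the Glama account; it is not part of Glama's verification process.

```python
# Sketch: local pre-flight check for a /.well-known/glama.json claim file.
# Mirrors the documented requirement that the maintainers list contain
# the email of the Glama account; purely illustrative.
def is_valid_claim(doc: dict, account_email: str) -> bool:
    maintainers = doc.get("maintainers", [])
    return any(m.get("email") == account_email for m in maintainers)

claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}
```

Serving this file at `https://<your-domain>/.well-known/glama.json` lets Glama detect and verify ownership automatically.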
Claiming the connector lets you:

- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:

- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.