Cup24.it Availability
Server Details
Search private healthcare appointment availability across Cup24.it: services, doctors, clinics.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose: city lookup, service suggestion, clinic map discovery, and detailed availability search. No overlap causes confusion.
Naming is mixed: two tools use a 'search_' prefix, one uses 'smart_suggest_', and 'advanced_search_availability' adds an 'advanced_' qualifier. All are snake_case, but the prefix pattern is not uniform.
Four tools are appropriate for a booking availability service, covering the core workflow with neither bloat nor missing steps.
The set covers city lookup, service suggestion, clinic discovery, and slot search. Direct booking is missing (only a booking URL is returned), but that is likely out of scope. Overall, there are no critical gaps.
Available Tools
4 tools

advanced_search_availability (A, Read-only)
[DETAIL TOOL] Search appointment slots/cards. Use directly when clinic is known, or after search_available_clinics_map selection. Params: city (required), medical_service, doctor, clinic, start_day (YYYY-MM-DD). If no results with doctor/clinic filters, automatically retries without them. Returns slots with booking_url for checkout (supports pre-filled params: &name=&last_name=&email=&phone=&tax_code=).
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | City name (full or partial, e.g., 'Firenze', 'Milan') | |
| clinic | No | Clinic/facility name (full or partial) | |
| doctor | No | Doctor name (full or partial, e.g., 'Mario Rossi') | |
| detailed | No | Return full slot details (default: false, returns summary) | |
| start_day | No | Starting date for availability search (YYYY-MM-DD, default: today) | |
| medical_service | No | Service name or symptom description (full or partial, e.g., 'Rx Braccio', 'blood test', 'back pain') | |
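The booking_url pre-fill parameters the description documents (&name=&last_name=&email=&phone=&tax_code=) can be sketched in Python. The URL and personal details below are hypothetical, and `prefill_booking_url` is an illustrative helper, not part of the server:

```python
from urllib.parse import urlencode

def prefill_booking_url(booking_url: str, **fields) -> str:
    """Append the checkout pre-fill parameters named in the tool
    description (name, last_name, email, phone, tax_code)."""
    allowed = {"name", "last_name", "email", "phone", "tax_code"}
    params = {k: v for k, v in fields.items() if k in allowed and v}
    if not params:
        return booking_url
    sep = "&" if "?" in booking_url else "?"
    return booking_url + sep + urlencode(params)

# Hypothetical booking_url; the real value comes from the tool response.
url = prefill_booking_url(
    "https://cup24.it/checkout?slot=123",
    name="Mario", last_name="Rossi", email="mario@example.com",
)
```

Unknown keyword arguments are dropped rather than forwarded, so only the five documented parameters ever reach the checkout URL.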
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, and the description adds valuable behavioral detail: automatic retry without filters if no results, and that results include a booking_url with pre-filled parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with a label, purpose, usage, parameters, and output. The parameter list is somewhat redundant with the schema, but remains efficient and informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately describes the return format (slots with booking_url) and usage context, making it complete for an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers all 6 parameters fully (100% coverage). The description adds context about the auto-retry behavior related to doctor/clinic filters and notes the start_day format, which provides incremental value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Search appointment slots/cards' and positions itself as a 'DETAIL TOOL' to be used after clinic selection, distinguishing it from sibling tools like search_available_clinics_map.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use directly when clinic is known, or after search_available_clinics_map selection' and mentions auto-retry behavior, providing clear when-to-use guidance. Could be improved by stating when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_available_clinics_map (A, Read-only)
[MAP TOOL] Discover clinics with availability and render them on map/sidebar. Params: city (required) + one of medical_service/doctor/clinic, optional date/time range. Then wait for user click in widget; after selection run advanced_search_availability with city + medical_service + selected clinic.
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | City name (full or partial, e.g., 'Firenze', 'Milan') | |
| clinic | No | Clinic/facility name (optional, narrows clinic search) | |
| doctor | No | Doctor name (optional, narrows clinic search) | |
| end_day | No | End date (YYYY-MM-DD) | |
| distance | No | Max distance in km (default: 15) | |
| end_time | No | Daily end time HH:mm (e.g., 18:00) | |
| start_day | No | Start date (YYYY-MM-DD, default: today) | |
| start_time | No | Daily start time HH:mm (e.g., 09:00) | |
| medical_service | No | Service name or symptom description (e.g., 'ecografia addome completo') | |
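The two-step flow the description prescribes (map discovery, then a detail search with the selected clinic) might look like the sketch below; `call_tool` is a stand-in for a real MCP client, and the clinic name is hypothetical:

```python
def call_tool(name: str, arguments: dict) -> dict:
    # Placeholder: a real MCP client would send this over the transport.
    return {"tool": name, "arguments": arguments}

# Step 1: render clinics with availability on the map/sidebar.
map_request = call_tool("search_available_clinics_map", {
    "city": "Firenze",                               # required
    "medical_service": "ecografia addome completo",  # one of service/doctor/clinic
    "start_day": "2025-06-01",                       # optional date range
    "distance": 10,                                  # km, default 15
})

# Step 2: after the user clicks a clinic in the widget, search its slots.
selected_clinic = "Centro Medico Example"  # hypothetical user selection
detail_request = call_tool("advanced_search_availability", {
    "city": "Firenze",
    "medical_service": "ecografia addome completo",
    "clinic": selected_clinic,
})
```

Note that the city and medical_service arguments are carried over unchanged into the second call, as the description instructs.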
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false. The description adds behavioral context: the tool renders a map/sidebar and waits for user interaction before triggering another call. This goes beyond annotations by detailing the interactive flow, though it does not mention rate limits or authentication.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: the first states purpose and parameter summary, the second specifies the workflow. It is front-loaded, concise, and contains no extraneous information. Every sentence serves a clear purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the parameter count (9) and no output schema, the description is fairly complete. It explains the tool's role in a multi-step process, the required parameters, and the subsequent tool call. However, it could briefly mention the absence of a direct return value or the widget's behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with descriptions for all 9 parameters. The description adds value by summarizing parameter groupings and requirements: 'city (required) + one of medical_service/doctor/clinic, optional date/time range.' This clarifies relationships and default behavior beyond individual schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Discover clinics with availability and render them on map/sidebar.' It specifies the action and resource, but does not explicitly differentiate from sibling tools like 'advanced_search_availability' or 'smart_suggest_service', though it implies a workflow dependency.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Params: city (required) + one of medical_service/doctor/clinic, optional date/time range. Then wait for user click in widget; after selection run advanced_search_availability...' This provides a clear workflow and directs the agent to the next step, distinguishing from siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_suggest_city (A, Read-only)
Find cities by partial name. Use when: User mentions unfamiliar city or you need coordinates. Returns: City names, coordinates, province, region. Follow: Use city name with advanced_search_availability.
| Name | Required | Description | Default |
|---|---|---|---|
| term | Yes | Partial city name (e.g., 'milan', 'rome') | |
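A minimal sketch of the partial-name matching this tool performs, with `suggest_city` standing in for the remote call and a small hypothetical dataset:

```python
def suggest_city(term: str) -> list:
    # Stand-in for the search_suggest_city tool; the dataset is
    # hypothetical, but the returned fields mirror the description
    # (city name, province, region).
    cities = [
        {"name": "Milano", "province": "MI", "region": "Lombardia"},
        {"name": "Milazzo", "province": "ME", "region": "Sicilia"},
        {"name": "Roma", "province": "RM", "region": "Lazio"},
    ]
    term = term.lower()
    return [c for c in cities if c["name"].lower().startswith(term)]

matches = suggest_city("mila")
```

A partial term can return multiple candidates, which is why the description suggests resolving the city before handing its name to advanced_search_availability.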
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readonly, non-destructive. Description adds behavioral info: searches by partial name, returns specific fields (city names, coordinates, province, region). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Compact at 3 sentences, front-loads purpose. The follow-up instruction is slightly extraneous but relevant and doesn't harm conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool (1 param, no output schema). Description lists return fields and provides usage context. Sufficient for effective invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description's parameter note ('Partial city name...') exactly matches the schema description. No additional meaning beyond schema; baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States 'Find cities by partial name' – clear verb+resource. Also specifies usage scenarios (unfamiliar city or needing coordinates) and mentions relationship with sibling tool advanced_search_availability, effectively distinguishing it.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'User mentions unfamiliar city or you need coordinates.' Also gives follow-up guidance. Lacks explicit when-not-to-use or alternatives, but provides clear context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
smart_suggest_service (A, Read-only)
Smart service finder (text search + AI combined). Use when: User has vague symptoms/descriptions OR you need to explore service options before booking. Runs parallel search and AI prediction, merges results by relevance. Returns: Service names and IDs. Follow: Use service info with advanced_search_availability.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Service name, symptom, or medical need description (e.g., 'blood test', 'back pain', 'x-ray arm') | |
| top_k | No | Number of results to return (default: 5, max: 10) | |
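The documented top_k constraint (default 5, max 10) can be mirrored client-side before calling the tool; `clamp_top_k` is an illustrative helper, not part of the server's API:

```python
def clamp_top_k(value=None, default=5, maximum=10):
    """Mirror the documented top_k constraint: default 5, max 10."""
    if value is None:
        return default
    # Keep the request within the server's documented bounds.
    return max(1, min(int(value), maximum))
```

Clamping locally avoids sending out-of-range values whose server-side handling the description does not specify.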
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare read-only and non-destructive; description adds that it runs parallel search and AI prediction, merges by relevance, and returns service names and IDs. Lacks edge-case warnings but sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise with front-loaded purpose and usage, though uses informal bullet points within text. No wasted sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage, behavior, output, and next steps. Lacks output format details but adequate for a search tool with two params and no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline 3. Description matches schema for text and top_k but does not add new semantic meaning beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it is a 'Smart service finder (text search + AI combined)' that returns service names and IDs, distinct from sibling tools like advanced_search_availability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'User has vague symptoms/descriptions OR you need to explore service options before booking.' Also advises follow-up with advanced_search_availability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
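A quick local sanity check of the payload before publishing it might look like this; the validation rules below are assumptions for illustration, not the official connector schema:

```python
import json

# The payload mirrors the example above; publish it at
# /.well-known/glama.json on your server's domain.
payload = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

def looks_valid(doc: dict) -> bool:
    # Assumed minimal checks: schema URL and at least one maintainer email.
    return (
        doc.get("$schema", "").startswith("https://glama.ai/")
        and isinstance(doc.get("maintainers"), list)
        and len(doc["maintainers"]) > 0
        and all("email" in m for m in doc["maintainers"])
    )

ok = looks_valid(payload)
```

Remember that the maintainer email must match your Glama account email for verification to succeed.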
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.