Server Details

Search and book theatre, attractions, and tours across 681 cities. 13,090+ products.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: tickadoo/tickadoo-mcp
GitHub Stars: 0
Server Listing: tickadoo

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 22 of 22 tools scored. Lowest: 3.6/5.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose with specific usage conditions. Overlapping tools like check_availability and get_availability are differentiated by legacy vs live. Planning tools target different audiences and timeframes.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern in snake_case. Even 'whats_on_tonight' matches the convention. No mixing of styles.

Tool Count: 3/5

22 tools is on the high end of the borderline range. While each tool has a specific use, the number feels slightly heavy for a single server, potentially overwhelming for an agent.

Completeness: 3/5

Coverage is broad, including search, planning, and city info, but lacks a booking/purchase tool. Agents can discover and compare but cannot complete a booking, leaving a workflow gap.

Available Tools

22 tools
check_availability: A
Read-only

Use this when the user has picked a specific experience and asks whether it is available on one date, what it costs for a party, or wants a booking link. This is the legacy-compatible date-specific availability interface.

Parameters (JSON Schema)

date (required): Date to check in YYYY-MM-DD format.
slug (required): Product slug or legacy booking path, e.g. "london-dungeon-tickets" or "/london/london-dungeon-tickets".
format (optional, default json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
party_size (optional): Number of guests or tickets to price. Default 2.
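For clients speaking raw JSON-RPC, a call to this tool can be sketched as below. The envelope follows the MCP tools/call shape; the slug and date values are illustrative, not real product data.

```python
import json

def build_check_availability_call(slug: str, date: str, party_size: int = 2,
                                  language: str = "en", request_id: int = 1) -> dict:
    """Assemble an MCP tools/call request for check_availability.

    Argument names mirror the parameter table above; party_size
    defaults to 2, matching the schema default.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "check_availability",
            "arguments": {
                "slug": slug,          # e.g. "london-dungeon-tickets"
                "date": date,          # YYYY-MM-DD
                "party_size": party_size,
                "language": language,  # BCP-47 code
            },
        },
    }

payload = build_check_availability_call("london-dungeon-tickets", "2025-06-14")
print(json.dumps(payload, indent=2))
```

The same envelope works for every other tool on this server; only params.name and params.arguments change.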
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, non-destructive, and open-world characteristics, the description specifies that this is a 'quick' check, returns 'availability for one date only', includes 'booking URL and Ghost Checkout intent-token payload metadata', and supports '40+ languages'. This provides practical implementation details the agent needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with two sentences that each serve distinct purposes: the first explains what the tool does and returns, the second provides usage guidance with concrete examples. Every element adds value with zero wasted words, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only query tool with comprehensive annotations and schema coverage, the description provides excellent context about what information is returned (availability, party total, booking URL, metadata) and when to use it. The only minor gap is the lack of output schema, but the description compensates by specifying the return types. The tool's relatively simple purpose is well-covered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already thoroughly documents all parameters. The description adds some semantic context by mentioning '40+ languages' and providing language examples, and clarifying this is for 'one date only', but doesn't significantly enhance parameter understanding beyond what the schema provides. The baseline of 3 is appropriate given the comprehensive schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('check', 'returns') and resources ('availability for a specific tickadoo experience'). It distinguishes from siblings by emphasizing it's a 'quick date-specific availability check' that returns limited information compared to more comprehensive tools like get_experience_details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance with concrete examples: 'Use when the user asks "is this available on Saturday?" or wants a fast price check without the full experience detail payload.' This clearly indicates when to use this tool versus alternatives like get_experience_details that would provide more comprehensive information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_experiences: A
Read-only, Idempotent

Use this when the user wants a side-by-side comparison of 2-5 specific products. Pass the slug for each. Returns a comparison table plus per-axis winners (value, rating, popularity, family-fit).

Parameters (JSON Schema)

slugs (required): Product slugs from earlier search or recommend results.
format (optional, default json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
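Since the tool compares 2-5 products, a client can validate the slug list before issuing the call rather than burning a round trip on an invalid request. A minimal sketch; the helper name and slugs are illustrative:

```python
def build_compare_arguments(slugs, fmt="json", language="en"):
    """Validate and assemble arguments for compare_experiences.

    The tool compares 2-5 products, so enforce that bound client-side.
    """
    slugs = list(slugs)
    if not 2 <= len(slugs) <= 5:
        raise ValueError("compare_experiences takes between 2 and 5 slugs")
    return {"slugs": slugs, "format": fmt, "language": language}

# Hypothetical slugs carried over from an earlier search result:
args = build_compare_arguments(["london-eye-tickets", "tower-of-london-tickets"])
```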
Behavior: 4/5

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds valuable context beyond annotations: it specifies the comparison dimensions (price, duration, reviews, etc.), winner categories, and language support for localized URLs, which helps the agent understand the tool's behavior and output structure.

Conciseness: 5/5

The description is front-loaded with the core purpose, followed by key output details and parameter guidance. Every sentence adds value without redundancy, and it efficiently covers comparison scope, return values, and language support in a compact form.

Completeness: 4/5

Given the tool's complexity (comparing multiple experiences with structured outputs), annotations cover safety and scope well, and schema coverage is complete. The description adds necessary context about comparison dimensions and language support. However, without an output schema, it could more explicitly detail the return structure (e.g., format of 'winner callouts'), though it hints at this.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents parameters. The description adds some meaning by explaining the language parameter supports '40+ languages' and gives examples, but does not provide additional semantics beyond what the schema already covers for 'slugs' or 'format'. Baseline 3 is appropriate as the schema does the heavy lifting.

Purpose: 5/5

The description clearly states the specific action ('compare 2 to 5 tickadoo experiences side-by-side') and resource ('tickadoo experiences'), distinguishing it from siblings like 'get_experience_details' (single experience) or 'search_experiences' (searching rather than comparing). It precisely defines the scope and output.

Usage Guidelines: 4/5

The description provides clear context for when to use this tool (comparing multiple experiences for evaluation), but does not explicitly state when not to use it or name alternatives among siblings. It implies usage through the comparison focus but lacks explicit exclusions.

find_nearby_experiences: A
Read-only, Idempotent

Use this when a non-ChatGPT client supplies exact latitude and longitude and wants experiences near that coordinate. ChatGPT clients should use search_local_experiences instead because it accepts coarse place hints.

Parameters (JSON Schema)

limit (optional)
format (optional, default json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
category (optional): Optional category slug.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
latitude (required): Latitude coordinate.
longitude (required): Longitude coordinate.
radius_km (optional): Search radius in kilometres.
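Because this tool takes raw coordinates, a client may want to reject out-of-range values before calling. A sketch under the assumption that coordinates are WGS84 decimal degrees; the 3 km example and the helper name are illustrative, and the schema does not specify a radius default:

```python
def build_nearby_arguments(latitude, longitude, radius_km=5.0,
                           category=None, limit=None):
    """Assemble arguments for find_nearby_experiences after sanity checks."""
    if not -90.0 <= latitude <= 90.0:
        raise ValueError("latitude must be within [-90, 90]")
    if not -180.0 <= longitude <= 180.0:
        raise ValueError("longitude must be within [-180, 180]")
    args = {"latitude": latitude, "longitude": longitude, "radius_km": radius_km}
    if category is not None:
        args["category"] = category  # optional category slug
    if limit is not None:
        args["limit"] = limit
    return args

# Central London, 3 km radius:
args = build_nearby_arguments(51.5074, -0.1278, radius_km=3.0)
```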
Behavior: 4/5

Annotations already declare readOnlyHint=true, openWorldHint=true, and destructiveHint=false, so the agent knows this is a safe, non-destructive query operation. The description adds valuable context beyond annotations by mentioning support for 40+ languages and localized booking URLs, which are behavioral traits not captured in the structured annotations.

Conciseness: 5/5

The description is efficiently structured in three sentences: purpose statement, key feature highlights (date filtering, language support), and usage guideline. Every sentence earns its place with no redundant information, making it appropriately sized and front-loaded.

Completeness: 4/5

Given the tool's complexity (19 parameters) and lack of output schema, the description provides good contextual completeness. It covers the core purpose, key optional features (date filtering, language support), and clear usage guidelines. While it doesn't explain return values (which would be helpful without an output schema), it adequately addresses what's needed for a query tool with comprehensive annotations.

Parameters: 3/5

With 100% schema description coverage, the input schema already documents all 19 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, only briefly mentioning date filtering and language support. It doesn't provide additional syntax, format details, or usage examples that aren't already in the schema descriptions.

Purpose: 5/5

The description clearly states the verb ('Find') and resource ('shows, events and experiences near a geographic location on tickadoo'), making the purpose specific. It distinguishes from siblings by focusing on location-based discovery rather than availability checking, city guides, or other specialized searches.

Usage Guidelines: 5/5

The description explicitly states when to use this tool: 'Use when a user shares their location or asks for things to do near them.' This provides clear context for invocation and distinguishes it from sibling tools that serve different purposes like checking availability or getting details.

get_availability: A
Read-only

Use this when the user is ready to check live bookable dates, times, prices, or remaining spaces for one selected product. This is the live supplier-check tool; pass product_id from search or slug plus city_slug.

Parameters (JSON Schema)

slug (optional): Product slug. Use with city_slug when product_id is unavailable.
as_of (optional): Previous ISO timestamp from stale card data; returns delta only when data changed.
fresh (optional): When true, bypasses the 60-second cache and performs a live supplier check, subject to rate limits.
date_to (optional): End date in YYYY-MM-DD. Defaults to today plus 14 days and is capped to a 90-day window.
city_slug (optional): City slug; required when slug is used.
date_from (optional): Start date in YYYY-MM-DD. Defaults to today.
party_size (optional): Number of guests. Default 2.
product_id (optional): Stable product_id from a search or details response. Preferred when available.
preferred_time (optional): Optional broad time preference such as morning, afternoon, or evening.
idempotency_key (optional): Optional UUID. Reuse for identical responses within five minutes.
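The identifier and date rules above can be sketched client-side: product_id is preferred, slug must be paired with city_slug, and the date window is clamped to the documented 90-day cap. This is a sketch of those documented defaults, not the server's actual validation; the product id is hypothetical:

```python
from datetime import date, timedelta

def build_availability_arguments(product_id=None, slug=None, city_slug=None,
                                 date_from=None, date_to=None, party_size=2):
    """Assemble arguments for get_availability per the parameter table."""
    if product_id:
        args = {"product_id": product_id}  # preferred when available
    elif slug and city_slug:
        args = {"slug": slug, "city_slug": city_slug}
    else:
        raise ValueError("pass product_id, or slug together with city_slug")

    # Documented defaults: date_from is today, date_to is today + 14 days.
    start = date.fromisoformat(date_from) if date_from else date.today()
    end = (date.fromisoformat(date_to) if date_to
           else date.today() + timedelta(days=14))
    if (end - start).days > 90:
        end = start + timedelta(days=90)  # clamp to the 90-day cap

    args.update({"date_from": start.isoformat(),
                 "date_to": end.isoformat(),
                 "party_size": party_size})
    return args

args = build_availability_arguments(product_id="prod_123",  # hypothetical id
                                    date_from="2025-06-01",
                                    date_to="2025-12-01")
```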
Behavior: 4/5

Annotations already indicate read-only and non-destructive. Description adds context of being a live supplier check, implying external API calls and potential rate limits. Detailed behavioral traits (cache, delta, idempotency) are in parameter descriptions, complementing annotations effectively.

Conciseness: 4/5

Description is two sentences, front-loading purpose and usage. Efficient but could slightly improve by clarifying differentiation from sibling tool 'check_availability'.

Completeness: 3/5

Given 10 parameters and no required ones, the description covers core purpose and usage pattern. Lacks mention of defaults or output format, but parameter descriptions fill in details. Adequate for a read-only tool with rich schema.

Parameters: 3/5

Schema description coverage is 100%, so baseline is 3. The description reinforces usage of product_id or slug+city_slug but adds no new semantic info beyond what the parameter descriptions already provide.

Purpose: 4/5

The description clearly states the tool checks live bookable dates, times, prices, or remaining spaces for one selected product, and identifies itself as the live supplier-check tool. However, it does not explicitly differentiate from the sibling 'check_availability', which could cause confusion.

Usage Guidelines: 3/5

Provides when to use (ready to check live data) and how to pass parameters (product_id or slug+city_slug). Lacks guidance on when not to use or how to choose between siblings like 'check_availability' and 'get_availability'.

get_city_guide: A
Read-only, Idempotent

Use this when the user wants an orientation overview of a city for trip planning. Returns highlights, dominant categories, price band, best-for audience hints, seasonal notes, and a short list of local advice items.

Parameters (JSON Schema)

city (required): City slug.
format (optional, default json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
Behavior: 4/5

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds valuable context beyond this: it specifies the tool supports 40+ languages for localized booking URLs, which is not captured in annotations, enhancing behavioral understanding without contradiction.

Conciseness: 5/5

The description is appropriately sized and front-loaded, starting with the core purpose and key features, followed by usage guidelines. Every sentence adds value without redundancy, making it efficient and well-structured for quick comprehension.

Completeness: 4/5

Given the tool's complexity (3 parameters, 100% schema coverage, no output schema) and rich annotations, the description is largely complete. It covers purpose, usage, and key features like language support, though it could briefly mention the response format options or audience signals more explicitly for full completeness.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents parameters like city, format, and language. The description adds minimal semantic value beyond the schema, such as mentioning language support for booking URLs, but does not significantly enhance parameter understanding, warranting the baseline score of 3.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('return', 'summarises') and resources ('curated city overview', 'destination'), and distinguishes it from siblings by emphasizing it provides a 'pre-arrival city briefing instead of a raw search', unlike search-oriented tools like search_experiences or find_nearby_experiences.

Usage Guidelines: 5/5

The description explicitly states when to use this tool ('when a user asks "tell me about things to do in Prague" or wants a pre-arrival city briefing instead of a raw search'), providing clear context and distinguishing it from alternatives like raw search tools, with no misleading guidance.

get_date_night: A
Read-only, Idempotent

Use this when the user wants an evening plan for two. Returns a pre-dinner activity, dinner area suggestion, evening show, post-show tip, and an estimated total cost. Filters out family-rated and high-physical-level venues.

Parameters (JSON Schema)

city (required): City slug.
date (optional): ISO date (YYYY-MM-DD).
budget (optional): Budget band. 'low' = under 100 per person, 'medium' = 100-200, 'high' = 200+. In the listed currency.
format (optional, default json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
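The budget bands map to per-person thresholds; a small helper makes the mapping explicit. Note the table does not say which band gets exactly 200, so this sketch assigns it to 'medium':

```python
def budget_band(per_person_amount: float) -> str:
    """Map a per-person budget (in the listed currency) to the tool's band.

    Thresholds follow the parameter table: under 100 is 'low', 100-200 is
    'medium', above that 'high'. The boundary at exactly 200 is ambiguous
    in the table; it falls into 'medium' here.
    """
    if per_person_amount < 100:
        return "low"
    if per_person_amount <= 200:
        return "medium"
    return "high"
```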
Behavior: 4/5

Annotations already indicate readOnly and idempotent. The description adds that it filters out family-rated and high-physical-level venues, which is useful behavioral context beyond the annotations.

Conciseness: 5/5

Two sentences with no redundancy: first states usage, second describes outputs and filters. Every word earns its place.

Completeness: 4/5

Given no output schema, the description covers the main return fields and filtering. It could mention language/date parameters but is still fairly complete for the complexity.

Parameters: 3/5

Schema coverage is 100%, so the schema already documents parameters well. The description adds no additional parameter-level details beyond the schema, earning a baseline 3.

Purpose: 5/5

The description clearly states the tool creates an evening plan for two, lists specific return items, and mentions filtering criteria, distinguishing it from sibling tools like get_family_day or whats_on_tonight.

Usage Guidelines: 4/5

It explicitly says 'Use this when the user wants an evening plan for two', providing clear context. However, it does not mention when not to use or directly name alternative tools.

get_experience_details: A
Read-only, Idempotent

Use this when the user selects a specific experience from search results and needs richer product, location, supplier, and booking fields. Accepts either product_id or slug.

Parameters (JSON Schema)

slug (optional): Product slug from a previous search result.
format (optional, default json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
product_id (optional): Stable product_id from a previous search result.
Behavior: 4/5

The description adds valuable behavioral context beyond annotations: it explains the 40+ language support for localized booking URLs and clarifies the preferred vs. legacy input parameters. While annotations already declare readOnlyHint=true and destructiveHint=false, the description provides practical implementation details that help the agent use the tool correctly.

Conciseness: 5/5

The description is perfectly front-loaded with the core purpose in the first sentence, followed by specific implementation guidance. Every sentence earns its place by providing essential context about parameter preferences and language support without any redundant information.

Completeness: 4/5

For a read-only tool with comprehensive schema documentation and no output schema, the description provides excellent contextual completeness. It covers the tool's purpose, usage guidelines, parameter preferences, and language support. The only minor gap is not explicitly mentioning the response format options, though this is covered in the schema.

Parameters: 3/5

With 100% schema description coverage, the schema already documents all parameters thoroughly. The description adds some context about parameter preferences ('Prefer passing the tickadoo slug... provider and provider_id are legacy fallback inputs') and language support, but doesn't provide significant additional semantic meaning beyond what's in the schema descriptions.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('Get detailed availability, venue details, and images') and resource ('for a specific tickadoo experience'). It distinguishes from siblings by focusing on detailed information for a single experience rather than searching, comparing, or listing multiple experiences.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool: 'Prefer passing the tickadoo slug or booking URL path' and identifies fallback inputs. It distinguishes from alternatives by specifying this is for detailed information about a specific experience, not for searching or comparing multiple experiences like 'search_experiences' or 'compare_experiences'.

get_family_day: A
Read-only, Idempotent

Use this when the user wants a full-day plan for a family in one city. Returns a morning activity, lunch area suggestion, afternoon attraction, and optional evening stop. Uses age-aware filters and clusters venues by walking distance.

Parameters (JSON Schema)

city (required): City slug.
date (optional): ISO date (YYYY-MM-DD).
budget (optional): Max budget in the listed currency.
format (optional, default json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
kids_ages (optional): Children's ages. Drives age-suitability filtering on each slot.
Behavior: 4/5

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, indicating a safe, non-destructive, and open-ended operation. The description adds valuable behavioral context beyond annotations: it explains how 'kids_ages' influences filtering (e.g., 'prefers wheelchair-friendly options when toddlers make stroller access likely'), mentions geographic clustering to reduce travel, and notes language support for localized booking URLs. No contradictions with annotations exist.

Conciseness: 5/5

The description is efficiently structured in two sentences: the first outlines the core functionality and key features, and the second specifies language support. Every sentence adds value without redundancy, making it front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (6 parameters, no output schema), the description is largely complete. It covers the tool's purpose, key behavioral traits (e.g., filtering logic, clustering), and language support. However, it does not detail the response format implications (e.g., what 'text' vs. 'json' outputs look like) or potential error cases, leaving minor gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some semantic context: it explains that 'kids_ages' is used for 'age-aware filtering' and links it to accessibility preferences, and clarifies that 'language' affects 'localised booking URLs.' However, it does not provide significant additional meaning beyond what the schema descriptions offer, aligning with the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Build a full family day in one city with a morning activity, lunch tip, afternoon attraction, and optional evening stop.' It specifies the verb ('build'), resource ('family day'), and scope ('one city'), and distinguishes from siblings by focusing on comprehensive day planning rather than individual experiences or availability checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through features like 'age-aware filtering' and 'geographic clustering,' but does not explicitly state when to use this tool versus alternatives like 'get_city_guide' or 'search_experiences.' It provides some guidance on language support but lacks clear exclusions or comparisons to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_hidden_gemsA
Read-onlyIdempotent
Inspect

Use this when the user wants less-popular experiences locals favour rather than top-of-list bestsellers. Returns rows tagged HiddenGem or with high ratings and lower review counts; explicitly excludes Bestseller, HopOnHopOff, and CityPass products.

Parameters (JSON Schema)
city (required): City slug.
format (optional, default: json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
max_results (optional): Default 5, max 20.
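As a sketch of how a client might respect the documented max_results cap (default 5, max 20), consider this hypothetical helper; the function name and city slug are invented:

```python
import json

def hidden_gems_args(city, max_results=5, language="en"):
    # max_results defaults to 5 and is documented as capped at 20,
    # so clamp it client-side before sending the call.
    return {
        "city": city,
        "max_results": max(1, min(max_results, 20)),
        "language": language,
        "format": "json",
    }

print(json.dumps(hidden_gems_args("lisbon", max_results=50)))
```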
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and non-destructive behavior. The description adds meaningful behavioral detail: filtering logic (HiddenGem tags, high ratings, low review counts) and explicit exclusions. It does not cover result ordering or pagination, but the provided context is valuable beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—two sentences—with no wasted words. It fronts the usage context immediately and packs essential filtering criteria and exclusions efficiently. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (filtered list), the description provides clear purpose, usage, and filtering behavior. It hints at return format ('rows' vs narrative). Missing details like sorting or handling of large result sets are minor gaps; overall it is sufficient for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for all four parameters. The description does not add significant new information about parameters beyond what the schema already provides. With full schema coverage, a score of 3 is appropriate as the description meets the baseline without extra enrichment.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as returning less-popular, locally-favored experiences, explicitly contrasting with 'top-of-list bestsellers' and specifying exclusion criteria (Bestseller, HopOnHopOff, CityPass). It uses a specific verb ('use') and resource ('hidden gems'), making the purpose unmistakable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use this tool ('when the user wants less-popular experiences locals favour') and what it excludes. While it doesn't name alternative tools, the context clearly distinguishes it from top-list tools, providing sufficient guidance for an AI agent to select it over siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_last_minuteA
Read-only
Inspect

Use this when the user wants experiences starting within the next few hours. Returns rows with start_time, countdown_text, and seats_remaining hints, sorted by soonest first.

Parameters (JSON Schema)
city (required): City slug.
hours (optional): How many hours ahead to look (1-12). Default 3.
format (optional, default: json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
latitude (optional): Latitude to bias toward nearby venues.
longitude (optional): Longitude to bias toward nearby venues.
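The hours window (1-12, default 3) and the paired optional coordinates call for a little client-side hygiene; this hypothetical helper sketches one approach (the function name and city slug are invented):

```python
def last_minute_args(city, hours=3, latitude=None, longitude=None):
    # "hours" is documented as 1-12 with a default of 3; clamp defensively.
    args = {"city": city, "hours": max(1, min(hours, 12))}
    # Latitude/longitude only bias toward nearby venues, so send them
    # only when both are available.
    if latitude is not None and longitude is not None:
        args["latitude"] = latitude
        args["longitude"] = longitude
    return args
```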
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only, open-world, and non-destructive operations, which the description does not contradict. The description adds valuable behavioral context beyond annotations: it specifies sorting by start time, adds countdown text, flags high urgency, and supports 40+ languages with localized URLs. This enriches understanding of the tool's behavior without repeating annotation information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by additional features. Each sentence adds distinct value: sorting, countdowns, urgency flags, and language support. There is no wasted text, making it efficient and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no output schema) and rich annotations, the description is mostly complete. It covers the purpose, key behaviors, and language support. However, it lacks details on output format implications (e.g., differences between 'text' and 'json' formats) and does not mention the optional latitude/longitude parameters for location biasing, leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal parameter semantics beyond the schema, mentioning language support and urgency flags, but does not provide additional details on parameter usage or interactions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Find tickadoo experiences'), resource ('experiences'), and scope ('starting within the next few hours in a city'). It distinguishes itself from siblings by focusing on imminent starts with countdowns and urgency flags, unlike broader search tools like 'search_experiences' or time-specific ones like 'whats_on_tonight'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for finding experiences with imminent start times in a city, with urgency indicators. However, it does not explicitly state when NOT to use it or name specific alternatives among the sibling tools, such as 'whats_on_tonight' for evening events or 'search_experiences' for broader searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_transfer_infoA
Read-onlyIdempotent
Inspect

Use this when the user is arriving in a supported city and needs transfer guidance from an airport, station, or port to a hotel coordinate. Returns taxi, metro, bus, and train estimates with durations, costs, and directions.

Parameters (JSON Schema)
city (required): Supported city: London, Paris, New York, Amsterdam, Barcelona, Rome, or Tokyo.
format (optional, default: json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
from_type (required): Arrival hub type.
to_latitude (required): Hotel latitude.
to_longitude (required): Hotel longitude.
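Because only seven cities are supported, a client can fail fast before calling; here is a hypothetical sketch (the slug spellings are assumptions, not confirmed by the schema):

```python
# Assumed slug forms for the seven supported cities.
SUPPORTED_CITIES = {"london", "paris", "new-york", "amsterdam",
                    "barcelona", "rome", "tokyo"}

def transfer_args(city, from_type, to_latitude, to_longitude):
    # Reject unsupported cities locally instead of burning a tool call.
    if city not in SUPPORTED_CITIES:
        raise ValueError(f"unsupported city: {city}")
    return {
        "city": city,
        "from_type": from_type,        # arrival hub type, e.g. airport
        "to_latitude": to_latitude,    # hotel latitude
        "to_longitude": to_longitude,  # hotel longitude
    }
```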
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, and non-destructive behavior. The description adds valuable context beyond this: it specifies the types of transfer options returned (taxi, tube/metro, bus, train), the data included (durations, costs, directions), language support details, and the use of default hubs per city. This enhances understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by supporting details in a logical flow. Every sentence adds value: the first states what the tool does, the second details the output, the third explains language support, and the fourth clarifies hub defaults. There is no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (6 parameters, no output schema) and rich annotations, the description is largely complete. It covers purpose, output content, language features, and hub behavior. However, it doesn't mention potential limitations like city availability or error handling, which could be useful for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents all parameters. The description adds minimal extra semantics, such as noting that 'city' uses known default hubs (e.g., Heathrow for London) and that 'language' enables localized booking URLs, but doesn't significantly expand on parameter meanings beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get airport, station, or port transfer options') and resources ('from a city's primary arrival hub to hotel coordinates'), distinguishing it from sibling tools focused on experiences, guides, and availability checks. It precisely defines the scope of what information is retrieved.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool (for transfer options from arrival hubs to hotels) and includes an example of language support. However, it doesn't explicitly state when not to use it or name specific alternatives among the sibling tools, such as 'get_city_guide' for broader information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_travel_tipsA
Read-onlyIdempotent
Inspect

Use this when the user asks practical logistics questions about a city. Returns short tips grouped by topic (transport, money, safety, culture, food, weather, language, connectivity), plus emergency numbers and quick phrases where relevant.

Parameters (JSON Schema)
city (required): City slug.
topic (optional): Topic filter. When omitted, returns a summary across all topics.
format (optional, default: json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
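Since omitting topic returns a cross-topic summary, a client should include the key only when actually filtering; a hypothetical sketch (the helper name and city slug are invented):

```python
def travel_tips_args(city, topic=None, language="en"):
    # Omitting "topic" returns a summary across all topics,
    # so include the key only when a filter is requested.
    args = {"city": city, "language": language, "format": "json"}
    if topic is not None:
        args["topic"] = topic
    return args
```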
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, and destructiveHint=false, covering safety and scope. The description adds valuable context beyond annotations: it reveals the tool is 'hardcoded' (not dynamically updated), covers '20 launch cities' (limited scope), includes 'emergency numbers and quick local phrases', and supports '40+ languages' with 'localised booking URLs'. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first explains what the tool returns and its features, the second provides usage guidelines. Every sentence adds value without redundancy, making it front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no output schema) and rich annotations, the description is mostly complete. It covers purpose, usage, behavioral context, and key features. However, it doesn't explicitly mention the 'topic' parameter's optional filtering or the 'format' parameter's default behavior, which could be slightly helpful for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal parameter semantics: it mentions 'pass a language code' for 'localised booking URLs' and implies the 'city' parameter corresponds to '20 launch cities', but doesn't provide additional details beyond what's in the schema. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'returns hardcoded local insider advice for 20 launch cities' with specific content areas (transport, money, safety, etc.) and distinguishes it from generic guidebook tips. It explicitly contrasts with sibling tools like 'get_city_guide' by emphasizing 'insider advice' rather than comprehensive city information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage scenarios: 'when a user asks "what should I know before visiting Tokyo?" or wants a hotel pre-arrival briefing beyond generic guidebook tips.' This gives clear context for when to use this tool versus alternatives like 'get_city_guide' or other informational tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_whats_on_this_weekA
Read-only
Inspect

Use this when the user wants a day-by-day weekly calendar for a city. Returns one entry per day for the next 7 days, each with morning, afternoon, and evening picks plus a daily highlight.

Parameters (JSON Schema)
city (required): City slug.
format (optional, default: json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
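The format switch is the only real decision here; a hypothetical helper showing when 'text' might be preferred over the 'json' default (names are invented):

```python
def weekly_args(city, chat_surface=False, language="en"):
    # "text" renders the same content as a short narrative paragraph,
    # which suits chat surfaces; "json" (the default) returns records.
    return {
        "city": city,
        "format": "text" if chat_surface else "json",
        "language": language,
    }
```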
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true. The description adds valuable context beyond annotations: it specifies the 7-day timeframe, day-by-day breakdown structure with time slots, weekly highlights, and the 40+ language support for localized booking URLs. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences: first states core functionality with key details (7 days, time slots, highlights, language support), second provides usage examples. Every element earns its place with no redundancy or fluff. Front-loaded with primary purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with good annotations and full schema coverage, the description is largely complete. It covers purpose, usage, and behavioral context. The main gap is lack of output schema, but the description implies the return structure (day-by-day breakdown). Could slightly enhance by mentioning response format implications.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents parameters (city, format, language). The description adds some semantic context: it mentions '40+ languages' and gives examples (e.g., 'de', 'fr', 'es', 'ja'), but doesn't provide additional meaning beyond what's in the schema descriptions. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific verb ('return') and resource ('day-by-day breakdown of top experiences'), with explicit scope ('next 7 days in a city', 'grouped into morning, afternoon, evening slots', 'weekly highlights'). It distinguishes itself from siblings like 'whats_on_tonight' (single day) and 'get_city_guide' (general guide).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'Use when a user says things like "what's on this week in Paris?" or "I'm in London for the next few days, what should I do each day?"' This provides clear user intent examples that differentiate it from tools like 'get_last_minute' (short-notice) or 'search_by_mood' (mood-based).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_citiesA
Read-onlyIdempotent
Inspect

Use this when the user wants to browse supported cities before searching. Returns city names, slugs, country codes, and product counts.

Parameters (JSON Schema)
format (optional, default: json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
country (optional): Country code or country name filter.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
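All three parameters are optional, so the minimal call carries no arguments at all; a hypothetical sketch of the country filter (the helper name is invented):

```python
def list_cities_args(country=None, language="en"):
    # "country" accepts a country code or name; omit it to list all cities.
    args = {"language": language, "format": "json"}
    if country:
        args["country"] = country
    return args
```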
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover read-only, open-world, and non-destructive aspects, so the description adds minimal behavioral context. It mentions 'bookable experiences' and 'available destinations,' which hints at scope, but lacks details on rate limits, authentication needs, or pagination behavior beyond the schema's 'limit' parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and uses a second sentence for usage context, with no wasted words or redundant information, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (list operation with no output schema), annotations covering safety, and full schema coverage, the description is mostly complete. However, it could better address output expectations (e.g., format implications) and integration with sibling tools for a higher score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all parameters. The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining how 'query' interacts with 'bookable experiences,' so it meets the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List all cities') and resource ('where tickadoo has bookable experiences'), distinguishing it from siblings like 'get_city_guide' or 'search_experiences' by focusing on destination discovery rather than detailed information or filtering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides clear context for when to use the tool ('to help users discover available destinations'), but does not explicitly mention when not to use it or name specific alternatives among the siblings, such as 'search_experiences' for more detailed queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

plan_itineraryA
Read-onlyIdempotent
Inspect

Use this when the user wants a multi-day plan for a single city. Returns morning, afternoon, and evening slots per day, with geographic clustering, category diversity, and a running total cost.

Parameters (JSON Schema)
city (required): City slug.
days (required): Number of days to plan (1-7).
pace (optional): Itinerary density. relaxed = 1-2 stops per day, packed = 4-5.
budget (optional): Budget band per day per person.
format (optional, default: json): Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces.
audience (optional): Target audience. Currently 'family' is the strongest signal in the catalogue (resolves via the 'family' tag, ~3,200 products). Other values are best-effort and may be sparsely populated; for guaranteed family-suitable results prefer tags=['family'] in addition to or instead of this filter.
language (optional): BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
interests (optional): Free-text interest seed, e.g., "history, food, river".
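Days is the only bounded required input (1-7), and the audience note flags 'family' as the strongest signal; a hypothetical sketch of how a client might validate before calling (names invented):

```python
def itinerary_args(city, days, pace="relaxed", audience=None):
    # "days" is documented as 1-7; reject out-of-range values early.
    if not 1 <= days <= 7:
        raise ValueError("days must be between 1 and 7")
    args = {"city": city, "days": days, "pace": pace}
    if audience:
        # Per the schema note, "family" is the strongest audience signal;
        # other values are best-effort and may be sparsely populated.
        args["audience"] = audience
    return args
```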
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, openWorldHint, idempotentHint, and destructiveHint, so the safety profile is clear. The description adds context about the output structure (slots per day, clustering, cost) but does not go beyond annotations for behavioral traits like rate limits or failure modes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that immediately states the use case, then clearly lists output characteristics. Every word earns its place; no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (8 parameters, no output schema), the description adequately explains the output format (morning/afternoon/evening slots, clustering, cost). It does not cover edge cases or prerequisites, but the schema covers parameters, so completeness is solid.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes all parameters with 100% coverage, so the description does not need to repeat them. The description provides no additional parameter-level detail beyond what is in the schema, so it meets the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool is for multi-day plans for a single city, distinguishing it from sibling tools like get_city_guide or get_date_night. The verb 'plan' and resource 'itinerary' are clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description tells the agent when to use it ('when the user wants a multi-day plan for a single city'), which is clear. However, it does not explicitly mention alternatives or when not to use it, so it misses the 'when-not' guidance that would make it a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recommend_experiencesA
Read-onlyIdempotent
Inspect

Use this when the user describes what they want in natural language rather than naming a category. Parses the query for audience, mood, constraints, occasion, and time of day, then returns scored recommendations with a reason field explaining the match.

Parameters (JSON Schema)

Name | Required | Description
pax | No | Number of people. Default 2.
city | No | Optional city slug. If omitted, the city is parsed from the query.
date | No | Optional ISO date (YYYY-MM-DD) for availability-aware ranking.
limit | No | Number of recommendations to return (1-20). Default 5.
query | Yes | Natural-language preference, e.g., "romantic evening in Paris under 100 euros" or "rainy day in Edinburgh with kids 8 and 12".
format | No | Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces. Default: json.
language | No | BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
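As a concrete sketch, a recommend_experiences argument payload might look like the following. The parameter names come from the table above; the specific values are illustrative assumptions, not taken from the server:

```python
# Hypothetical argument payload for recommend_experiences.
# Parameter names are from the schema; the values are made up.
args = {
    "query": "romantic evening in Paris under 100 euros",  # the only required field
    "city": "paris",       # optional; parsed from the query when omitted
    "pax": 2,              # number of people; default 2
    "date": "2026-02-14",  # ISO date enables availability-aware ranking
    "limit": 5,            # 1-20, default 5
    "format": "json",      # 'text' would return a narrative paragraph instead
    "language": "fr-FR",   # BCP-47 code; defaults to English
}

# Minimal client-side checks mirroring the documented constraints.
assert "query" in args
assert 1 <= args["limit"] <= 20
assert args["format"] in ("json", "text")
```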
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds value by explaining that the tool returns scored recommendations with a reason field and that it parses query elements like audience, mood, constraints, occasion, and time of day. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single focused paragraph that front-loads the use case, then explains functionality and output. Every sentence adds value, though it could be slightly more compact.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 7 parameters with 100% schema coverage and no output schema, the description provides sufficient context on the tool's behavior and output structure (scored recommendations with reason). It does not detail the full return shape but is adequate for understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%. The description adds context for the key 'query' parameter by specifying what it parses (audience, mood, constraints, occasion, time of day), which goes beyond the schema's example. For other parameters, the schema already provides adequate descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool should be used when a user describes their preferences in natural language, and it explains that it parses the query for audience, mood, constraints, occasion, and time of day, then returns scored recommendations with a reason. This is a specific verb-resource pair that also distinguishes it from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use this when the user describes what they want in natural language rather than naming a category,' providing clear guidance on when to use. It does not mention when not to use or name specific alternatives, but the context of many sibling tools implies the distinction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

render_experience_cards (A)
Read-only · Idempotent

Use this when search_experiences has already returned product IDs and the user needs those results rendered visually as experience cards. Call this after search, using the stable product IDs only.

Parameters (JSON Schema)

Name | Required | Description
render_type | Yes | Visual layout requested for the widget.
experience_ids | Yes | Stable product IDs from search_experiences. Pass IDs only, never full product rows.
render_context | Yes |
idempotency_key | No | Optional UUID. Reuse for identical card render responses within five minutes.
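A hedged payload sketch follows. Note that the schema describes neither valid values for render_type nor render_context at all, so those values are pure guesses for illustration:

```python
import uuid

# Hypothetical render_experience_cards payload. The 'render_type' and
# 'render_context' values are assumptions; the schema provides no enum.
args = {
    "render_type": "carousel",                   # assumed layout value
    "experience_ids": ["prod_123", "prod_456"],  # stable IDs from search_experiences only
    "render_context": "chat",                    # undocumented in the schema; assumed value
    "idempotency_key": str(uuid.uuid4()),        # optional; reuse within five minutes
}

# Pass IDs only, never full product rows.
assert all(isinstance(i, str) for i in args["experience_ids"])
```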
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds minimal behavioral context (e.g., using stable IDs) but does not significantly expand beyond annotation-provided safety traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose and usage context. No wasted words; every sentence serves a clear function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple, and the description covers its core function and prerequisites. Lacks details on error handling or return behavior, but given the readOnlyHint and idempotence, the context is largely adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 75%, and the description does not elaborate on parameters beyond what the schema provides. The schema itself is well-described, so the description adds marginal value here.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: rendering visual cards from product IDs returned by search_experiences. It specifies the prerequisite (after search) and the input type (stable product IDs), distinguishing it from sibling search tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('after search') and implies when not to (before search). It provides clear context but does not name specific alternative tools, though the workflow is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_by_mood (A)
Read-only · Idempotent

Use this when the user describes the feeling or vibe they want rather than a category, such as romantic, relaxing, adventurous, family fun, foodie, luxury, or rainy day. Maps the mood to preset search filters and returns matching experiences.

Parameters (JSON Schema)

Name | Required | Description
city | Yes | City name or slug, e.g. "london", "new-york", or "paris".
mood | Yes | Mood preset. Valid values: adventurous, romantic, relaxing, family_fun, cultural, thrill_seeking, foodie, budget_friendly, luxury, rainy_day.
format | No | Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces. Default: json.
language | No | BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
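The mood presets form a closed set, so a caller can validate before issuing the request. A sketch, with an illustrative payload (the city and mood values are examples only):

```python
# The ten mood presets listed in the schema.
VALID_MOODS = {
    "adventurous", "romantic", "relaxing", "family_fun", "cultural",
    "thrill_seeking", "foodie", "budget_friendly", "luxury", "rainy_day",
}

# Hypothetical call payload.
args = {
    "city": "edinburgh",  # required: city name or slug
    "mood": "rainy_day",  # required: must be one of the presets
    "format": "text",     # narrative paragraph for a chat surface
}

assert args["mood"] in VALID_MOODS
```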
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds valuable context: it explains the tool maps moods to multiple filters (audience, tag, setting, rating, price) and supports 40+ languages for localized URLs, which are behavioral details not in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first explains the core functionality and mood mapping, the second covers language support and usage examples. Every sentence adds value with zero wasted words, making it easy to scan.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, 100% schema coverage, no output schema), the description is mostly complete. It explains the unique mood-based search approach, usage context, and language support. However, it doesn't detail the output format or result structure, which could be helpful since there's no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal param semantics: it mentions mood mapping to filters and language support for booking URLs, but doesn't provide syntax or format details beyond what the schema already covers. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches 'tickadoo experiences by emotional intent instead of category' and maps moods to specific filters, distinguishing it from sibling tools like 'search_experiences' which likely uses different criteria. It specifies the verb 'search' and resource 'tickadoo experiences' with a unique approach.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states 'Use when a user says things like...' with concrete examples (e.g., 'something romantic', 'luxury options in Paris'), providing clear when-to-use guidance. It implicitly contrasts with category-based searches by emphasizing emotional intent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_experiences (A)
Read-only · Idempotent

Use this when the user names a city plus a category, query, or filter set and wants a ranked list of bookable experiences. Returns products with name, slug, city, category, price, rating, review count, and tags. Pair with get_show_details for richer fields.

Parameters (JSON Schema)

Name | Required | Description
city | No | City slug, e.g., "london", "new-york", "paris".
tags | No | Filter by experience tags. Multiple tags are AND-combined (every tag must match) so adding more tags narrows the result set. Use lowercase singular forms; matching is substring-based against the tag array. The canonical tag taxonomy in the catalogue, ordered by frequency, is: 'tour', 'attraction', 'historical', 'outdoor', 'museum', 'family', 'landmark', 'adventure', 'show', 'indoor', 'food & drink', 'transport', 'theatre', 'concert', 'theme park', 'cruise', 'nightlife', 'comedy', 'musical', 'city pass', 'sport', 'dance', 'aquarium', 'zoo', 'gallery', 'opera', 'wellness', 'water park', 'workshop', 'religious', 'festival'. Examples: ['museum'] for museums; ['family','indoor'] for indoor family attractions; ['outdoor','adventure'] for outdoor adventure activities. Other free-form values may match by substring but are not guaranteed.
limit | No | Number of results to return (1-50). Default 10.
query | No | Free-text query matched against title, venue, and description.
format | No | Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces. Default: json.
category | No | Category slug, e.g., "theatre", "tours", "museums", "attractions".
language | No | BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
max_price | No | Maximum price per person in the listed currency.
min_rating | No | Minimum aggregate rating on a 0-5 scale.
popular_only | No | DEPRECATED — use restrict_to_top_rated. Alias for the destructive top-rated filter (rating >= 4.5 AND review_count >= 100). The name implies a sort hint but it is a hard WHERE clause. Existing callers continue to work unchanged.
indoor_outdoor | No | Setting filter. 'indoor' / 'outdoor' resolve via the products.tags column (~2,100 indoor and ~5,000 outdoor products are tagged in the catalogue). 'either' applies no filter.
min_review_count | No | Minimum number of customer reviews. Use to filter out new or sparsely-reviewed products.
restrict_to_top_rated | No | DESTRUCTIVE FILTER. When true, hard-restricts results to products with rating >= 4.5 AND review_count >= 100. This is a WHERE clause, not a sort hint — anything failing the floor is dropped from the result set entirely. Use only when the caller genuinely wants to exclude lower-rated or lesser-reviewed products. For ranking-by-popularity without exclusion, do not set this; results are already ordered by rating and review count.
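The tags and restrict_to_top_rated semantics above can be sketched as a client-side predicate. This mirrors only what the parameter docs state, under the assumption that they are accurate; it is not the server's actual implementation:

```python
def matches(product, tags=None, restrict_to_top_rated=False):
    """Illustrative mirror of the documented filter semantics."""
    # Tags are AND-combined: every requested tag must match, and
    # matching is substring-based against the product's tag array.
    for t in tags or []:
        if not any(t in pt for pt in product["tags"]):
            return False
    # restrict_to_top_rated is a hard WHERE clause, not a sort hint:
    # anything below the floor is dropped from the result set entirely.
    if restrict_to_top_rated and not (
        product["rating"] >= 4.5 and product["review_count"] >= 100
    ):
        return False
    return True

# Made-up sample products for illustration.
museum = {"tags": ["museum", "indoor", "family"], "rating": 4.7, "review_count": 2300}
new_tour = {"tags": ["tour", "outdoor"], "rating": 4.9, "review_count": 12}

assert matches(museum, tags=["family", "indoor"])         # AND-combined tags
assert not matches(new_tour, restrict_to_top_rated=True)  # fails the review-count floor
```

The key design point the docs stress: a high rating alone does not survive restrict_to_top_rated; the review-count floor applies as well.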
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, and destructiveHint=false, indicating a safe, read-only operation. The description adds valuable context beyond this: it mentions support for 40+ languages with localized booking URLs, which is useful behavioral information not covered by annotations. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core functionality and listing key filters. Every sentence adds value, such as the language support and usage guidelines, with no wasted words. It could be slightly more structured but remains efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (21 parameters) and rich schema with 100% coverage, the description provides sufficient context. It explains the tool's purpose, usage, and key features like language support. However, without an output schema, it doesn't detail return values, leaving a minor gap in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 21 parameters. The description lists optional filters (e.g., category, price, date, sorting) but doesn't add syntax or format details beyond what the schema provides. It offers a high-level overview, but the schema carries the heavy lifting, warranting a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for 'shows, theatre, events, tours and experiences in a specific city on tickadoo,' providing a specific verb ('search') and resource ('experiences'). It distinguishes from siblings by focusing on comprehensive search with multiple filters, unlike more specific tools like 'get_experience_details' or 'whats_on_tonight'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use when a user asks what to do in a city, wants event/show recommendations, or is looking for tickets.' This provides clear context and distinguishes it from alternatives like 'get_city_guide' or 'search_by_mood' by emphasizing its role in finding bookable experiences.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_local_experiences (A)
Read-only · Idempotent

Use this when the user mentions a place, neighbourhood, landmark, or area but does not give exact coordinates. Examples: 'near the Louvre', 'in Trastevere', 'around Times Square', "walking distance from St Paul's Cathedral". Returns experiences matched first by exact venue/neighbourhood, then by city centre fallback. Do not use for general city-wide search; use search_experiences for that.

Parameters (JSON Schema)

Name | Required | Description
city | No | City slug or name to disambiguate. Recommended for any place_hint that could exist in multiple cities.
tags | No | Filter by experience tags. Multiple tags are AND-combined (every tag must match) so adding more tags narrows the result set. Use lowercase singular forms; matching is substring-based against the tag array. The canonical tag taxonomy in the catalogue, ordered by frequency, is: 'tour', 'attraction', 'historical', 'outdoor', 'museum', 'family', 'landmark', 'adventure', 'show', 'indoor', 'food & drink', 'transport', 'theatre', 'concert', 'theme park', 'cruise', 'nightlife', 'comedy', 'musical', 'city pass', 'sport', 'dance', 'aquarium', 'zoo', 'gallery', 'opera', 'wellness', 'water park', 'workshop', 'religious', 'festival'. Examples: ['museum'] for museums; ['family','indoor'] for indoor family attractions; ['outdoor','adventure'] for outdoor adventure activities. Other free-form values may match by substring but are not guaranteed.
limit | No |
format | No | Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces. Default: json.
date_to | No |
language | No | BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
date_from | No |
place_hint | Yes | Free-text place reference: a landmark, neighbourhood, monument, station, or street name. E.g. 'Louvre Museum', 'Trastevere', 'Soho', 'Times Square'. Required.
radius_hint | No | Optional: how broadly to interpret the place_hint. 'walking' is ~1km, 'short_drive' is ~5km, 'city_wide' falls back to the whole city. Default 'walking'.
neighbourhood | No | Optional: a known neighbourhood within the city to narrow results.
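The documented radius_hint interpretation, plus a hypothetical payload, can be sketched as follows (the km values come straight from the schema text; the place and city values are examples only):

```python
# Documented radius_hint interpretation; None means whole-city fallback.
RADIUS_KM = {"walking": 1.0, "short_drive": 5.0, "city_wide": None}

# Hypothetical search_local_experiences payload.
args = {
    "place_hint": "Trastevere",  # required free-text place reference
    "city": "rome",              # recommended: many hints exist in several cities
    "radius_hint": "walking",    # ~1 km; also the default when omitted
    "limit": 10,
}

assert args["radius_hint"] in RADIUS_KM
```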
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses the matching order behavior (exact venue/neighbourhood first, then city centre fallback), going beyond what annotations provide. Annotations already indicate read-only and idempotent, so no contradiction. However, it doesn't detail edge cases like ambiguous place_hints or pagination, which would improve transparency further.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences long, starts with the core usage instruction, provides examples, and includes a clear exclusion. Every sentence adds value with no redundancy. It is perfectly front-loaded and concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 10 parameters and no output schema, the description adequately explains the core functionality and differentiation from siblings. It covers the main use case well. However, it could be more complete by addressing edge cases like invalid place_hints or the interaction with radius_hint, which are not mentioned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 70% schema description coverage, the schema already documents most parameters well. The description adds context for the overall search logic and place_hint examples, but doesn't add significant meaning beyond the schema for individual parameters. Some parameters like radius_hint and city are implied but not explained in the description. Hence a baseline score is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: searching for experiences near a specific place, landmark, or neighborhood without exact coordinates. It provides concrete examples and explicitly distinguishes itself from the sibling tool 'search_experiences' by stating 'Do not use for general city-wide search; use search_experiences for that.' This is a specific verb+resource with clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: when to use (user mentions a place without coordinates), when not to use (general city-wide search), and gives examples. It also explains fallback behavior ('first by exact venue/neighbourhood, then by city centre fallback'). This is comprehensive and leaves no ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whats_on_tonight (A)
Read-only

Use this when the user asks what is bookable in a city tonight. Returns experiences with start times tonight, sorted by soonest first; events that have already started are filtered out. Each row includes start_time, countdown_text, venue, and a short urgency hint.

Parameters (JSON Schema)

Name | Required | Description
city | Yes | City slug.
format | No | Response shape. 'json' returns structured records; 'text' returns the same content rendered as a short narrative paragraph for chat surfaces. Default: json.
category | No | Optional category slug to narrow results.
language | No | BCP-47 language code for human-readable fields (e.g., 'en', 'fr-FR'). Defaults to English when omitted.
max_results | No | Default 10, max 30.
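The described filtering and ordering (drop events that have already started, sort by soonest start time) can be sketched client-side. This is an assumption about behaviour inferred from the description, with made-up sample rows, not the server's code:

```python
from datetime import datetime

def whats_on_tonight(rows, now):
    """Illustrative mirror of the documented behaviour: filter out
    already-started events, then sort by soonest start time."""
    upcoming = [r for r in rows if r["start_time"] > now]
    return sorted(upcoming, key=lambda r: r["start_time"])

now = datetime(2026, 1, 9, 19, 0)
rows = [
    {"name": "late show", "start_time": datetime(2026, 1, 9, 21, 30)},
    {"name": "matinee", "start_time": datetime(2026, 1, 9, 14, 0)},  # already started
    {"name": "evening show", "start_time": datetime(2026, 1, 9, 19, 30)},
]

tonight = whats_on_tonight(rows, now)
assert [r["name"] for r in tonight] == ["evening show", "late show"]
```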
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains automatic filtering (today-only, removes already-started events), sorting logic (by soonest start time with boosts), urgency signals from inventory data, and multi-language support. Annotations cover read-only and non-destructive aspects, but the description enriches understanding of the tool's operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: the first sentence states the core purpose, followed by key features in a logical flow (filtering, sorting, language support), and ends with usage examples. Every sentence adds value without redundancy, making it easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, no output schema), the description is largely complete: it covers purpose, usage, behavioral traits, and language support. However, it doesn't detail the output format or error handling, which could be helpful since there's no output schema. Annotations provide safety context, but some operational gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description adds minimal parameter semantics—it mentions language support for localized URLs but doesn't explain other parameters beyond what the schema provides. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find bookable experiences happening later today in a city.' It specifies the verb ('find'), resource ('bookable experiences'), and temporal scope ('later today'), and distinguishes it from siblings like 'get_whats_on_this_week' by focusing on today's events with urgency features.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance with examples: 'Use for concierge-style requests like "what's on tonight in London?" or "any shows tonight in Paris?"' It also implicitly distinguishes from siblings by focusing on tonight's events with urgency signals, unlike broader tools like 'search_experiences' or 'get_whats_on_this_week'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
