
Server Details

Search and book theatre, attractions, and tours across 681 cities. 13,090+ products.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: tickadoo/tickadoo-mcp
GitHub Stars: 0
Server Listing: tickadoo

Tool Descriptions (Grade: A)

Average 4.3/5 across 14 of 14 tools scored.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes, but there is some overlap between 'get_whats_on_this_week' and 'whats_on_tonight' (both cover upcoming events, one weekly and one daily) and between 'search_experiences' and 'find_nearby_experiences' (both search for experiences, one city-based and one location-based). The descriptions help clarify, but an agent might occasionally pick the wrong tool from these pairs.

Naming Consistency: 5/5

All tool names follow a consistent snake_case pattern with a verb_noun structure (e.g., 'check_availability', 'compare_experiences', 'get_city_guide'). This predictability makes the set easy to navigate and understand at a glance.

Tool Count: 5/5

With 14 tools, the count is well-scoped for a travel and experience booking domain. Each tool addresses a specific use case, from availability checks to city guides and family planning, without feeling bloated or insufficient.

Completeness: 4/5

The toolset covers a broad range of travel planning needs, including search, comparison, details, availability, and contextual tools like transfers and tips. A minor gap is the lack of direct booking or payment tools, but the descriptions mention booking URLs and intent tokens, suggesting agents can work around this for core workflows.

Available Tools

14 tools
check_availability (Grade: A)
Read-only

Quick date-specific availability check for a specific tickadoo experience. Returns availability for one date only, plus party total, booking URL, and Ghost Checkout intent-token payload metadata. Supports 40+ languages — pass a language code (e.g. 'de', 'fr', 'es', 'ja') to get localised booking URLs. Use when the user asks "is this available on Saturday?" or wants a fast price check without the full experience detail payload.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| date | Yes | Date to check in ISO format YYYY-MM-DD (e.g. '2026-04-05') | |
| slug | Yes | Tickadoo slug or booking path, e.g. 'london-dungeon-tickets' or '/london/london-dungeon-tickets' | |
| format | No | Response format: text (default) or json | text |
| language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en |
| party_size | No | Number of guests or tickets to price (default 2, max 50) | |
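For an agent wiring this up, the parameter table above translates directly into an MCP tools/call request. A minimal sketch of such a payload (the argument values are illustrative, not a real booking; the envelope follows the standard MCP JSON-RPC shape):

```python
import json

# Hypothetical tools/call request for check_availability; date, slug and
# party_size are example values taken from the documented formats.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_availability",
        "arguments": {
            "date": "2026-04-05",              # required, ISO YYYY-MM-DD
            "slug": "london-dungeon-tickets",  # required, tickadoo slug
            "language": "de",                  # optional, localises booking URLs
            "party_size": 4,                   # optional, default 2, max 50
        },
    },
}
print(json.dumps(request, indent=2))
```

Only `date` and `slug` are required; everything else can be omitted to take the documented defaults.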
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, non-destructive, and open-world characteristics, the description specifies that this is a 'quick' check, returns 'availability for one date only', includes 'booking URL and Ghost Checkout intent-token payload metadata', and supports '40+ languages'. This provides practical implementation details the agent needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with two sentences that each serve distinct purposes: the first explains what the tool does and returns, the second provides usage guidance with concrete examples. Every element adds value with zero wasted words, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only query tool with comprehensive annotations and schema coverage, the description provides excellent context about what information is returned (availability, party total, booking URL, metadata) and when to use it. The only minor gap is the lack of output schema, but the description compensates by specifying the return types. The tool's relatively simple purpose is well-covered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already thoroughly documents all parameters. The description adds some semantic context by mentioning '40+ languages' and providing language examples, and clarifying this is for 'one date only', but doesn't significantly enhance parameter understanding beyond what the schema provides. The baseline of 3 is appropriate given the comprehensive schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('check', 'returns') and resources ('availability for a specific tickadoo experience'). It distinguishes from siblings by emphasizing it's a 'quick date-specific availability check' that returns limited information compared to more comprehensive tools like get_experience_details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance with concrete examples: 'Use when the user asks "is this available on Saturday?" or wants a fast price check without the full experience detail payload.' This clearly indicates when to use this tool versus alternatives like get_experience_details that would provide more comprehensive information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_experiences (Grade: A)
Read-only

Compare 2 to 5 tickadoo experiences side-by-side. Returns winner callouts for best_value, highest_rated, most_popular, and best_for_families, plus key differences across price, duration, reviews, accessibility, and cancellation policy. Supports 40+ languages — pass a language code (e.g. 'de', 'fr', 'es', 'ja') to get localised booking URLs.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| slugs | Yes | Array of 2-5 tickadoo slugs or booking paths to compare side-by-side. | |
| format | No | Response format: text (default) or json | text |
| language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en |
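The 2-to-5 slug constraint is easy to enforce on the client side before calling the tool. A sketch of such a guard (a hypothetical helper; the server applies its own validation):

```python
def validate_slugs(slugs):
    # Mirror the documented constraint: 2 to 5 slugs or booking paths.
    if not 2 <= len(slugs) <= 5:
        raise ValueError("compare_experiences takes between 2 and 5 slugs")
    return slugs

# Both bare slugs and '/city/slug' booking paths are accepted forms.
print(validate_slugs(["london-dungeon-tickets", "/london/london-eye-tickets"]))
```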
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds valuable context beyond annotations: it specifies the comparison dimensions (price, duration, reviews, etc.), winner categories, and language support for localized URLs, which helps the agent understand the tool's behavior and output structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by key output details and parameter guidance. Every sentence adds value without redundancy, and it efficiently covers comparison scope, return values, and language support in a compact form.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (comparing multiple experiences with structured outputs), annotations cover safety and scope well, and schema coverage is complete. The description adds necessary context about comparison dimensions and language support. However, without an output schema, it could more explicitly detail the return structure (e.g., format of 'winner callouts'), though it hints at this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents parameters. The description adds some meaning by explaining the language parameter supports '40+ languages' and gives examples, but does not provide additional semantics beyond what the schema already covers for 'slugs' or 'format'. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('compare 2 to 5 tickadoo experiences side-by-side') and resource ('tickadoo experiences'), distinguishing it from siblings like 'get_experience_details' (single experience) or 'search_experiences' (searching rather than comparing). It precisely defines the scope and output.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (comparing multiple experiences for evaluation), but does not explicitly state when not to use it or name alternatives among siblings. It implies usage through the comparison focus but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_nearby_experiences (Grade: A)
Read-only

Find shows, events and experiences near a geographic location on tickadoo. Supports optional date filtering with dateFrom/dateTo. Supports 40+ languages — pass a language code (e.g. 'de', 'fr', 'es', 'ja') to get localised booking URLs. Use when a user shares their location or asks for things to do near them.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| sort | No | Sort order | relevance |
| tags | No | Optional comma-separated tag filter. Results must match at least one tag. Valid tags: Musical, WestEnd, WalkingTour, FoodTour, Museum, Outdoor, HiddenGem, MustSee, Bestseller, Cruise, DayTrip, SkipTheLine, HopOnHopOff, WaterSport, Spa, BikeTour, Adventure, GuidedTour, Attraction, Transfer, SelfGuided, KidsAttraction, Show, Concert, Helicopter, WhaleWatching, Dining, Workshop, NightLife, Safari, Evening, Morning, Seasonal | |
| dateTo | No | Optional end date filter in ISO date format YYYY-MM-DD (e.g. '2026-03-28'). Must be used together with dateFrom. | |
| format | No | Response format: text (default) or json | text |
| setting | No | Indoor/outdoor filter | |
| audience | No | Audience filter: Family, Couples, AdultsOnly, Kids, Seniors, Groups, Solo | |
| dateFrom | No | Optional start date filter in ISO date format YYYY-MM-DD (e.g. '2026-03-27'). Must be used together with dateTo. | |
| language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en |
| latitude | Yes | Latitude | |
| longitude | Yes | Longitude | |
| radius_km | No | Search radius in km (default 25) | |
| min_rating | No | Minimum rating (e.g. 4.5) | |
| max_results | No | Maximum number of experiences to return (default 10, max 50) | |
| max_duration | No | Max duration in minutes | |
| min_duration | No | Min duration in minutes | |
| physical_level | No | Physical difficulty filter | |
| free_cancellation | No | Filter for free cancellation | |
| available_language | No | Language filter (ISO 639-1 code) | |
| wheelchair_accessible | No | Filter for wheelchair-accessible experiences | |
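The dateFrom/dateTo pairing rule in the table is an easy first-call mistake, so a client can check it before sending the request. A sketch of such a guard (hypothetical helper; ISO YYYY-MM-DD strings order correctly under plain string comparison):

```python
def check_date_filter(date_from=None, date_to=None):
    # The schema requires dateFrom and dateTo together, or neither.
    if (date_from is None) != (date_to is None):
        raise ValueError("dateFrom and dateTo must be used together")
    # ISO dates sort lexicographically, so string comparison is safe here.
    if date_from is not None and date_from > date_to:
        raise ValueError("dateFrom must not be after dateTo")

check_date_filter("2026-03-27", "2026-03-28")  # valid pair: no error
check_date_filter()                            # no filter at all: no error
```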
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, and destructiveHint=false, so the agent knows this is a safe, non-destructive query operation. The description adds valuable context beyond annotations by mentioning support for 40+ languages and localized booking URLs, which are behavioral traits not captured in the structured annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: purpose statement, key feature highlights (date filtering, language support), and usage guideline. Every sentence earns its place with no redundant information, making it appropriately sized and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (19 parameters) and lack of output schema, the description provides good contextual completeness. It covers the core purpose, key optional features (date filtering, language support), and clear usage guidelines. While it doesn't explain return values (which would be helpful without an output schema), it adequately addresses what's needed for a query tool with comprehensive annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 19 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, only briefly mentioning date filtering and language support. It doesn't provide additional syntax, format details, or usage examples that aren't already in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Find') and resource ('shows, events and experiences near a geographic location on tickadoo'), making the purpose specific. It distinguishes from siblings by focusing on location-based discovery rather than availability checking, city guides, or other specialized searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use when a user shares their location or asks for things to do near them.' This provides clear context for invocation and distinguishes it from sibling tools that serve different purposes like checking availability or getting details.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_city_guide (Grade: A)
Read-only

Return a curated city overview for trip planning. Summarises a destination with top highlights, category breakdown, price range, best_for suggestions, seasonal guidance, insider tips, and audience/tag signals. Supports 40+ languages — pass a language code (e.g. 'de', 'fr', 'es', 'ja') to get localised booking URLs. Use when a user asks "tell me about things to do in Prague" or wants a pre-arrival city briefing instead of a raw search.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| city | Yes | City name or slug (e.g. 'london', 'prague', 'new-york', 'rome') | |
| format | No | Response format: text (default) or json | text |
| language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds valuable context beyond this: it specifies the tool supports 40+ languages for localized booking URLs, which is not captured in annotations, enhancing behavioral understanding without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose and key features, followed by usage guidelines. Every sentence adds value without redundancy, making it efficient and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters, 100% schema coverage, no output schema) and rich annotations, the description is largely complete. It covers purpose, usage, and key features like language support, though it could briefly mention the response format options or audience signals more explicitly for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents parameters like city, format, and language. The description adds minimal semantic value beyond the schema, such as mentioning language support for booking URLs, but does not significantly enhance parameter understanding, warranting the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('return', 'summarises') and resources ('curated city overview', 'destination'), and distinguishes it from siblings by emphasizing it provides a 'pre-arrival city briefing instead of a raw search', unlike search-oriented tools like search_experiences or find_nearby_experiences.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('when a user asks "tell me about things to do in Prague" or wants a pre-arrival city briefing instead of a raw search'), providing clear context and distinguishing it from alternatives like raw search tools, with no misleading guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_experience_details (Grade: A)
Read-only

Get detailed availability, venue details, and images for a specific tickadoo experience. Prefer passing the tickadoo slug or booking URL path; provider and provider_id are legacy fallback inputs. Supports 40+ languages — pass a language code (e.g. 'de', 'fr', 'es', 'ja') to get localised booking URLs.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| days | No | Number of days of availability to fetch (default 30, max 180) | |
| slug | No | Preferred: tickadoo slug or path, e.g. 'london-dungeon-tickets' or '/london/london-dungeon-tickets' | |
| format | No | Response format: text (default) or json | text |
| language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en |
| provider | No | Legacy fallback only: hidden provider name used internally | |
| provider_id | No | Legacy fallback only: hidden provider-specific product ID | |
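The slug-first, legacy-fallback preference in the description can be captured in a small argument builder. A sketch (hypothetical helper; the function name and raised error are not part of the server's API):

```python
def details_arguments(slug=None, provider=None, provider_id=None):
    # Prefer the slug; fall back to the legacy provider pair only when
    # no slug is available, as the tool description recommends.
    if slug:
        return {"slug": slug}
    if provider and provider_id:
        return {"provider": provider, "provider_id": provider_id}
    raise ValueError("supply a slug, or both provider and provider_id")

print(details_arguments(slug="london-dungeon-tickets"))
# {'slug': 'london-dungeon-tickets'}
```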
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains the 40+ language support for localized booking URLs and clarifies the preferred vs. legacy input parameters. While annotations already declare readOnlyHint=true and destructiveHint=false, the description provides practical implementation details that help the agent use the tool correctly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first sentence, followed by specific implementation guidance. Every sentence earns its place by providing essential context about parameter preferences and language support without any redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with comprehensive schema documentation and no output schema, the description provides excellent contextual completeness. It covers the tool's purpose, usage guidelines, parameter preferences, and language support. The only minor gap is not explicitly mentioning the response format options, though this is covered in the schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all parameters thoroughly. The description adds some context about parameter preferences ('Prefer passing the tickadoo slug... provider and provider_id are legacy fallback inputs') and language support, but doesn't provide significant additional semantic meaning beyond what's in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get detailed availability, venue details, and images') and resource ('for a specific tickadoo experience'). It distinguishes from siblings by focusing on detailed information for a single experience rather than searching, comparing, or listing multiple experiences.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Prefer passing the tickadoo slug or booking URL path' and identifies fallback inputs. It distinguishes from alternatives by specifying this is for detailed information about a specific experience, not for searching or comparing multiple experiences like 'search_experiences' or 'compare_experiences'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_family_day (Grade: A)
Read-only

Build a full family day in one city with a morning activity, lunch tip, afternoon attraction, and optional evening stop. Uses kids_ages for age-aware filtering, prefers wheelchair-friendly options when toddlers make stroller access likely, and clusters the day geographically to reduce travel. Supports 40+ languages — pass a language code (e.g. 'de', 'fr', 'es', 'ja') to get localised booking URLs.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| city | Yes | City name or slug, such as 'london', 'new-york', or 'paris'. | |
| date | No | Optional ISO date YYYY-MM-DD for building the day around one travel date. | |
| budget | No | Optional total day budget in the local currency for all selected activities. | |
| format | No | Response format: text (default) or json | text |
| language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en |
| kids_ages | No | Optional array of child ages. Under 6 prefers easy and shorter stops, ages 6-12 prefer interactive or outdoor options, teens can handle more adventurous picks, and any age under 3 requires wheelchair-accessible options for stroller-friendly planning. | |
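The kids_ages rules quoted in the table amount to a small age-to-preference mapping. One way to sketch it (the server's real selection logic is not published; this only restates the documented rules as boolean flags):

```python
def age_preferences(kids_ages):
    # Restate the documented kids_ages rules as filter preferences.
    return {
        "shorter_stops": any(a < 6 for a in kids_ages),           # under 6
        "interactive_or_outdoor": any(6 <= a <= 12 for a in kids_ages),
        "adventurous_ok": bool(kids_ages) and all(a >= 13 for a in kids_ages),
        "wheelchair_accessible": any(a < 3 for a in kids_ages),   # stroller access
    }

print(age_preferences([2, 9]))
```

A toddler-and-tween party like `[2, 9]` would flag shorter stops, interactive options, and wheelchair access, while ruling out adventurous picks.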
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, indicating a safe, non-destructive, and open-ended operation. The description adds valuable behavioral context beyond annotations: it explains how 'kids_ages' influences filtering (e.g., 'prefers wheelchair-friendly options when toddlers make stroller access likely'), mentions geographic clustering to reduce travel, and notes language support for localized booking URLs. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first outlines the core functionality and key features, and the second specifies language support. Every sentence adds value without redundancy, making it front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (6 parameters, no output schema), the description is largely complete. It covers the tool's purpose, key behavioral traits (e.g., filtering logic, clustering), and language support. However, it does not detail the response format implications (e.g., what 'text' vs. 'json' outputs look like) or potential error cases, leaving minor gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some semantic context: it explains that 'kids_ages' is used for 'age-aware filtering' and links it to accessibility preferences, and clarifies that 'language' affects 'localised booking URLs.' However, it does not provide significant additional meaning beyond what the schema descriptions offer, aligning with the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Build a full family day in one city with a morning activity, lunch tip, afternoon attraction, and optional evening stop.' It specifies the verb ('build'), resource ('family day'), and scope ('one city'), and distinguishes from siblings by focusing on comprehensive day planning rather than individual experiences or availability checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through features like 'age-aware filtering' and 'geographic clustering,' but does not explicitly state when to use this tool versus alternatives like 'get_city_guide' or 'search_experiences.' It provides some guidance on language support but lacks clear exclusions or comparisons to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_last_minute (Grade: A)
Read-only

Find tickadoo experiences starting within the next few hours in a city. Sorts by soonest start time, adds countdown text like "starts in 47 minutes", and flags high urgency when a start is imminent or inventory is low. Supports 40+ languages — pass a language code (e.g. 'de', 'fr', 'es', 'ja') to get localised booking URLs.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| city | Yes | City name or slug, such as 'london', 'new-york', or 'paris'. | |
| hours | No | How many hours ahead to search for imminent starts (default 3, max 12). | |
| format | No | Response format: text (default) or json | text |
| language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en |
| latitude | No | Optional latitude to blend in nearby experiences close to the user's exact location. | |
| longitude | No | Optional longitude to blend in nearby experiences close to the user's exact location. | |
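Countdown strings like "starts in 47 minutes" can be reproduced with a simple formatter. A hypothetical re-implementation of that display format (only the quoted minutes wording is documented; the hour format below is an assumption):

```python
def countdown_text(minutes_until_start):
    # Format the urgency countdown the tool attaches to each result.
    if minutes_until_start < 60:
        unit = "minute" if minutes_until_start == 1 else "minutes"
        return f"starts in {minutes_until_start} {unit}"
    hours, mins = divmod(minutes_until_start, 60)
    return f"starts in {hours}h {mins}m"

print(countdown_text(47))   # starts in 47 minutes
print(countdown_text(150))  # starts in 2h 30m
```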
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only, open-world, and non-destructive operations, which the description does not contradict. The description adds valuable behavioral context beyond annotations: it specifies sorting by start time, adds countdown text, flags high urgency, and supports 40+ languages with localized URLs. This enriches understanding of the tool's behavior without repeating annotation information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by additional features. Each sentence adds distinct value: sorting, countdowns, urgency flags, and language support. There is no wasted text, making it efficient and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no output schema) and rich annotations, the description is mostly complete. It covers the purpose, key behaviors, and language support. However, it lacks details on output format implications (e.g., differences between 'text' and 'json' formats) and does not mention the optional latitude/longitude parameters for location blending, leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal parameter semantics beyond the schema, mentioning language support and urgency flags, but does not provide additional details on parameter usage or interactions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Find tickadoo experiences'), resource ('experiences'), and scope ('starting within the next few hours in a city'). It distinguishes from siblings by focusing on imminent starts with countdowns and urgency flags, unlike broader search tools like 'search_experiences' or time-specific ones like 'whats_on_tonight'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for finding experiences with imminent start times in a city, with urgency indicators. However, it does not explicitly state when NOT to use it or name specific alternatives among the sibling tools, such as 'whats_on_tonight' for evening events or 'search_experiences' for broader searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_transfer_info (A)
Read-only

Get airport, station, or port transfer options from a city's primary arrival hub to hotel coordinates. Returns taxi, tube/metro, bus, and train estimates with durations, estimated costs, and practical directions. Supports 40+ languages — pass a language code (e.g. 'de', 'fr', 'es', 'ja') to get localised booking URLs. Uses known default hubs per city, for example Heathrow for London airports or Gare du Nord for Paris stations.

Parameters (JSON Schema)
Name | Required | Description | Default
city | Yes | Supported city, such as London, Paris, New York, Amsterdam, Barcelona, Rome, or Tokyo. | -
format | No | Response format: text (default) or json | text
language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en
from_type | Yes | Arrival hub type: airport, station, or port. | -
to_latitude | Yes | Hotel latitude. | -
to_longitude | Yes | Hotel longitude. | -
Behavior: 4/5

Annotations already indicate read-only, open-world, and non-destructive behavior. The description adds valuable context beyond this: it specifies the types of transfer options returned (taxi, tube/metro, bus, train), the data included (durations, costs, directions), language support details, and the use of default hubs per city. This enhances understanding without contradicting annotations.

Conciseness: 5/5

The description is front-loaded with the core purpose, followed by supporting details in a logical flow. Every sentence adds value: the first states what the tool does, the second details the output, the third explains language support, and the fourth clarifies hub defaults. There is no wasted text.

Completeness: 4/5

Given the tool's moderate complexity (6 parameters, no output schema) and rich annotations, the description is largely complete. It covers purpose, output content, language features, and hub behavior. However, it doesn't mention potential limitations like city availability or error handling, which could be useful for an agent.

Parameters: 3/5

With 100% schema description coverage, the input schema fully documents all parameters. The description adds minimal extra semantics, such as noting that 'city' uses known default hubs (e.g., Heathrow for London) and that 'language' enables localized booking URLs, but doesn't significantly expand on parameter meanings beyond what the schema provides.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('Get airport, station, or port transfer options') and resources ('from a city's primary arrival hub to hotel coordinates'), distinguishing it from sibling tools focused on experiences, guides, and availability checks. It precisely defines the scope of what information is retrieved.

Usage Guidelines: 4/5

The description provides clear context about when to use this tool (for transfer options from arrival hubs to hotels) and includes an example of language support. However, it doesn't explicitly state when not to use it or name specific alternatives among the sibling tools, such as 'get_city_guide' for broader information.

get_travel_tips (A)
Read-only

Return hardcoded local insider advice for 20 launch cities. Covers transport, money, safety, culture, food, weather, language, and connectivity, plus emergency numbers and quick local phrases. Supports 40+ languages — pass a language code (e.g. 'de', 'fr', 'es', 'ja') to get localised booking URLs. Use when a user asks "what should I know before visiting Tokyo?" or wants a hotel pre-arrival briefing beyond generic guidebook tips.

Parameters (JSON Schema)
Name | Required | Description | Default
city | Yes | City name or slug (e.g. 'tokyo', 'paris', 'new-york', 'london') | -
topic | No | Optional topic filter: transport, money, safety, culture, food, weather, language, or connectivity | -
format | No | Response format: text (default) or json | text
language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en
Behavior: 4/5

Annotations already declare readOnlyHint=true, openWorldHint=true, and destructiveHint=false, covering safety and scope. The description adds valuable context beyond annotations: it reveals the tool is 'hardcoded' (not dynamically updated), covers '20 launch cities' (limited scope), includes 'emergency numbers and quick local phrases', and supports '40+ languages' with 'localised booking URLs'. No contradiction with annotations.

Conciseness: 5/5

The description is efficiently structured in two sentences: the first explains what the tool returns and its features, the second provides usage guidelines. Every sentence adds value without redundancy, making it front-loaded and appropriately sized for the tool's complexity.

Completeness: 4/5

Given the tool's moderate complexity (4 parameters, no output schema) and rich annotations, the description is mostly complete. It covers purpose, usage, behavioral context, and key features. However, it doesn't explicitly mention the 'topic' parameter's optional filtering or the 'format' parameter's default behavior, which could be slightly helpful for an agent.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal parameter semantics: it mentions 'pass a language code' for 'localised booking URLs' and implies the 'city' parameter corresponds to '20 launch cities', but doesn't provide additional details beyond what's in the schema. Baseline 3 is appropriate when schema does the heavy lifting.

Purpose: 5/5

The description clearly states the tool 'returns hardcoded local insider advice for 20 launch cities' with specific content areas (transport, money, safety, etc.) and distinguishes it from generic guidebook tips. It explicitly contrasts with sibling tools like 'get_city_guide' by emphasizing 'insider advice' rather than comprehensive city information.

Usage Guidelines: 5/5

The description provides explicit usage scenarios: 'when a user asks "what should I know before visiting Tokyo?" or wants a hotel pre-arrival briefing beyond generic guidebook tips.' This gives clear context for when to use this tool versus alternatives like 'get_city_guide' or other informational tools.

get_whats_on_this_week (A)
Read-only

Return a day-by-day breakdown of the top experiences happening over the next 7 days in a city, grouped into morning, afternoon, and evening slots, with weekly highlights. Supports 40+ languages — pass a language code (e.g. 'de', 'fr', 'es', 'ja') to get localised booking URLs. Use when a user says things like "what's on this week in Paris?" or "I'm in London for the next few days, what should I do each day?"

Parameters (JSON Schema)
Name | Required | Description | Default
city | Yes | City name or slug (e.g. 'london', 'new-york', 'paris', 'tokyo', 'dubai') | -
format | No | Response format: text (default) or json | text
language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en
Behavior: 4/5

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true. The description adds valuable context beyond annotations: it specifies the 7-day timeframe, day-by-day breakdown structure with time slots, weekly highlights, and the 40+ language support for localized booking URLs. No contradiction with annotations.

Conciseness: 5/5

Two well-structured sentences: first states core functionality with key details (7 days, time slots, highlights, language support), second provides usage examples. Every element earns its place with no redundancy or fluff. Front-loaded with primary purpose.

Completeness: 4/5

For a read-only tool with good annotations and full schema coverage, the description is largely complete. It covers purpose, usage, and behavioral context. The main gap is lack of output schema, but the description implies the return structure (day-by-day breakdown). Could slightly enhance by mentioning response format implications.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents parameters (city, format, language). The description adds some semantic context: it mentions '40+ languages' and gives examples (e.g., 'de', 'fr', 'es', 'ja'), but doesn't provide additional meaning beyond what's in the schema descriptions. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the specific verb ('return') and resource ('day-by-day breakdown of top experiences'), with explicit scope ('next 7 days in a city', 'grouped into morning, afternoon, evening slots', 'weekly highlights'). It distinguishes from siblings like 'whats_on_tonight' (single day) and 'get_city_guide' (general guide).

Usage Guidelines: 5/5

Explicitly states when to use: 'Use when a user says things like "what's on this week in Paris?" or "I'm in London for the next few days, what should I do each day?"' This provides clear user intent examples that differentiate it from tools like 'get_last_minute' (short-notice) or 'search_by_mood' (mood-based).

list_cities (A)
Read-only

List all cities where tickadoo has bookable experiences. Use to help users discover available destinations.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Maximum number of cities to return (default 50) | -
query | No | Optional city name or slug filter (e.g. 'new', 'paris', 'tokyo') | -
format | No | Response format: text (default) or json | text
language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en
Behavior: 3/5

Annotations already cover read-only, open-world, and non-destructive aspects, so the description adds minimal behavioral context. It mentions 'bookable experiences' and 'available destinations,' which hints at scope, but lacks details on rate limits, authentication needs, or pagination behavior beyond the schema's 'limit' parameter.

Conciseness: 5/5

The description is front-loaded with the core purpose in the first sentence and uses a second sentence for usage context, with no wasted words or redundant information, making it efficient and easy to parse.

Completeness: 4/5

Given the tool's simplicity (list operation with no output schema), annotations covering safety, and full schema coverage, the description is mostly complete. However, it could better address output expectations (e.g., format implications) and integration with sibling tools for a higher score.

Parameters: 3/5

With 100% schema description coverage, the schema fully documents all parameters. The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining how 'query' interacts with 'bookable experiences,' so it meets the baseline for high coverage.

Purpose: 5/5

The description clearly states the specific action ('List all cities') and resource ('where tickadoo has bookable experiences'), distinguishing it from siblings like 'get_city_guide' or 'search_experiences' by focusing on destination discovery rather than detailed information or filtering.

Usage Guidelines: 4/5

It provides clear context for when to use the tool ('to help users discover available destinations'), but does not explicitly mention when not to use it or name specific alternatives among the siblings, such as 'search_experiences' for more detailed queries.

search_by_mood (A)
Read-only

Search tickadoo experiences by emotional intent instead of category. Maps moods (adventurous, romantic, relaxing, family_fun, cultural, thrill_seeking, foodie, budget_friendly, luxury, rainy_day) to the most relevant audience, tag, setting, rating, and price filters, then runs a city search. Supports 40+ languages — pass a language code (e.g. 'de', 'fr', 'es', 'ja') to get localised booking URLs. Use when a user says things like "something romantic", "we need to relax", "kids are bored", or "luxury options in Paris".

Parameters (JSON Schema)
Name | Required | Description | Default
city | Yes | City name or slug (e.g. 'london', 'new-york', 'paris', 'tokyo', 'dubai') | -
mood | Yes | Mood preset. Valid values: adventurous, romantic, relaxing, family_fun, cultural, thrill_seeking, foodie, budget_friendly, luxury, rainy_day | -
format | No | Response format: text (default) or json | text
language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en
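The mood-to-filter mapping the description outlines can be pictured as a small preset table. Everything in this sketch is an assumption for illustration: the preset filter values and helper names are invented, and only the mood names and filter fields (audience, setting, rating, price, sort) come from the description above.

```python
# Hypothetical mood presets; the real search_by_mood mapping is server-side
# and undocumented here, so these filter values are illustrative guesses.
MOOD_PRESETS = {
    "romantic": {"audience": "Couples", "min_rating": 4.0},
    "rainy_day": {"setting": "Indoor"},
    "family_fun": {"audience": "Family,Kids", "category": "family"},
    "budget_friendly": {"sort": "price_low"},
}

def mood_to_search_args(city, mood, language="en"):
    # Translate a mood preset into city-search filters, as the tool
    # description says search_by_mood does internally.
    if mood not in MOOD_PRESETS:
        raise ValueError(f"unknown mood: {mood}")
    args = {"city": city, "language": language}
    args.update(MOOD_PRESETS[mood])
    return args
```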
Behavior: 4/5

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds valuable context: it explains the tool maps moods to multiple filters (audience, tag, setting, rating, price) and supports 40+ languages for localized URLs, which are behavioral details not in annotations.

Conciseness: 5/5

The description is efficiently structured in two sentences: the first explains the core functionality and mood mapping, the second covers language support and usage examples. Every sentence adds value with zero wasted words, making it easy to scan.

Completeness: 4/5

Given the tool's moderate complexity (4 parameters, 100% schema coverage, no output schema), the description is mostly complete. It explains the unique mood-based search approach, usage context, and language support. However, it doesn't detail the output format or result structure, which could be helpful since there's no output schema.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal param semantics: it mentions mood mapping to filters and language support for booking URLs, but doesn't provide syntax or format details beyond what the schema already covers. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the tool searches 'tickadoo experiences by emotional intent instead of category' and maps moods to specific filters, distinguishing it from sibling tools like 'search_experiences' which likely uses different criteria. It specifies the verb 'search' and resource 'tickadoo experiences' with a unique approach.

Usage Guidelines: 5/5

The description explicitly states 'Use when a user says things like...' with concrete examples (e.g., 'something romantic', 'luxury options in Paris'), providing clear when-to-use guidance. It implicitly contrasts with category-based searches by emphasizing emotional intent.

search_experiences (A)
Read-only

Search for shows, theatre, events, tours and experiences in a specific city on tickadoo. Supports optional free-text query matching against titles and descriptions, optional category filtering (theatre, musicals, tours, food, family, nightlife, sightseeing, concerts, comedy, shows, outdoor, workshops, cruises, sports), optional min/max price filtering in the local currency, optional date filtering with dateFrom/dateTo, and optional sorting (relevance, popular, price_low, price_high, rating, best_value). Supports 40+ languages — pass a language code (e.g. 'de', 'fr', 'es', 'ja') to get localised booking URLs. Use when a user asks what to do in a city, wants event/show recommendations, or is looking for tickets.

Parameters (JSON Schema)
Name | Required | Description | Default
city | Yes | City name or slug (e.g. 'london', 'new-york', 'paris', 'tokyo', 'dubai') | -
sort | No | Optional result ordering. Valid values: relevance, popular, price_low, price_high, rating, best_value. "popular" prioritises experiences with price, imagery, rating >= 4.0, and a description. | relevance
tags | No | Optional comma-separated tag filter. Results must match at least one tag. Valid tags: Musical, WestEnd, WalkingTour, FoodTour, Museum, Outdoor, HiddenGem, MustSee, Bestseller, Cruise, DayTrip, SkipTheLine, HopOnHopOff, WaterSport, Spa, BikeTour, Adventure, GuidedTour, Attraction, Transfer, SelfGuided, KidsAttraction, Show, Concert, Helicopter, WhaleWatching, Dining, Workshop, NightLife, Safari, Evening, Morning, Seasonal | -
query | No | Optional free-text filter matched against experience title and description (e.g. 'ghost tour', 'pizza', 'harry potter') | -
dateTo | No | Optional end date filter in ISO date format YYYY-MM-DD (e.g. '2026-03-28'). Must be used together with dateFrom. | -
format | No | Response format: text (default) or json | text
offset | No | Pagination offset. Skip this many results before returning. Use with max_results for cursor-based pagination. | -
setting | No | Optional indoor/outdoor filter. Use Indoor for rainy days. | -
audience | No | Optional comma-separated audience filter. Valid values: Family, Couples, AdultsOnly, Kids, Seniors, Groups, Solo | -
category | No | Optional category filter. Valid values: theatre, musicals, tours, food, family, nightlife, sightseeing, concerts, comedy, shows, outdoor, workshops, cruises, sports. Matching is fuzzy, so singular forms like "tour" still map to "tours" internally. | -
dateFrom | No | Optional start date filter in ISO date format YYYY-MM-DD (e.g. '2026-03-27'). Must be used together with dateTo. | -
language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en
max_price | No | Optional maximum price in the experience's local currency | -
min_price | No | Optional minimum price in the experience's local currency | -
min_rating | No | Minimum rating (e.g. 4.5 for top-rated experiences only) | -
max_results | No | Maximum number of experiences to return (default 12, max 200) | -
max_duration | No | Maximum duration in minutes (e.g. 120 for under 2 hours) | -
min_duration | No | Minimum duration in minutes (e.g. 60 for at least 1 hour) | -
physical_level | No | Filter by physical difficulty level | -
free_cancellation | No | Filter for experiences with free cancellation (true) or non-refundable (false) | -
available_language | No | Filter by language availability. ISO 639-1 code: en, es, fr, de, ja, zh, pt, it, ko, etc. | -
wheelchair_accessible | No | Filter for wheelchair-accessible experiences only | -
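Two constraints in this parameter set are easy to violate (dateFrom and dateTo must travel together, and max_results is capped at 200), so a client may want to validate arguments before calling. The `validate_search_args` helper below is a hypothetical client-side sketch of such a check, not part of the tickadoo API.

```python
# Hypothetical client-side validation of search_experiences arguments,
# enforcing constraints stated in the parameter table above.

def validate_search_args(args):
    # dateFrom and dateTo are documented as mutually required.
    if ("dateFrom" in args) != ("dateTo" in args):
        raise ValueError("dateFrom and dateTo must be used together")
    # max_results defaults to 12 and is capped at 200.
    if not 1 <= args.get("max_results", 12) <= 200:
        raise ValueError("max_results must be between 1 and 200 (default 12)")
    # An inverted price range would silently match nothing.
    if "min_price" in args and "max_price" in args \
            and args["min_price"] > args["max_price"]:
        raise ValueError("min_price cannot exceed max_price")
    return args
```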
Behavior: 4/5

Annotations already declare readOnlyHint=true, openWorldHint=true, and destructiveHint=false, indicating a safe, read-only operation. The description adds valuable context beyond this: it mentions support for 40+ languages with localized booking URLs, which is useful behavioral information not covered by annotations. No contradiction with annotations exists.

Conciseness: 4/5

The description is appropriately sized and front-loaded, starting with the core functionality and listing key filters. Every sentence adds value, such as the language support and usage guidelines, with no wasted words. It could be slightly more structured but remains efficient.

Completeness: 4/5

Given the tool's complexity (21 parameters) and rich schema with 100% coverage, the description provides sufficient context. It explains the tool's purpose, usage, and key features like language support. However, without an output schema, it doesn't detail return values, leaving a minor gap in completeness.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents all 21 parameters. The description lists optional filters (e.g., category, price, date, sorting) but doesn't add syntax or format details beyond what the schema provides. It offers a high-level overview, but the schema carries the heavy lifting, warranting a baseline score of 3.

Purpose: 5/5

The description clearly states the tool searches for 'shows, theatre, events, tours and experiences in a specific city on tickadoo,' providing a specific verb ('search') and resource ('experiences'). It distinguishes from siblings by focusing on comprehensive search with multiple filters, unlike more specific tools like 'get_experience_details' or 'whats_on_tonight'.

Usage Guidelines: 5/5

The description explicitly states when to use this tool: 'Use when a user asks what to do in a city, wants event/show recommendations, or is looking for tickets.' This provides clear context and distinguishes it from alternatives like 'get_city_guide' or 'search_by_mood' by emphasizing its role in finding bookable experiences.

whats_on_tonight (A)
Read-only

Find bookable experiences happening later today in a city. Automatically filters to today, removes already-started events, adds "starts in" countdowns, surfaces urgency signals from inventory data, and sorts by soonest start time with evening/show/nightlife boosts. Supports 40+ languages — pass a language code (e.g. 'de', 'fr', 'es', 'ja') to get localised booking URLs. Use for concierge-style requests like "what's on tonight in London?" or "any shows tonight in Paris?".

Parameters (JSON Schema)
Name | Required | Description | Default
city | Yes | City name or slug (e.g. 'london', 'new-york', 'paris', 'tokyo', 'dubai') | -
format | No | Response format: text (default) or json | text
category | No | Optional category filter. Valid values: theatre, musicals, tours, food, family, nightlife, sightseeing, concerts, comedy, shows, outdoor, workshops, cruises, sports. | -
language | No | Supported language code for localised booking URLs (e.g. 'en', 'de', 'fr', 'es', 'ja', 'pt-br') | en
max_results | No | Maximum number of experiences to return (default 10, max 25) | -
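The "starts in" countdown strings this tool (and get_last_minute) describes amount to a few lines of date arithmetic. The sketch below is an assumption about the formatting style, based only on the example string "starts in 47 minutes"; the function name and thresholds are hypothetical.

```python
from datetime import datetime, timedelta

def countdown_text(start, now):
    # Minutes until the event starts; zero or negative means it already began.
    mins = int((start - now).total_seconds() // 60)
    if mins <= 0:
        return "already started"
    if mins < 60:
        return f"starts in {mins} minutes"
    hours, rem = divmod(mins, 60)
    return f"starts in {hours} h {rem} min"
```

Under these assumptions, an event 47 minutes away yields "starts in 47 minutes", matching the example in the tool description.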
Behavior: 4/5

The description adds valuable behavioral context beyond annotations: it explains automatic filtering (today-only, removes already-started events), sorting logic (by soonest start time with boosts), urgency signals from inventory data, and multi-language support. Annotations cover read-only and non-destructive aspects, but the description enriches understanding of the tool's operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: the first sentence states the core purpose, followed by key features in a logical flow (filtering, sorting, language support), and ends with usage examples. Every sentence adds value without redundancy, making it easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, no output schema), the description is largely complete: it covers purpose, usage, behavioral traits, and language support. However, it doesn't detail the output format or error handling, which could be helpful since there's no output schema. Annotations provide safety context, but some operational gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description adds little parameter semantics beyond that: it mentions language support for localised URLs but does not explain the other parameters beyond what the schema provides. This meets the baseline expected when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find bookable experiences happening later today in a city.' It specifies the verb ('find'), resource ('bookable experiences'), and temporal scope ('later today'), and distinguishes it from siblings like 'get_whats_on_this_week' by focusing on today's events with urgency features.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance with examples: 'Use for concierge-style requests like "what's on tonight in London?" or "any shows tonight in Paris?"' It also implicitly distinguishes from siblings by focusing on tonight's events with urgency signals, unlike broader tools like 'search_experiences' or 'get_whats_on_this_week'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
