MolTravel
Server Details
Aggregated travel MCP — flights, tours, activities, price checks, visas, and more.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: navifare/moltravel-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 56 of 56 tools scored. Lowest: 2.2/5.
Tools are grouped by providers (e.g., skiplagged, tourradar) with clear domains, reducing confusion within groups. However, there is overlap across providers (e.g., multiple flight search tools like kiwi_search-flight and skiplagged_sk_flights_search) and some tools like travel_agent and data_status have broad or ambiguous purposes that could lead to misselection.
Naming is highly inconsistent, mixing snake_case (airlines_lookup), kebab-case (tourradar_b2b-cities-search), and provider prefixes with varying delimiters. There is no uniform verb_noun pattern; some tools use verbs like 'lookup' or 'search', while others are descriptive phrases like 'format_flight_pricecheck_request'. This lack of convention makes the set harder to navigate.
With 56 tools, the count is excessive for a single server, indicating poor scoping. While travel is a broad domain, the server aggregates multiple third-party APIs (e.g., Kiwi, Skiplagged, Tourradar) without a cohesive focus, leading to bloat. A more modular approach with separate servers would be more appropriate.
The tool set covers a wide range of travel-related functions, including flights, accommodations, tours, visas, and ferries, with good depth in areas like tour searches and flight comparisons. Minor gaps exist, such as limited car rental or travel insurance tools, but agents can work around these given the comprehensive coverage of core travel planning needs.
Available Tools
56 tools

airlines_lookup (Grade: B)
Look up an airline by IATA (2-char) or ICAO (3-char) code.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but only specifies input format expectations (2-char vs 3-char). It fails to mention error handling (e.g., invalid codes), data freshness, or what constitutes a successful match.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly concise at 9 words. Every element earns its place: the verb 'Look up', the resource 'airline', and the critical format specifications '(2-char)' and '(3-char)'. No waste, front-loaded action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter lookup tool with an output schema (which handles return value documentation). However, with zero schema coverage and no annotations, the description should ideally address validation behavior or error states for invalid codes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema description coverage. The description crucially clarifies that the 'code' parameter accepts either IATA (2-character) or ICAO (3-character) formats, providing essential semantic meaning entirely absent from the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Look up') and resource ('airline') with specific identifier types (IATA/ICAO codes). It implicitly distinguishes from sibling 'airlines_search' by specifying code-based lookup versus likely text-based search, though it doesn't explicitly name the alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (when you have an IATA or ICAO code) but provides no explicit when-to-use guidance, no exclusions, and doesn't mention the sibling 'airlines_search' as an alternative for name-based searches.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
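Since the tool's error behavior for malformed codes is undocumented, a client may want to pre-classify the input. The sketch below is a hypothetical client-side helper, not part of the server; the 2-char/3-char contract is the only constraint the description states:

```python
def classify_airline_code(code: str) -> str:
    """Classify an airline designator by length, per the tool's stated
    contract: 2 characters -> IATA, 3 characters -> ICAO.

    Client-side sketch only; the server's actual validation and error
    behavior for invalid codes is undocumented."""
    code = code.strip().upper()
    if len(code) == 2:
        return "IATA"
    if len(code) == 3:
        return "ICAO"
    raise ValueError(f"expected a 2-char IATA or 3-char ICAO code, got {code!r}")

# "BA" is British Airways' IATA code; "BAW" is its ICAO code.
print(classify_airline_code("BA"))   # IATA
print(classify_airline_code("BAW"))  # ICAO
```

Rejecting anything that is neither two nor three characters before calling the tool avoids depending on the server's unspecified failure mode.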
airlines_search (Grade: B)
Search airlines by name. Returns up to 20 results.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| country | No | | |
| active_only | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the 20-result limit ('Returns up to 20 results'), which is valuable behavioral context not found in structured fields. However, it lacks details on matching behavior (exact vs. fuzzy), rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is optimally concise with two sentences: the first establishes the operation, the second states the result limit. No redundancy or filler text is present; every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the presence of an output schema (reducing the need for return value documentation), the description is incomplete due to zero schema coverage on three input parameters. It fails to document the majority of the tool's inputs, creating significant gaps for an agent attempting to construct valid calls.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. While 'by name' implies the 'query' parameter's purpose, the description completely omits explanation of 'country' (filtering scope) and 'active_only' (status filtering), leaving two of three parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Search), resource (airlines), and method (by name). However, it does not explicitly differentiate from the sibling tool 'airlines_lookup', leaving ambiguity about when to use search versus lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'airlines_lookup', nor are prerequisites or exclusion criteria mentioned. The agent must infer usage context solely from the tool name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
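Because 'country' and 'active_only' are entirely undocumented, a cautious client would omit optional filters it cannot specify rather than guess their formats. A minimal sketch of that approach (the argument names come from the schema above; the expected value formats are assumptions):

```python
import json

def build_airlines_search_args(query, country=None, active_only=None):
    """Assemble the arguments object for an airlines_search call.

    The expected formats of 'country' (ISO code vs. full name) and
    'active_only' are undocumented, so unset optionals are omitted
    from the payload rather than guessed."""
    args = {"query": query}
    if country is not None:
        args["country"] = country
    if active_only is not None:
        args["active_only"] = active_only
    return args

print(json.dumps(build_airlines_search_args("Lufthansa", active_only=True)))
```

Omitting unknown optionals keeps the call within the documented surface and defers filtering semantics to the server's defaults.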
airports_lookup (Grade: A)
Look up an airport by IATA (3-char) or ICAO (4-char) code. Returns full details including runways.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses return value characteristics ('full details including runways'), indicating rich data retrieval. However, it omits mention of error behavior for invalid codes, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence covers purpose and input semantics; second covers output characteristics. Information density is optimal with no redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a single-parameter lookup tool where an output schema exists. The description adds value by highlighting runway data in the returns, suggesting comprehensive airport records. Could be improved by noting error states for invalid codes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage for the 'code' parameter. The description effectively compensates by specifying expected formats (IATA 3-char, ICAO 4-char) and validation types, though it doesn't explicitly map these constraints to the parameter name.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool looks up airports by specific code types (IATA/ICAO). Distinguishes from airline-related siblings implicitly by specifying 'airport' and from 'airports_search' via the code-based lookup mechanism, though it doesn't explicitly name sibling alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides excellent input format guidance (3-char IATA vs 4-char ICAO) within the description. However, lacks explicit guidance on when to use this versus 'airports_search' or 'airports_near' for different lookup scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
airports_near (Grade: C)
Find airports near a geographic point. Returns results sorted by distance.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| latitude | Yes | | |
| longitude | Yes | | |
| radius_km | No | | |
| include_small | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully notes that results are 'sorted by distance,' which is valuable behavioral context, but fails to mention other critical traits like whether small airports are excluded by default, rate limits, or the read-only nature of the operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two efficient sentences and zero redundancy. However, given the lack of schema descriptions and annotations, this brevity results in underspecification rather than clarity, preventing a perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with five parameters and zero schema descriptions, the description is insufficiently complete. It fails to explain the boolean 'include_small' parameter, default values for radius_km and limit, or the output format (despite an output schema existing that could have been referenced descriptively).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description inadequately compensates for undocumented parameters. While 'geographic point' implies the purpose of latitude and longitude, and 'distance' hints at radius_km, it completely omits semantics for 'include_small' (what defines 'small'?) and 'limit' (max results?), leaving three of five parameters effectively undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Find[s] airports near a geographic point' with a specific verb and resource, and the phrase 'near a geographic point' effectively distinguishes it from sibling tools like airports_lookup and airports_search. However, it misses a perfect score by not explicitly contrasting with these siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives like airports_search or airports_lookup. While 'geographic point' implies coordinate-based usage, there is no 'when to use/when not to use' instruction or mention of prerequisites like requiring latitude/longitude values.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
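With no ranges or defaults documented for any of the five parameters, a client can at least enforce standard WGS84 coordinate bounds before calling. The helper below is a hypothetical sketch; the semantics of 'radius_km', 'limit', and 'include_small' are assumptions left to the server:

```python
def build_airports_near_args(latitude, longitude, radius_km=None,
                             limit=None, include_small=None):
    """Client-side validation sketch for airports_near.

    The schema documents no ranges or defaults; the coordinate bounds
    below are the standard WGS84 ranges, and the three optional
    parameters are passed through only when explicitly set."""
    if not -90.0 <= latitude <= 90.0:
        raise ValueError("latitude must be in [-90, 90]")
    if not -180.0 <= longitude <= 180.0:
        raise ValueError("longitude must be in [-180, 180]")
    args = {"latitude": latitude, "longitude": longitude}
    for key, value in (("radius_km", radius_km), ("limit", limit),
                       ("include_small", include_small)):
        if value is not None:
            args[key] = value
    return args

# Airports near central London, within an assumed 50 km radius.
print(build_airports_near_args(51.5074, -0.1278, radius_km=50))
```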
airports_search (Grade: B)
Search airports by name or city. Returns up to 20 results sorted by size.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| country | No | | |
| type_filter | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully discloses the result limit ('up to 20 results') and sorting behavior ('sorted by size'), but lacks information on error handling, empty result behavior, or rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence establishes the core purpose and the second provides critical behavioral constraints (pagination/sorting). Perfectly front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a search tool with an output schema (which handles return value documentation), but incomplete regarding input parameters. Given the zero schema coverage, the description should have mentioned the filtering capabilities of 'country' and 'type_filter' parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must compensate for all three parameters. It only implicitly covers 'query' via 'name or city', leaving 'country' and 'type_filter' completely undocumented with no hints about expected formats or valid values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb (search), resource (airports), and method (by name or city). The 'name or city' qualifier effectively distinguishes this from sibling 'airports_lookup' which likely expects exact codes/identifiers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus 'airports_lookup' or 'airports_near'. While the distinction is implied by the search method described, the agent must infer when to prefer search over lookup.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
data_status (Grade: A)
Check which static datasets (airports, airlines, visas) are loaded and their row counts.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses behavioral traits by specifying the tool returns 'row counts' and checks 'loaded' status. However, it lacks safety profile disclosure (read-only nature, performance characteristics, or auth requirements) that would be necessary given the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action verb 'Check'. Zero redundant words. Specific parenthetical enumeration of datasets adds precision without verbosity. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter diagnostic tool with an output schema present, the description is complete. It specifies exactly which datasets are inspected (aligning with sibling tool domains) and what metrics are returned (row counts), making it sufficient for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, which per the rubric yields a baseline score of 4. The description appropriately makes no mention of parameters since none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent clarity. Uses specific verb 'Check' with explicit resource 'static datasets' and enumerates the exact datasets monitored (airports, airlines, visas). Clearly distinguishes this meta/diagnostic tool from operational siblings like airlines_lookup or visa_check.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance provided. However, the diagnostic nature (checking loaded status and row counts) implies usage for data health verification before querying, but lacks explicit guidance like 'use before querying datasets to verify availability'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
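For a parameterless tool like this, the call reduces to a bare MCP `tools/call` envelope. A sketch of that request shape (the JSON-RPC envelope follows the MCP specification; the `id` and transport details depend on the client):

```python
import json

# Hypothetical MCP tools/call request for the parameterless data_status
# tool: the arguments object is simply empty.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "data_status", "arguments": {}},
}
print(json.dumps(request, indent=2))
```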
fcdo_list_countries (Grade: B)
[fcdo] List all countries with UK FCDO travel advice.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'List' implies a read-only operation, the description omits safety details (e.g., idempotency), rate limits, or data freshness. The scope 'all countries' is stated, but pagination behavior or response size expectations are not addressed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single, efficient sentence with no redundant information. The '[fcdo]' prefix serves as a namespace identifier without cluttering the semantic content, and every word directly contributes to understanding the tool's scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists (covering return values) and the input schema is empty, the description adequately covers the tool's purpose for its low complexity. However, lacking annotations, it could have benefited from a brief note confirming this is a safe, non-destructive read operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4. The description appropriately requires no parameter explanation, as the tool is a simple, unfiltered list operation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('countries with UK FCDO travel advice'), making the tool's function immediately apparent. However, it does not explicitly differentiate from its sibling 'fcdo_travel_advice' (which presumably retrieves detailed advice for specific countries), leaving the relationship between the tools implied rather than stated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention the sibling 'fcdo_travel_advice' as a follow-up for retrieving specific country details. There are no stated prerequisites or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fcdo_travel_advice (Grade: B)
[fcdo] Get UK FCDO travel advice for a specific country. Includes safety, entry requirements, health, and warnings.
| Name | Required | Description | Default |
|---|---|---|---|
| country | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It successfully discloses content categories covered (safety, entry requirements, health, warnings), giving agents insight into return data structure. However, it lacks operational details like read-only nature, update frequency of FCDO data, or whether the advice is official/legal guidance.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. Front-loaded with the core action '[fcdo] Get UK FCDO travel advice', followed immediately by value-add content categories. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a single-parameter lookup tool with an output schema (which handles return value documentation). Covers the domain (FCDO), scope (country-specific), and content types. Only gap is parameter format specification, which is noted in parameter semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% with no parameter descriptions. The description mentions 'for a specific country' but fails to specify expected format (ISO 3166-1 alpha-2 code, full country name, etc.) or case sensitivity. With zero schema coverage, the description must compensate for this gap but doesn't.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb-resource pair ('Get UK FCDO travel advice') and distinguishes from sibling fcdo_list_countries by targeting a specific country. However, it doesn't explicitly clarify this advice is specifically intended for UK citizens/travelers, which would strengthen differentiation from general country info tools like restcountries_country_info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance or alternative suggestions. The description mentions 'entry requirements' which overlaps functionally with visa_check and visa_summary siblings, but provides no guidance on choosing between them. No mention of prerequisite steps (e.g., using fcdo_list_countries to validate country availability).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
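Given that the 'country' parameter's format is unspecified, the safest client strategy is to pass the value through untransformed and fall back to fcdo_list_countries for discovery when a call fails. A hedged sketch of that approach:

```python
def build_fcdo_advice_args(country: str) -> dict:
    """Assemble arguments for an fcdo_travel_advice call.

    The expected 'country' format is undocumented (ISO code vs. full
    name), so this sketch only trims whitespace and passes the value
    through; on failure, fcdo_list_countries can be called first to
    discover the accepted country identifiers."""
    value = country.strip()
    if not value:
        raise ValueError("country must be non-empty")
    return {"country": value}

print(build_fcdo_advice_args("  France "))
```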
ferryhopper_get_direct_connections_for_ports (Grade: C)
[ferryhopper] Get a list of all the direct connections between ports
| Name | Required | Description | Default |
|---|---|---|---|
| portLocation | Yes | Location name or search term used to find matching ports (not limited to exact port codes). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure but offers almost none. It does not clarify what constitutes a 'direct connection' (routes? lines? schedules?), whether the operation is read-only, or what the return structure contains.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence. The '[ferryhopper]' prefix adds noise but minimal overhead. It is appropriately front-loaded with the action verb.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool with complete schema coverage and no output schema, the description is minimally adequate. However, it lacks domain context explaining that 'connections' represent reachable destination ports from the queried location.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for its single parameter ('portLocation'), establishing a baseline of 3. The description adds no parameter-specific context, but given the schema completeness, no compensation is required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'direct connections between ports' with a specific verb ('Get') and resource. However, it fails to distinguish from sibling tool 'ferryhopper_search_trips'—leaving ambiguity about whether this returns route metadata versus searchable trip options.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus 'ferryhopper_search_trips' or 'ferryhopper_get_ports'. There is no mention of prerequisites (e.g., whether the port must exist) or when to prefer direct connections over trip searches.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ferryhopper_get_ports (Grade B)
[ferryhopper] Get a list of global ports and their details
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full disclosure burden. It mentions 'global' scope (useful) but fails to clarify pagination behavior, rate limits, what specific 'details' are returned, or whether the data is cached. For a data-retrieval tool with no output schema, this lacks necessary behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with the namespace prefix followed by the action. It is appropriately brief for a simple catalog tool, though the extreme brevity leaves room for additional context about output format without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description should compensate by explaining what port 'details' include (IDs, names, locations) and noting this is a prerequisite for other ferryhopper tools. It meets minimum viability but leaves significant gaps for an integration tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, the baseline score applies. The description appropriately indicates this is a parameterless list operation requiring no filters, which aligns with the empty input schema. No additional parameter guidance is needed or possible.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('list of global ports and their details'), distinguishing it from sibling ferryhopper tools like 'ferryhopper_search_trips' and 'ferryhopper_get_direct_connections_for_ports' by being the catalog/listing operation. The '[ferryhopper]' prefix helps namespace it among airline/airport siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies this is a lookup tool (necessary for obtaining port identifiers to use with 'ferryhopper_search_trips'), it lacks explicit guidance on when to call this versus the connection-specific tool or how it relates to the trip search workflow. Usage must be inferred from sibling tool names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ferryhopper_search_trips (Grade C)
[ferryhopper] Get a list available ferry trips between two ports on a specific date
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Departure date in ISO format YYYY-MM-DD (e.g. 2026-03-15). | |
| arrivalLocation | Yes | Arrival location as a human-readable name or search term (e.g. city, port name), not a port code. | |
| departureLocation | Yes | Departure location as a human-readable name or search term (e.g. city, port name), not a port code. | |
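Before invoking the tool, an agent could sanity-check the three required fields; a minimal sketch, assuming the tool rejects non-ISO dates (the location values here are made up):

```python
from datetime import date

def is_valid_trip_request(args: dict) -> bool:
    """Check the three required ferryhopper_search_trips arguments."""
    required = {"departureLocation", "arrivalLocation", "date"}
    if not required <= args.keys():
        return False
    try:
        date.fromisoformat(args["date"])  # must be YYYY-MM-DD
    except ValueError:
        return False
    return True

# Human-readable names, not port codes, per the schema:
request = {
    "departureLocation": "Piraeus",
    "arrivalLocation": "Santorini",
    "date": "2026-03-15",
}
```

A request with a dd/mm/yyyy date such as `"15/03/2026"` would fail this check, which matters because the kiwi tools on this same server use exactly that other format.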
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full disclosure burden. While 'Get a list' implies a read-only search operation, the description fails to specify what data is returned (prices, schedules, operators, availability), whether pagination is supported, or any rate limiting concerns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, appropriately front-loaded with the action verb 'Get'. Minor grammatical error ('list available' instead of 'list of available') prevents a perfect score, but overall efficient with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three simple parameters with complete schema coverage and no nested objects, the input requirements are adequately addressed. However, lacking both annotations and an output schema, the description should ideally characterize the return structure (e.g., trip options with times/prices) to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed descriptions for each parameter (ISO date format, human-readable location names). The description mentions 'between two ports' and 'specific date' which reinforces the schema semantics but adds no additional guidance on parameter interactions or validation rules beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves ferry trips (specific resource) between two ports on a specific date, distinguishing it from flight search siblings (kiwi_search-flight, skiplagged_sk_flights_search) and other ferryhopper tools like get_ports or get_direct_connections_for_ports. The '[ferryhopper]' prefix aids identification but slightly dilutes the sentence structure.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus alternatives. It does not clarify when to use this date-specific search versus 'ferryhopper_get_direct_connections_for_ports' (which finds general routes without dates) or whether users should validate ports first using 'ferryhopper_get_ports'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
kiwi_feedback-to-devs (Grade C)
[kiwi] Send feedback to the dev of the Kiwi MCP server.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The content of the feedback. Don't hesitate to include any text relevant to the issue (logs, error message) if you are having one. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to explain what happens to the feedback after submission (e.g., creates a ticket, sends email, stored in database), whether the operation is idempotent, or if there are rate limits. The description only states the intent, not the mechanism or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief at one sentence. However, the '[kiwi]' prefix at the beginning appears to be categorization metadata leaking into the description field rather than essential descriptive content, slightly reducing the structural quality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single string parameter, no output schema), the description is minimally adequate. However, for a feedback submission tool with no annotations, it should ideally disclose what confirmation or response the user can expect, or whether the feedback is anonymous.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input parameter 'text' is fully documented in the schema itself, including guidance on including logs and error messages. The tool description does not mention parameters, which is acceptable given the schema completeness. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (send feedback) and target (dev of the Kiwi MCP server), effectively distinguishing it from the numerous travel-related sibling tools (airlines, hotels, tours, etc.). However, the '[kiwi]' prefix appears to be metadata leakage rather than descriptive content, and it doesn't specify what types of feedback are appropriate (bugs vs. feature requests).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, or under what circumstances (e.g., encountering bugs, errors, or feature requests). While there are no direct sibling alternatives for feedback submission, the description fails to indicate appropriate triggers for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
kiwi_search-flight (Grade A)
[kiwi]
Search for a flight
Description
Uses the Kiwi API to search for available flights between two locations on a specific date.
How it works
The tool will:
Search for matching locations to resolve airport codes
Find available flights for the specified route and date range
Method
Call this tool whenever a user wants to search for flights, regardless of whether they provided exact airport codes or just city names.
You should display the returned results in a markdown table format: Group the results by price (those who are the cheapest), duration (those who are the shortest, i.e. have the smallest 'totalDurationInSeconds') and the rest (those that could still be interesting).
Always display for each flight in order:
In the 1st column: The departure and arrival airports, including layovers (e.g. "Paris CDG → Barcelona BCN → Lisbon LIS")
In the 2nd column: The departure and arrival dates & times in the local timezones, and duration of the flight (e.g. "03/08 06:05 → 09:30 (3h 25m)", use 'durationInSeconds' to display the duration and not 'totalDurationInSeconds')
In the 3rd column: The cabin class (e.g. "Economy")
(In case of return flight only) In the 4th column: The return flight departure and arrival airports, including layovers (e.g. "Paris CDG → Barcelona BCN → Lisbon LIS")
(In case of return flight only) In the 5th column: The return flight departure and arrival dates & times in the local timezones, and duration of the flight (e.g. "03/08 06:05 → 09:30 (3h 25m)", use 'return.durationInSeconds' to display the duration)
(In case of return flight only) In the 6th column: The return flight cabin class (e.g. "Economy")
In the previous-to-last column: The total price of the flight
In the last column: The deep link to book the flight
Finally, provide a summary highlighting the best prices, the shortest flights and a recommendation. End wishing a nice trip to the user with a short fun fact about the destination!
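The display rules above hinge on rendering the two duration fields correctly. A small helper of the kind they imply (hypothetical, not shipped with the tool) might be:

```python
def format_duration(seconds: int) -> str:
    """Render a field like 'durationInSeconds' as e.g. '3h 25m'."""
    hours, rem = divmod(seconds, 3600)
    return f"{hours}h {rem // 60}m"
```

For a flight with `durationInSeconds` of 12300 this yields "3h 25m", matching the example in the description.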
| Name | Required | Description | Default |
|---|---|---|---|
| curr | No | Currency for response (examples: EUR, USD, GBP, JPY, CAD, AUD, NZD, CHF etc.) | EUR |
| sort | No | Sort results by: price, duration, quality or date (default: date) | date |
| flyTo | Yes | Location to fly to: It could be a city or an airport name or code | |
| locale | No | Language of city names and kiwi.com website links (examples: en, uk, de, fr, es, it, ru etc.) | en |
| flyFrom | Yes | Location to fly from: It could be a city or an airport name or code | |
| cabinClass | No | Cabin class: M (economy), W (economy premium), C (business), F (first class) | |
| passengers | No | Passengers details. The total number of passengers must be between 1 and 9. There must be at least one adult. There must be at least one adult per infant. | |
| returnDate | No | Return date in dd/mm/yyyy format | |
| departureDate | Yes | Departure date in dd/mm/yyyy format | |
| returnDateFlexRange | No | Return date flexibility range in days (0 to 3 days before/after the selected return date) | |
| departureDateFlexRange | No | Departure date flexibility range in days (0 to 3 days before/after the selected departure date) |
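Note that this tool expects dd/mm/yyyy dates, unlike the ISO dates used by the ferryhopper and peek tools on the same server. A hypothetical call payload (all values invented) with a guard against the easy ISO-date mistake:

```python
import re

flight_args = {
    "flyFrom": "Paris",             # city, airport name, or code
    "flyTo": "Lisbon",
    "departureDate": "03/08/2026",  # dd/mm/yyyy, per the schema
    "cabinClass": "M",              # M=economy, W=premium, C=business, F=first
    "curr": "EUR",                  # defaults to EUR if omitted
    "sort": "price",
}

# Reject ISO-formatted dates, which this tool's schema does not accept:
assert re.fullmatch(r"\d{2}/\d{2}/\d{4}", flight_args["departureDate"])
```

The single-letter cabin codes and the date format are exactly the kind of parameter semantics the description itself never restates, so an agent must rely entirely on the schema here.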
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool performs location resolution ('Search for matching locations to resolve airport codes') and searches date ranges. However, it omits safety profile (read-only vs destructive), rate limits, error behaviors, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Severely bloated with extensive output formatting instructions (markdown table specifications, column ordering, fun facts) that belong in system prompts rather than tool descriptions. While front-loaded with purpose, the majority of text distracts from tool selection and invocation logic.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description comprehensively documents the return data structure through detailed display instructions (mentioning fields like durationInSeconds, totalDurationInSeconds, deep links). Missing only error handling and edge case behaviors for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions accepting city names or airport codes, but this merely restates the schema descriptions for flyFrom/flyTo ('It could be a city or an airport name or code'), adding no new semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool 'searches for available flights between two locations' using the Kiwi API. However, it does not distinguish from sibling flight search tools like 'skiplagged_sk_flights_search', leaving ambiguity about which flight provider to use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance: 'Call this tool whenever a user wants to search for flights, regardless of whether they provided exact airport codes or just city names.' Lacks exclusions or explicit comparison to alternative flight search siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
peek_experience_availability (Grade B)
[peek] Get availability information for a specific experience including dates, times, and pricing
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The ID of the experience | |
| endDate | Yes | End date inclusive in YYYY-MM-DD format (e.g., '2025-06-20' would return things taking place ON or BEFORE the 20th) | |
| quantity | Yes | Number of travelers | |
| startDate | Yes | Start date inclusive YYYY-MM-DD format (e.g., '2025-06-19') |
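Both dates are inclusive, so a one-night window spans two calendar days. A hypothetical request (the experience ID is invented; in practice it would come from a search tool such as peek_search_experiences):

```python
from datetime import date

# Hypothetical arguments for peek_experience_availability.
availability_args = {
    "id": "exp_123",           # assumed to come from a prior search
    "startDate": "2025-06-19",
    "endDate": "2025-06-20",   # inclusive: ON or BEFORE the 20th
    "quantity": 2,
}

# Because both bounds are inclusive, the window length is the difference
# in days plus one:
window_days = (date.fromisoformat(availability_args["endDate"])
               - date.fromisoformat(availability_args["startDate"])).days + 1
```

The inclusive semantics are documented only in the parameter descriptions, which is why the schema's 100% coverage matters so much for this tool.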
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it lists what data is returned (dates, times, pricing), it fails to indicate if this is a safe read-only operation, whether it requires specific permissions, rate limits, or error handling behavior (e.g., what happens if the ID is invalid).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no redundant words. The '[peek]' prefix appears to be a provider tag, and the remainder immediately conveys the action and return value without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description partially compensates by listing the types of data returned (dates, times, pricing), but does not describe the response structure or format. With complete input schema coverage but no annotations, the description meets minimum viability but lacks operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all four parameters (id, quantity, startDate, endDate). The description implies the 'id' and date parameters by referencing 'specific experience' and 'dates', but adds no syntax or semantic details beyond the schema, warranting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] availability information for a specific experience' with specific outputs (dates, times, pricing). However, it does not explicitly differentiate from sibling 'peek_experience_details', which likely retrieves general metadata rather than availability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'peek_experience_details' or 'peek_search_experiences'. It does not mention prerequisites (e.g., needing an experience ID from search results) or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
peek_experience_details (Grade C)
[peek] Get detailed information about a specific experience by ID
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The ID of the experience to retrieve | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure, but offers almost none. It does not confirm the operation is read-only (though 'Get' implies it), describe rate limits, caching behavior, or—critically—what 'detailed information' includes (description, photos, pricing, location) given the absence of an output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief and front-loaded with the verb. The '[peek]' prefix appears to be metadata or branding leakage that adds no semantic value, but the single sentence efficiently conveys the core operation without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool, the description is minimally viable. However, given the lack of an output schema and the domain-specific term 'experience' (referring to Peek's tour/activity inventory), the description should ideally sketch what details are returned to help the agent assess utility.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'by ID', reinforcing the parameter's purpose, but adds no further semantics regarding ID format, where to obtain valid IDs, or validation rules beyond what the schema already states.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] detailed information about a specific experience by ID', providing a specific verb, resource, and scope (by ID). It implicitly distinguishes from sibling 'peek_search_experiences' (search vs. lookup by ID), though it could more explicitly differentiate from 'peek_experience_availability'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'peek_search_experiences' (which should be used first to obtain an ID) or 'peek_experience_availability' (for scheduling data). There are no prerequisites or workflow hints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
peek_list_tags (Grade B)
[peek] List all category tags
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, yet the description fails to disclose behavioral traits such as whether results are cached, pagination limits, rate limiting, or what data structure is returned (strings vs objects). For a zero-parameter tool, this metadata is essential since the schema conveys no information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief for a simple list operation. However, the '[peek]' prefix is semantically wasteful—it appears to be metadata or branding that belongs in the title field (which is null) rather than the functional description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the tool is simple (no input params), the lack of output schema means the description should indicate the return format (e.g., array of tag names/IDs). Given no annotations and no output schema, the description is adequate but minimal—missing context on what constitutes a 'category tag' in this domain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters. According to evaluation rules, zero parameters establishes a baseline score of 4. The description doesn't need to compensate for missing schema documentation since there are no parameters to document.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb ('List') and resource ('category tags'), and the tool name 'peek_list_tags' combined with sibling tools (peek_search_experiences, etc.) clarifies this relates to Peek activity categories. However, it doesn't explicitly state these are experience/activity category tags versus other taxonomies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus the other peek_* tools like peek_search_experiences. Doesn't indicate if this is a prerequisite step for category discovery before searching, or merely a reference utility.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
peek_render_activity_tiles (Grade A)
[peek] Render activity tiles for a list of activity IDs, returning an embeddable widget URI
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | ID or comma separate list of activity IDs to render as tiles | |
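Since the single 'id' parameter accepts either one ID or a comma-separated list, a caller holding several IDs (hypothetical values below) flattens them into one string rather than passing an array:

```python
# Build the 'id' argument for peek_render_activity_tiles from a list of
# activity IDs (values invented for illustration).
activity_ids = ["act_101", "act_102", "act_103"]
tile_args = {"id": ",".join(activity_ids)}
```

Encoding a list as a delimited string is exactly the kind of syntax detail the schema documents but the description does not, as the parameter-clarity assessment below notes.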
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses the critical behavioral trait that it returns a URI/reference (embeddable widget) rather than raw activity data or rendered HTML. However, it omits cache behavior, auth requirements, or error handling for invalid IDs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the action verb. Every clause adds value. Minor deduction for the '[peek]' prefix which serves as branding/context but slightly disrupts the flow without adding functional meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description appropriately compensates by specifying the return value type ('embeddable widget URI'). For a single-parameter tool with high schema coverage, the description provides sufficient context for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is met. The description reinforces that the IDs represent 'activity tiles' but does not add syntax details (like comma-separation) beyond what the schema already provides. It confirms the logical input is a list.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Render') with clear resource ('activity tiles') and explicitly distinguishes from sibling peek tools by stating the unique output format ('embeddable widget URI'). The '[peek]' prefix also signals the platform context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies this is for embedding/visual display purposes via 'embeddable widget URI', it lacks explicit when-to-use guidance contrasting it with siblings like peek_experience_details or peek_search_experiences. No alternatives or exclusions are named.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
peek_search_experiences (Grade B)
[peek] Search for travel experiences with comprehensive filtering options. Returns available categories, tags, and regions with IDs for further filtering.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | When the user wants something w/ a specific keyword (bike, beer, art, etc) limit to experiences whose title contain a keyword. Never include location information. | |
| tagId | No | When you have determined the user is interest in a specific vibe of activity (family friendly, romantic, etc) limit to only experiences with a specific tag (single tag ID) | |
| latLng | No | When the user wants something NEAR a specific place, but not necessarily IN a specific place, limit to only those near a given lat_lng. ex: "37.7799,-122.2822". Don't use this for regions, instead use the search_regions and provide a region id. this is a good fallback if a specific region is lacking inventory. | |
| endDate | No | Return experiences that are available on or before this date in YYYY-MM-DD format (e.g., '2025-06-20' would return things taking place ON or BEFORE the 20th) | |
| regionId | No | When you have determined the user wants something in a specific region (found w/ search_regions) limit to only a specific region ID | |
| startDate | No | Return experiences that are available on or after this date. YYYY-MM-DD format (e.g., '2025-06-19') | |
| categoryId | No | Limit to only a specific activity category | |
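The schema's date and coordinate formats are easy to get wrong on a first call. As a sketch of an agent-side wrapper that validates them before invoking the tool (the `build_experience_search` helper and the JSON-RPC `tools/call` framing are illustrative assumptions, not part of the server's documented contract):

```python
import json
import re

def build_experience_search(query=None, region_id=None, lat_lng=None,
                            start_date=None, end_date=None,
                            tag_id=None, category_id=None):
    """Assemble arguments for a peek_search_experiences call.

    Enforces the formats stated in the schema: dates are YYYY-MM-DD
    and latLng is "lat,lng". (The schema's semantic rules, such as
    keeping location info out of `query`, can't be checked here.)
    """
    for d in (start_date, end_date):
        if d is not None and not re.fullmatch(r"\d{4}-\d{2}-\d{2}", d):
            raise ValueError(f"dates must be YYYY-MM-DD, got {d!r}")
    if lat_lng is not None and not re.fullmatch(
            r"-?\d+(\.\d+)?,-?\d+(\.\d+)?", lat_lng):
        raise ValueError('latLng must look like "37.7799,-122.2822"')
    args = {
        "query": query, "regionId": region_id, "latLng": lat_lng,
        "startDate": start_date, "endDate": end_date,
        "tagId": tag_id, "categoryId": category_id,
    }
    # Every parameter is optional; omit the ones not supplied.
    args = {k: v for k, v in args.items() if v is not None}
    return {
        "jsonrpc": "2.0", "id": 1, "method": "tools/call",
        "params": {"name": "peek_search_experiences", "arguments": args},
    }

req = build_experience_search(query="bike", start_date="2025-06-19")
print(json.dumps(req["params"]["arguments"]))
```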
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It partially compensates by disclosing what the return payload contains ('categories, tags, and regions with IDs'), which is valuable given the lack of an output schema. However, it fails to disclose safety characteristics (read-only vs. destructive), rate limits, or pagination behavior expected for a search tool with 7 optional parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief at two sentences. The critical information (search action and return structure) is front-loaded, though the '[peek]' vendor prefix in the first sentence is structural noise that doesn't aid the LLM. No redundant or wasted sentences beyond that prefix.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters with 100% schema coverage, the description doesn't need to elaborate on inputs. With no output schema, it adequately compensates by describing the return values (categories, tags, regions with IDs). However, for a complex search tool with no annotations, it should disclose the read-only/safe nature of the operation and any pagination logic, which are absent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description references 'comprehensive filtering options,' which collectively acknowledges the parameter purpose, but adds no specific semantic detail about individual parameters (e.g., date formats, coordinate precision) beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core action ('Search for travel experiences') and resource type. The '[peek]' prefix is noise, but the first sentence establishes the verb and object clearly. It partially distinguishes from siblings like peek_search_regions by noting it returns IDs for 'further filtering,' implying this is a discovery tool, though it could more explicitly contrast with peek_experience_details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'for further filtering' implies this is an initial discovery step before using other tools (like peek_experience_details), providing implicit workflow guidance. However, it lacks explicit 'when to use' guidance, prerequisites, or named alternatives. It does not clarify why one should use this versus peek_list_tags or peek_search_regions directly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
peek_search_regions (Grade: C)
[peek] Search for regions by name
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of regions to return (default: 50) | |
| query | Yes | Search query to match against region names | |
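The experience-search schema suggests that region IDs from this tool feed `regionId` in peek_search_experiences. A minimal sketch of that assumed two-step workflow, with `search_regions` stubbed since the tool's real return shape is undocumented:

```python
def search_regions(query, limit=50):
    # Stubbed response; the assumption is that the real tool
    # returns region records carrying IDs usable downstream.
    catalog = [{"id": "r-sf", "name": "San Francisco"},
               {"id": "r-sd", "name": "San Diego"}]
    return [r for r in catalog
            if query.lower() in r["name"].lower()][:limit]

def plan_search(user_place, keyword):
    """Resolve a place name to a region ID, then build the
    arguments an experience search would take."""
    regions = search_regions(user_place)
    if regions:
        return {"regionId": regions[0]["id"], "query": keyword}
    return {"query": keyword}  # fall back to keyword-only search

print(plan_search("san francisco", "art"))
```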
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It fails to indicate whether results are paginated, what fields are returned (IDs, coordinates, names?), rate limits, or authentication requirements. The '[peek]' tag hints at the service provider but provides no operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at six words with no filler. However, given the lack of annotations and output schema, this brevity may constitute under-specification rather than efficient communication. The '[peek]' prefix at the start is useful categorization.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a search tool with no output schema. The description should explain what data is returned (region IDs needed for downstream 'peek_experience_*' calls?) and how results relate to the broader Peek tool ecosystem. Without annotations or output schema, the description must compensate more.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents both parameters. The description adds minimal semantic value beyond the schema, merely restating that the tool searches by name. Baseline score applies since schema descriptions are complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (search) and resource (regions) with the '[peek]' prefix distinguishing it from non-Peek siblings. However, it lacks specificity about what constitutes a 'region' in the Peek context (e.g., geographic areas vs. administrative boundaries) which would help differentiate it from 'peek_search_experiences'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'peek_search_experiences' or 'airports_search'. Given the sibling ecosystem, explicit guidance on whether regions are prerequisites for experience searches would be valuable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
restcountries_country_info (Grade: C)
[restcountries] Look up country information from REST Countries.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| fields | No | | |
| search_by | No | | name |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
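With none of the three parameters documented, an agent must guess how they map onto the upstream REST Countries v3.1 API. One plausible mapping, offered strictly as an assumption (the `search_by` values and the endpoint table here are guesses, not documented behavior of this tool):

```python
# Assumed mapping from search_by values to REST Countries v3.1 endpoints.
SEARCH_ENDPOINTS = {
    "name": "name",          # /v3.1/name/{name}
    "code": "alpha",         # /v3.1/alpha/{iso-code}
    "currency": "currency",  # /v3.1/currency/{currency}
    "capital": "capital",    # /v3.1/capital/{capital}
    "region": "region",      # /v3.1/region/{region}
}

def restcountries_url(query, search_by="name", fields=None):
    """Build the upstream URL the tool presumably issues."""
    endpoint = SEARCH_ENDPOINTS.get(search_by)
    if endpoint is None:
        raise ValueError(f"unsupported search_by: {search_by!r}")
    url = f"https://restcountries.com/v3.1/{endpoint}/{query}"
    if fields:  # comma-separated field projection
        url += "?fields=" + ",".join(fields)
    return url

print(restcountries_url("france", fields=["name", "capital"]))
# → https://restcountries.com/v3.1/name/france?fields=name,capital
```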
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions the external API (REST Countries) but discloses no behavioral traits such as rate limits, caching behavior, what happens when a country is not found, or what specific data fields are returned (though an output schema exists).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is appropriately concise and front-loaded with the essential purpose. However, given the complete lack of parameter documentation in the schema, the description is inappropriately brief—conciseness becomes under-specification when critical usage details are omitted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for the tool's complexity. With 3 parameters, 0% schema coverage, no annotations, and a non-obvious query syntax (REST Countries API supports multiple search modes), the description should explain parameters and search behavior. It provides only the minimal purpose statement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description completely fails to compensate. It does not explain the required 'query' parameter (expected format: full name, ISO code?), the 'fields' parameter (filtering syntax?), or valid values for 'search_by' (only 'name' or others?). This is a critical gap for tool invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the basic action (look up) and resource (country information) and identifies the data source (REST Countries), which provides some specificity. However, it fails to distinguish what type of country information this provides (demographics, geography, flags) versus sibling tools like fcdo_travel_advice or tourradar country lists.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like fcdo_list_countries for travel advice or tourradar for tour destinations. No mention of prerequisites, search syntax requirements, or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
skiplagged_sk_cars_search (Grade: A)
[skiplagged] Search Skiplagged for rental cars between pickup and dropoff locations/dates, returning prices, companies, and car details.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of car offers to return | |
| offset | No | Number of car offers to skip for pagination (default: 0) | |
| pickupDate | Yes | Pickup date (YYYY-MM-DD) | |
| pickupTime | No | Pickup time (HH:mm or HH:mm:ss, default 10:00) | 10:00 |
| renderMode | No | Preferred render mode for tool output (ui or text). | |
| dropoffDate | Yes | Dropoff date (YYYY-MM-DD) | |
| dropoffTime | No | Dropoff time (HH:mm or HH:mm:ss, default 10:00) | 10:00 |
| pickupLocation | Yes | Pickup location: airport code (preferred) or "lat,lng" | |
| dropoffLocation | No | Dropoff location (defaults to pickup) | |
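The schema does document the key fallbacks (dropoffLocation defaults to pickup, both times default to 10:00), and a small normalizer makes those interactions explicit. The `normalize_car_search` helper is an illustrative sketch, not part of the tool:

```python
def normalize_car_search(pickup_location, pickup_date, dropoff_date,
                         dropoff_location=None, pickup_time="10:00",
                         dropoff_time="10:00"):
    """Apply the defaults the schema documents before calling
    skiplagged_sk_cars_search: dropoff falls back to pickup, and
    both times fall back to 10:00."""
    return {
        "pickupLocation": pickup_location,
        "dropoffLocation": dropoff_location or pickup_location,
        "pickupDate": pickup_date, "pickupTime": pickup_time,
        "dropoffDate": dropoff_date, "dropoffTime": dropoff_time,
    }

args = normalize_car_search("SFO", "2025-07-01", "2025-07-05")
print(args["dropoffLocation"])  # → SFO
```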
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It partially compensates by disclosing return values ('prices, companies, and car details') since no output schema exists, but fails to mention behavioral traits like pagination behavior, caching, rate limits, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the provider identifier [skiplagged] and immediately states the core function. Zero wasted words or redundant phrases.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 9 parameters with complete schema coverage and no output schema, the description adequately compensates by describing the expected return data structure. It appropriately focuses on the search contract, though it could mention the pagination support (limit/offset) explicitly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema itself. The description acknowledges the key parameter groups (locations/dates) but does not add semantic relationships (e.g., that dropoffLocation defaults to pickup) or usage constraints beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Search), resource (rental cars), and scope (between pickup/dropoff locations/dates). It distinguishes itself from sibling tools like skiplagged_sk_flights_search and skiplagged_sk_hotels_search by explicitly specifying 'rental cars' and the Skiplagged provider prefix.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage through the car rental domain specification, it lacks explicit guidance on when to use this versus other transportation options (flights, ferries) or prerequisites like requiring valid dates. No alternative tools or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
skiplagged_sk_destinations_anywhere (Grade: B)
[skiplagged] Find cheapest destinations from a departure city when flexible about where to go. Perfect for discovering travel opportunities and deals.
| Name | Required | Description | Default |
|---|---|---|---|
| from | Yes | Departure IATA code or city name | |
| adults | No | Number of adult passengers | |
| depart | Yes | Departure date in YYYY-MM-DD format | |
| return | No | Return date in YYYY-MM-DD format (optional for one-way trips) | |
| children | No | Number of child passengers | |
| fare_class | No | Fare class preference | economy |
| infantsLap | No | Number of lap infants | |
| renderMode | No | Preferred render mode for tool output (ui or text). | |
| infantsSeat | No | Number of seat infants | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. However, it fails to explain what data is returned (destination list format, price inclusion), rate limits, caching behavior, or the significance of the 'skiplagged' provider (hidden-city ticketing implications).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two-sentence structure is appropriately front-loaded with the core function. The '[skiplagged]' prefix is slightly noisy but acceptable for provider identification in a multi-tool environment. No significant waste, though it could be more information-dense given the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex search tool with 9 parameters and no output schema or annotations, the description is inadequate. It fails to describe the return structure (what does 'destinations' mean—airports, cities, prices?), pagination, or how results are sorted/ranked.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds minimal value beyond the schema by implying the flexible-destination nature (explaining the absence of a 'to' parameter), but doesn't elaborate on date formats, passenger combinations, or the renderMode parameter's practical impact.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (find cheapest destinations), the resource (destinations from a departure city), and the distinctive scope (when flexible about where to go). This effectively differentiates it from sibling tools like skiplagged_sk_flights_search which likely require a specific destination.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use the tool ('when flexible about where to go'), but lacks explicit guidance on when NOT to use it or named alternatives. It doesn't direct users to skiplagged_sk_flights_search for specific routes, leaving the distinction implied rather than explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
skiplagged_sk_faq_search (Grade: B)
[skiplagged] Search FAQ for relevant answers about Skiplagged - company, stuff, its products, and policies. Useful for customer support and general inquiries
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | The search query for FAQ retrieval | |
| top_k | No | Number of top articles to return | |
| renderMode | No | Preferred render mode for tool output (ui or text). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. States 'Search' implying read-only, but fails to disclose return format (critical given no output_schema), rate limits, or authentication requirements. Minimal behavioral disclosure beyond operation type.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with the core action, and the [skiplagged] prefix aids namespacing. However, includes vague filler ('stuff') that wastes tokens without adding meaning. Reasonably compact but not maximally efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple 3-parameter search tool: identifies the knowledge base (FAQ) and use case. However, lacks description of return values (articles, snippets, URLs?) which is notable given the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (query, top_k, renderMode all documented). Description mentions 'search query' implicitly but adds no syntax details, examples, or semantic clarifications beyond what the schema already provides. Baseline 3 appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Search') and resource ('FAQ') with specific scope ('about Skiplagged - company... products, and policies'). Effectively distinguishes from flight/hotel booking siblings (skiplagged_sk_flights_search, skiplagged_sk_hotels_search). Minor deduction for vague filler word 'stuff'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly identifies context ('Useful for customer support and general inquiries'), providing clear when-to-use guidance. Lacks explicit when-not-to-use or named alternatives, but use-case context is specific enough for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
skiplagged_sk_flex_departure_calendar (Grade: A)
[skiplagged] Generate a flexible calendar of the lowest one-way fares around a chosen departure, returning date → cheapest-price entries to help pick the best day to fly. Intended for flexible-date price discovery, not exact itinerary selection.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort results chronologically or by lowest price first | date |
| adults | No | Number of adult passengers (default: 1, max: 9) | |
| origin | Yes | Departure city name or IATA code (will be resolved to IATA code automatically) | |
| children | No | Number of child passengers (default: 0, max: 8) | |
| infantsLap | No | Number of lap infants (default: 0, max: 4) | |
| renderMode | No | Preferred render mode for tool output (ui or text). | |
| returnDate | No | Return date for matching trip length (optional, YYYY-MM-DD format) | |
| destination | Yes | Arrival city name or IATA code (will be resolved to IATA code automatically) | |
| infantsSeat | No | Number of seat infants (default: 0, max: 4) | |
| departureDate | Yes | Preferred departure date (YYYY-MM-DD format) | |
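The described "date → cheapest-price entries" return shape suggests a simple consumption pattern. A sketch, assuming the calendar arrives as a date-to-fare mapping (the exact wire format is undocumented):

```python
def cheapest_day(calendar):
    """Pick the best day to fly from a {YYYY-MM-DD: lowest fare}
    mapping, the shape the tool's description implies."""
    day, price = min(calendar.items(), key=lambda kv: kv[1])
    return day, price

fares = {"2025-06-18": 142, "2025-06-19": 98, "2025-06-20": 121}
print(cheapest_day(fares))  # → ('2025-06-19', 98)
```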
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return format ('date → cheapest-price entries') and scope limitation, but lacks details on search radius (how many days 'around'?), rate limits, caching behavior, or currency/tax inclusion that would help an agent understand operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. Front-loaded with vendor prefix. First sentence establishes function and output format; second sentence clarifies intent and boundaries. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given rich schema (100% coverage) and conceptual explanation of return values ('date → cheapest-price entries'), the description adequately covers the tool's purpose. Minor gap: doesn't specify the temporal range of the 'around' search or output formatting details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description adds value by contextualizing departureDate as 'chosen departure' (center of flexible range) rather than exact flight date, and implying one-way focus that clarifies the optional nature of returnDate in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Generate') and resource ('flexible calendar') clearly stated. Distinguishes from siblings by specifying 'one-way fares' vs return calendar, and explicitly contrasting with 'exact itinerary selection' to differentiate from skiplagged_sk_flights_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('flexible-date price discovery') and when-not-to-use ('not exact itinerary selection'). Would be perfect if it named the specific alternative tool (skiplagged_sk_flights_search) rather than just the category.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
skiplagged_sk_flex_return_calendar (Grade: A)
[skiplagged] Generate a flexible round-trip price calendar for a fixed-length stay around a chosen travel window. Returns (depart date, return date, lowest total price) entries for nearby date pairs that preserve the original trip length; intended for price discovery, not exact itinerary selection.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort results chronologically or by lowest price first | date |
| adults | No | Number of adult passengers (default: 1, max: 9) | |
| origin | Yes | Departure city name or IATA code (will be resolved to IATA code automatically) | |
| children | No | Number of child passengers (default: 0, max: 8) | |
| infantsLap | No | Number of lap infants (default: 0, max: 4) | |
| renderMode | No | Preferred render mode for tool output (ui or text). | |
| returnDate | Yes | Return date for the reference trip (YYYY-MM-DD format) | |
| destination | Yes | Arrival city name or IATA code (will be resolved to IATA code automatically) | |
| infantsSeat | No | Number of seat infants (default: 0, max: 4) | |
| departureDate | Yes | Preferred departure date (YYYY-MM-DD format) | |
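The fixed-length-stay logic can be made concrete: shift both dates together so the trip duration is preserved. The `radius_days` value here is an assumption, since the description never defines how wide "nearby" is:

```python
from datetime import date, timedelta

def flex_date_pairs(departure, ret, radius_days=3):
    """Enumerate (depart, return) pairs around the reference trip
    that keep the original trip length, as the tool describes."""
    d0 = date.fromisoformat(departure)
    length = date.fromisoformat(ret) - d0  # fixed stay duration
    pairs = []
    for shift in range(-radius_days, radius_days + 1):
        d = d0 + timedelta(days=shift)
        pairs.append((d.isoformat(), (d + length).isoformat()))
    return pairs

print(flex_date_pairs("2025-06-19", "2025-06-26", radius_days=1))
```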
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full burden. It successfully discloses the return structure (depart date, return date, price tuples) and the core logic (preserving trip length while varying dates). However, it lacks operational details such as search radius ('nearby' is undefined), rate limits, or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two dense sentences with zero waste. First sentence establishes the function and constraints; second sentence describes the return format and intended use. Every clause provides essential information for tool selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description adequately compensates by detailing the return tuple structure. With 100% schema coverage and clear explanation of the flex algorithm, it covers the essential domain context. Minor gap: does not specify the temporal range of 'nearby' dates.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage (baseline 3), the description adds valuable semantic context about the relationship between departureDate and returnDate—that they define a fixed duration that is maintained across alternative date pairs. This explains the 'flex' logic beyond what the schema fields convey individually.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Generate') and clearly identifies the resource ('flexible round-trip price calendar'). It distinguishes itself from sibling tools by emphasizing 'fixed-length stay' and preserving 'original trip length,' implying the difference from skiplagged_sk_flex_departure_calendar and exact search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the intended use case ('price discovery') and the non-intended use ('not exact itinerary selection'), providing clear behavioral guidance. However, it does not explicitly name which sibling tool (e.g., skiplagged_sk_flights_search) to use for actual itinerary selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
skiplagged_sk_flights_search (Grade: C)
[skiplagged] Search Skiplagged for flights between specific locations with filtering options for passengers, fare class, stops, airlines, and timing preferences
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort flights by price (cheapest first), duration (shortest first), or value (best price/time ratio, default) | value |
| limit | No | Maximum number of flights to return (default: 12, max: 100) | |
| adults | No | Number of adult passengers (default: 1, max: 9) | |
| offset | No | Number of flights to skip for pagination (default: 0) | |
| origin | Yes | Departure location - prefer IATA code | |
| children | No | Number of child passengers (default: 0, max: 8) | |
| maxStops | No | Maximum number of stops (none=nonstop, one=1 stop, many=2+ stops) | |
| fareClass | No | Fare class preference (default: economy) | economy |
| infantsLap | No | Number of lap infants (default: 0, max: 4) | |
| renderMode | No | Preferred render mode for tool output (ui or text). | |
| returnDate | No | ||
| destination | Yes | Arrival location - prefer IATA code | |
| infantsSeat | No | Number of seat infants (default: 0, max: 4) | |
| departureDate | Yes | ||
| arrivalAirports | No | Array of specific arrival airport codes to include (when destination city has multiple airports) | |
| includeStandard | No | Include standard flights (default: true) | |
| excludedAirlines | No | Array of excluded airline codes (e.g., ['F9', 'NK']) | |
| arrivalTimeLatest | No | Latest arrival time in minutes from midnight (0-1439) | |
| departureAirports | No | Array of specific departure airport codes to include (when origin city has multiple airports) | |
| includeHiddenCity | No | Include hidden city flights (default: true) | |
| maxFlightDuration | No | Maximum total flight duration in minutes | |
| preferredAirlines | No | Array of preferred airline codes (e.g., ['UA', 'DL']) | |
| maxLayoverDuration | No | Maximum layover duration in minutes | |
| arrivalTimeEarliest | No | Earliest arrival time in minutes from midnight (0-1439) | |
| departureTimeLatest | No | Latest departure time in minutes from midnight (0-1439) | |
| departureTimeEarliest | No | Earliest departure time in minutes from midnight (0-1439) | |
| includeVirtualInterlining | No | Include virtual interlining flights (default: true) | |
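The time-window parameters above expect minutes from midnight (0-1439) rather than clock times, which is easy to get wrong on a first call. A minimal sketch of building the request arguments, assuming the parameter names from the schema table (the MCP client call itself is omitted and will vary by client):

```python
# Hypothetical argument builder for skiplagged_sk_flights_search.
# Parameter names are taken from the schema table above; values are examples.

def minutes_from_midnight(hhmm: str) -> int:
    """Convert a 24-hour 'HH:MM' clock time to minutes from midnight (0-1439)."""
    hours, minutes = map(int, hhmm.split(":"))
    return hours * 60 + minutes

args = {
    "origin": "JFK",                  # prefer IATA codes per the schema
    "destination": "LAX",
    "departureDate": "2025-09-01",
    "adults": 2,
    "maxStops": "one",                # none / one / many
    "sort": "price",                  # cheapest first
    "departureTimeEarliest": minutes_from_midnight("06:00"),  # 360
    "arrivalTimeLatest": minutes_from_midnight("22:30"),      # 1350
    "excludedAirlines": ["F9", "NK"],
}
```

Note that with no output schema published, the shape of the response still has to be discovered empirically.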
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to indicate whether this tool performs booking or just search, what the return format is (critical given no output schema exists), rate limits, or that Skiplagged specializes in hidden-city ticketing (despite the includeHiddenCity parameter). It only mentions 'filtering options' without explaining side effects or safety profiles.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence that front-loads the core action. The bracketed '[skiplagged]' prefix is slightly redundant given the tool name, but the structure is otherwise tight with no wasted words. Every element contributes to understanding the tool's scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high schema coverage, the description appropriately delegates parameter details to the schema. However, for a complex 27-parameter tool with no output schema and no annotations, the description lacks critical context about return values, pagination behavior (despite offset/limit parameters), or the unique 'hidden city' nature of Skiplagged results. It meets minimum viability but leaves significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 93% schema description coverage, the input schema already comprehensively documents the 27 parameters. The description provides a high-level categorization ('filtering options for passengers, fare class...') but adds no syntax details, format examples, or semantic clarifications beyond what the schema already provides, warranting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Search), the specific service (Skiplagged), and the resource (flights). The phrase 'between specific locations' effectively distinguishes it from the sibling tool 'skiplagged_sk_destinations_anywhere' (flexible destinations), though it doesn't explicitly differentiate from other flight search providers like kiwi_search-flight.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists available filter categories (passengers, fare class, etc.) but provides no explicit guidance on when to use this tool versus alternatives such as the sibling calendar tools (skiplagged_sk_flex_departure_calendar) or the 'anywhere' destination search. No prerequisites or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
skiplagged_sk_hotel_details (Grade A)
[skiplagged] Fetch room-level availability, pricing, and amenities for a specific hotel and stay dates.
| Name | Required | Description | Default |
|---|---|---|---|
| live | No | Use live rates (includes booking links). Disable for faster, less accurate cached rates. | |
| checkin | Yes | Check-in date (YYYY-MM-DD format) | |
| hotelId | Yes | Hotel ID (from search results) to fetch detailed availability for | |
| checkout | Yes | Check-out date (YYYY-MM-DD format) | |
| numRooms | No | Number of rooms (default: 1, max: 5) | |
| numAdults | No | Number of adults (default: 2, max: 10) | |
| renderMode | No | Preferred render mode for tool output (ui or text). | |
| numChildren | No | Number of children (default: 0, max: 10) |
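Since the schema requires YYYY-MM-DD dates and an ID from a prior search, a caller can validate the stay before invoking the tool. A sketch under those assumptions (the `hotelId` value is a placeholder, not a real ID format):

```python
from datetime import date

def stay_nights(checkin: str, checkout: str) -> int:
    """Validate YYYY-MM-DD stay dates and return the number of nights."""
    ci, co = date.fromisoformat(checkin), date.fromisoformat(checkout)
    if co <= ci:
        raise ValueError("checkout must be after checkin")
    return (co - ci).days

# Hypothetical argument set for skiplagged_sk_hotel_details; in practice
# hotelId comes from skiplagged_sk_hotels_search results.
args = {
    "hotelId": "hotel_12345",   # placeholder ID
    "checkin": "2025-09-01",
    "checkout": "2025-09-04",
    "numRooms": 1,
    "numAdults": 2,
    "live": True,               # live rates include booking links
}
```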
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses data granularity ('room-level') and return data types, but omits operational details like rate limits, caching behavior, or failure modes that would help the agent predict tool behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. Every phrase earns its place: '[skiplagged]' identifies provider, 'room-level' specifies granularity, and 'specific hotel' clarifies scope. Front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description compensates by listing return data categories (availability, pricing, amenities). With 100% input schema coverage, the description provides adequate context for an 8-parameter tool, though behavioral caveats would strengthen it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are fully self-documenting. The description references key concepts (stay dates, specific hotel) that map to parameters but adds no technical syntax, validation rules, or semantic relationships beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Fetch' with clear resource scope ('room-level availability, pricing, and amenities'). The phrase 'specific hotel' effectively distinguishes this from sibling search tools like 'skiplagged_sk_hotels_search' by implying a prior selection step.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context through 'specific hotel' (suggesting a hotel ID from prior search is required), but lacks explicit workflow guidance, alternatives naming, or prerequisites compared to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
skiplagged_sk_hotels_search (Grade B)
[skiplagged] Search Skiplagged for hotels in a city with specific check-in and check-out dates, including ratings and pricing.
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | City name to search for hotels (will be matched against available cities) | |
| sort | No | Sort hotels by price (cheapest first), ranking (highest rated first), value (best value rating), or discount (best deals first, default when deals available) | value |
| limit | No | Maximum number of hotels to return (default: 12, max: 100) | |
| offset | No | Number of hotels to skip for pagination (default: 0) | |
| checkin | Yes | Check-in date (YYYY-MM-DD format) | |
| checkout | Yes | Check-out date (YYYY-MM-DD format) | |
| numRooms | No | Number of rooms (default: 1, max: 9) | |
| numAdults | No | Number of adults (default: 2, max: 10) | |
| renderMode | No | Preferred render mode for tool output (ui or text). | |
| numChildren | No | Number of children (default: 0, max: 10) |
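The `offset`/`limit` pair suggests plain skip/take pagination, though the description never says so. A sketch of page-by-page requests under that assumption:

```python
# Hypothetical paginated request builder for skiplagged_sk_hotels_search,
# assuming offset/limit behave as standard skip/take pagination.

def hotel_search_args(city: str, checkin: str, checkout: str,
                      page: int = 0, page_size: int = 12) -> dict:
    """Build arguments for one zero-based page of hotel results."""
    return {
        "city": city,
        "checkin": checkin,
        "checkout": checkout,
        "sort": "ranking",          # highest rated first
        "limit": page_size,
        "offset": page * page_size,
    }
```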
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that results include 'ratings and pricing' (useful return value context) and indicates the external Skiplagged dependency. However, it omits pagination behavior, rate limits, caching policies, or real-time vs. cached data characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It front-loads the action verb ('Search'), identifies the resource ('hotels'), and specifies key constraints ('city', 'dates') immediately. The '[skiplagged]' prefix provides clear namespacing without clutter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 10 parameters, no output schema, and no annotations, the description meets minimum viability by stating core functionality and return data types (ratings/pricing). However, it should mention pagination capabilities (offset/limit) or sorting options to fully contextualize the search behavior for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'city' and 'check-in and check-out dates' aligning with required parameters, but adds no semantic clarity for advanced parameters like 'sort' (price/ranking/value/discount) or 'renderMode' beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for hotels on Skiplagged by city with check-in/out dates, distinguishing it from sibling flight/car search tools. However, it doesn't explicitly differentiate from 'skiplagged_sk_hotel_details' (list vs. specific hotel lookup), which would strengthen selection clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'skiplagged_sk_hotel_details' or other hotel search tools (e.g., trivago). It states required inputs but lacks 'when-not-to-use' or prerequisite context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
skiplagged_sk_resolve_iata (Grade B)
[skiplagged] Resolve a city name to a valid IATA code
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | City name code to resolve to IATA code | |
| renderMode | No | Preferred render mode for tool output (ui or text). |
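Since the description does not say whether the tool accepts codes as well as names, an agent can apply a pre-check of its own. The heuristic below is an assumption, not tool behavior: treat a bare 3-letter uppercase token as an IATA code and skip resolution.

```python
import re

# Heuristic pre-check (assumption, not part of the tool): a 3-letter
# uppercase token is probably already an IATA code.
def needs_resolution(location: str) -> bool:
    return re.fullmatch(r"[A-Z]{3}", location) is None

query = "Krakow"
if needs_resolution(query):
    args = {"input": query}   # arguments for skiplagged_sk_resolve_iata
```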
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention error handling (what if city not found?), ambiguity resolution (multi-airport cities like 'New York'), return format, or whether matching is fuzzy/exact.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence with no extraneous information. The '[skiplagged]' prefix is minimal metadata, and the functional description is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter lookup tool, the description covers the core contract but lacks important context given zero annotations and no output schema—specifically how ambiguous city names are handled and what the return structure looks like.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, documenting both 'input' (city name) and 'renderMode' (output format). The description adds no additional parameter context (e.g., expected city name format), but baseline 3 is appropriate since the schema fully documents the interface.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the transformation (resolve city name → IATA code) with a specific verb and output format. However, it does not explicitly differentiate from the sibling tool 'skiplagged_sk_resolve_location', leaving ambiguity about which resolution tool to use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'skiplagged_sk_resolve_location' or 'airports_lookup'. No mention of prerequisites (e.g., does the city need to be exact match?) or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
skiplagged_sk_resolve_location (Grade A)
[skiplagged] Resolve latitude and longitude to the IATA code of nearest airport and city information
| Name | Required | Description | Default |
|---|---|---|---|
| lat | Yes | latitude | |
| lng | Yes | longitude | |
| renderMode | No | Preferred render mode for tool output (ui or text). |
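The schema documents `lat`/`lng` but states no valid ranges. A pre-flight bounds check using standard geographic limits (an assumption on the caller's side, not documented tool behavior):

```python
# Bounds check before calling skiplagged_sk_resolve_location; the ranges
# are standard geographic limits, which the schema itself does not state.
def coord_args(lat: float, lng: float) -> dict:
    if not -90.0 <= lat <= 90.0:
        raise ValueError(f"latitude out of range: {lat}")
    if not -180.0 <= lng <= 180.0:
        raise ValueError(f"longitude out of range: {lng}")
    return {"lat": lat, "lng": lng}
```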
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully notes the 'nearest airport' selection behavior (returning one result, not a list) and describes the return values (IATA code and city information) compensating for the missing output schema. However, it omits details on search radius, error handling when no airport is nearby, or rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence structure with zero wasted words. The information is front-loaded with the provider namespace '[skiplagged]' followed immediately by the verb and resource mapping. Every clause serves to clarify the transformation from input to output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 simple parameters) and lack of output schema, the description adequately compensates by specifying the return format (IATA code and city information). It successfully communicates the core functionality, though it could be improved by noting the geographic search radius or error response behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage ('latitude', 'longitude', 'Preferred render mode'), establishing a baseline of 3. The description mentions 'latitude and longitude' but does not augment the schema with additional semantic details such as valid coordinate ranges, decimal precision requirements, or examples. It meets but does not exceed the schema's semantic coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('Resolve'), clear input resource ('latitude and longitude'), and specific output ('IATA code of nearest airport and city information'). The '[skiplagged]' prefix distinguishes the provider, and the explicit lat/lng input clearly differentiates it from sibling tool 'skiplagged_sk_resolve_iata' which likely performs the inverse operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying coordinate inputs, indicating it should be used when exact latitude/longitude are available. However, it lacks explicit guidance on when to use this versus siblings like 'airports_near' or 'skiplagged_sk_resolve_iata', and does not mention exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_algolia-operator-search (Grade A)
[tourradar] Use this when you need to find a tour operator's ID by their name for filtering tours.
Searches tour operators by name and returns matching operators with their IDs. Use the returned operator ID with tour search filters.
| Name | Required | Description | Default |
|---|---|---|---|
| textSearch | Yes | Operator name for search, for example `Topdeck` |
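The description implies a lookup-then-filter workflow: search by name, extract the ID, feed it to a tour search. A sketch of the extraction step; the response shape (an `operators` list of `{id, name}` dicts) is an assumption, since the tool publishes no output schema.

```python
# Assumed response shape: {"operators": [{"id": ..., "name": ...}, ...]}.
def pick_operator_id(response: dict, wanted: str):
    """Return the ID of the first operator whose name matches, case-insensitively."""
    for op in response.get("operators", []):
        if op.get("name", "").lower() == wanted.lower():
            return op["id"]
    return None

# Step 1: call the tool with {"textSearch": "Topdeck"}; step 2: pass the
# returned ID into a tour search filter.
```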
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It adds valuable context about return values ('returns matching operators with their IDs'), but lacks operational details such as whether the search is case-sensitive, supports fuzzy matching, or has rate limiting considerations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences that each earn their place: (1) namespaced invocation context and when-to-use, (2) core function and return value, (3) downstream workflow guidance. Minor redundancy exists between sentences 1 and 2 regarding 'by name' searching, but overall efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool without output schema, the description is sufficiently complete. It explains the input purpose, the matching behavior, and the intended use of the return value (ID for filtering), covering all necessary context for an agent to use this in a multi-step workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (textSearch is well-documented with an example), the baseline is 3. The description references searching 'by their name' which aligns with the parameter, but adds no additional syntax, format constraints, or semantic details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific purpose: finding a tour operator's ID by name for filtering tours. It distinguishes itself from sibling tools like 'tourradar_b2b-operator-details' by focusing specifically on the lookup-by-name use case rather than general operator information retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('Use this when you need to find a tour operator's ID...') and workflow continuity ('Use the returned operator ID with tour search filters'). However, it could be strengthened by explicitly naming the specific tour search tool (e.g., tourradar_vertex-tour-search) that consumes these IDs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_b2b-cities-search (Grade A)
[tourradar] Use this when you need to find city IDs for tour search filters like start city, end city, or cities to visit.
Searches for cities by name within a specific country. Supports multiple name variants to handle different spellings (e.g., Krakow, Kraków, Cracow).
| Name | Required | Description | Default |
|---|---|---|---|
| search | Yes | City name filter. It is recommended to use this filter to narrow down the search results. Please use english only names. How to search: If you are looking for Kraków city, please pass potential candidates here, like: ['Krakow', 'Kraków', 'Cracow']. It will return all cities that match any of the names. | |
| country_code | Yes | ISO 3166-1 alpha-2 country code (e.g., 'US' for USA, 'GB' for United Kingdom, 'DE' for Germany). |
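The schema's multi-variant guidance can be partly automated: ASCII-folding covers diacritic variants, while historical names (e.g. "Cracow") still have to be supplied by hand. A sketch of building the `search` value that way:

```python
import unicodedata

# Derive an ASCII-folded spelling to include alongside the native one,
# per the schema's multi-variant guidance for the "search" parameter.
def name_variants(city: str) -> list[str]:
    folded = unicodedata.normalize("NFKD", city).encode("ascii", "ignore").decode()
    variants = [city]
    if folded and folded != city:
        variants.append(folded)
    return variants

args = {
    "search": name_variants("Kraków") + ["Cracow"],  # historical name added manually
    "country_code": "PL",
}
```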
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully explains the fuzzy matching behavior for city names ('supports multiple name variants to handle different spellings'), which is critical for correct invocation. It lacks details on error handling or exact return structure, but the core behavioral trait (variant matching) is well documented.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two highly efficient sentences with no redundancy. It front-loads the '[tourradar]' namespace tag and immediate usage guidance, followed by behavioral specifics and concrete examples. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, 100% schema coverage, no nested objects), the description provides adequate context. It clarifies the output purpose (city IDs) despite lacking an output schema, though explicitly describing the return format would strengthen completeness further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value beyond the schema by illustrating the specific technique for using the 'search' parameter (providing multiple spelling variants like 'Krakow', 'Kraków', 'Cracow'), which helps the agent understand the intended parameter usage pattern.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for cities to find IDs for use in tour search filters (start city, end city, cities to visit). It distinguishes itself from sibling tools like 'tourradar_vertex-tour-search' by explicitly stating its role as a lookup service for filter parameters rather than a tour discovery tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool ('when you need to find city IDs for tour search filters'), providing clear workflow context. However, it lacks explicit 'when not to use' guidance or named alternative tools for cases where users already have city IDs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_b2b-continents-list (Grade A)
[tourradar] Use this when you need continent IDs for filtering tours by region.
Returns a list of all supported continents with their IDs and names for use in tour search filters.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It correctly identifies the operation as read-only by stating it 'Returns a list,' and specifies the payload contents (IDs and names). However, it lacks details on rate limits, caching behavior, authentication requirements, or error conditions that would fully inform an agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. It is front-loaded with usage context ('Use this when...'), followed by behavioral specifics ('Returns a list...'). Every sentence earns its place by providing distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (zero parameters, simple list return) and absence of an output schema, the description adequately compensates by explaining what data is returned (continents with IDs and names) and its purpose (tour search filters). It is complete enough for a lookup utility, though it could benefit from mentioning data freshness or pagination.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters (empty object). According to the evaluation baseline, tools with zero parameters receive a baseline score of 4, as there are no parameter semantics to elaborate upon in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Returns a list of all supported continents with their IDs and names,' providing a specific verb (returns/list) and resource (continents). It clearly distinguishes itself from sibling tools like tourradar_b2b-countries-list by specifying 'continent IDs for filtering tours by region' versus country or city lookups.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The opening sentence '[tourradar] Use this when you need continent IDs for filtering tours by region' provides explicit when-to-use guidance. However, it lacks explicit when-not-to-use guidance or named alternatives (e.g., it doesn't mention to use countries-list for country-level filtering instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_b2b-countries-list (Grade A)
[tourradar] Use this when you need to look up country IDs for filtering tours or validating country names.
Returns a complete list of all supported countries with their IDs, names, and ISO country codes.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return structure (IDs, names, ISO codes) and implies it's a read operation, but lacks details on data freshness, payload size, caching, or rate limits that would help an agent understand operational characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence provides usage guidance, second describes return values. Efficient structure with critical information front-loaded (including the [tourradar] namespace tag).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter reference tool without output schema, the description adequately covers the return structure (IDs, names, ISO codes) and use cases. Minor gap regarding data volume or update frequency prevents a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the baseline score of 4 applies; no parameter-level clarification is required or possible.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool looks up 'country IDs for filtering tours or validating country names' and returns 'a complete list of all supported countries.' Specific verb+resource combination distinguishes it from sibling tools like cities-search, continents-list, and tour-search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('when you need to look up country IDs for filtering tours or validating country names'), providing clear context for selection. Lacks explicit 'when not to use' or named alternative tools, preventing a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_b2b-currencies-list (grade A)
[tourradar] Use this when you need to check supported currencies or get currency details like symbols.
Returns a list of all supported currencies with their codes (USD, EUR), names, and symbols ($, €).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adequately describes the return value structure ('list of all supported currencies with their codes... names, and symbols'), but lacks safety indicators (read-only status, side effects, rate limits) that would help an agent understand operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of exactly two high-value sentences: one for usage conditions and one for return value specification. The '[tourradar]' prefix serves as useful namespacing metadata. No redundancy or filler text is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters, no output schema), the description provides sufficient context by detailing the return format (codes, names, symbols). For a static lookup endpoint, this coverage is adequate, though mention of B2B-specific constraints would elevate it to a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. According to scoring rules, 0 parameters establishes a baseline score of 4. The description correctly does not invent parameters, maintaining alignment with the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'check[s] supported currencies or get[s] currency details like symbols,' providing a specific verb + resource combination. It distinguishes from siblings like tourradar_b2b-countries-list and tourradar_b2b-languages-list by explicitly focusing on currency codes and symbols.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description opens with explicit usage guidance: 'Use this when you need to check supported currencies...' providing clear when-to-use context. While it does not explicitly state when NOT to use it or name alternatives, no sibling tools provide currency lookup functionality, making this omission minor.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_b2b-languages-list (grade A)
[tourradar] Use this when you need language IDs for filtering tours by guide language.
Returns a list of all supported languages with their IDs, codes, and names for use in tour search filters.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully compensates for the missing output schema by describing the return structure (IDs, codes, and names). However, it lacks other critical behavioral context such as rate limits, caching behavior, authentication requirements, or explicit confirmation that this is a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two highly efficient sentences. The first establishes the use case context, and the second describes the return payload. There is no redundant text, and the '[tourradar]' prefix effectively namespaces the tool without cluttering the descriptive content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (zero parameters, simple list return), the description is nearly complete. It explains the return values since no output schema exists and connects the tool to its ecosystem (tour search filters). It could be improved by explicitly naming the specific parameters in sibling tools that accept these language IDs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4. The description correctly omits parameter discussion since none exist, requiring no additional semantic clarification beyond what the empty schema already communicates.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a list of supported languages with their IDs, codes, and names. It specifies the exact resource (languages) and action (returns/list), and distinguishes itself from siblings by clarifying its specific purpose: obtaining language IDs for filtering tours by guide language.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use guidance ('Use this when you need language IDs for filtering tours by guide language') and context ('for use in tour search filters'). However, it does not explicitly name sibling tools (like tourradar_vertex-tour-search) that consume these IDs or describe when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_b2b-operator-details (grade A)
[tourradar] Use this when the user wants information about a tour operator, such as their terms and policies.
Fetches operator details including name, code, and terms & conditions based on operator ID.
| Name | Required | Description | Default |
|---|---|---|---|
| operatorId | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It compensates partially by listing specific return fields (name, code, terms), but omits safety classification (read-only vs destructive), error handling (invalid operatorId), or rate limiting context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with clear separation of usage context (first) and functional description (second). The '[tourradar]' tag at the start is unnecessary clutter given the tool name convention, but the description is otherwise efficient, with no redundant phrasing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple single-parameter lookup tool. Covers the essential contract: input (operatorId), output fields (name, code, terms), and trigger condition. Missing only the origin/source of valid operator IDs, which would complete the usage loop.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (operatorId lacks description). The text mentions 'based on operator ID' which maps to the parameter, but fails to fully compensate by explaining what constitutes a valid operator ID or where to obtain it (e.g., from search results). Just meets baseline expectations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource (tour operator) and specific data retrieved (name, code, terms & conditions). It implicitly distinguishes from siblings like 'tourradar_b2b-tour-details' (tours vs operators) and 'tourradar_algolia-operator-search' (lookup by ID vs search). The '[tourradar]' prefix is redundant noise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a clear trigger ('when the user wants information about a tour operator'), but fails to mention the critical sibling relationship with 'tourradar_algolia-operator-search'—users typically need to search/find operators before looking up details by ID. No 'when not to use' guidance provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_b2b-tour-departures (grade A)
[tourradar] Use this when the user asks about available departure dates, pricing for specific dates, or needs to validate if a departure date is available.
Returns a list of departures for a specific tour within a date range, including availability status, pricing, and booking information.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | 1 |
| tourId | Yes | Tour ID | |
| dateRange | Yes | Returns only departure dates in the desired range | |
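To make the parameter contract concrete, here is a minimal sketch of the MCP `tools/call` request an agent might send to this tool. The envelope follows the standard MCP JSON-RPC shape; the `tourId` value and the `dateRange` encoding are illustrative assumptions, since the schema excerpt above does not document the wire format.

```python
import json

# Hypothetical tools/call payload for tourradar_b2b-tour-departures.
# The tourId value is assumed to come from a prior tour search; the
# dateRange string is an assumed encoding, not a documented format.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "tourradar_b2b-tour-departures",
        "arguments": {
            "tourId": 12345,                       # assumed ID from a search tool
            "dateRange": "2025-06-01,2025-08-31",  # assumed range encoding
            "page": 1,                             # optional; defaults to 1
        },
    },
}

print(json.dumps(payload, indent=2))
```

A description that pinned down the `dateRange` format would let an agent build this payload correctly on the first attempt instead of guessing.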
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully describes the return payload ('list of departures... including availability status, pricing, and booking information'), but fails to disclose operational traits like read-only safety, pagination limits, or error handling behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two highly efficient sentences. The first is front-loaded with the trigger condition, and the second explains the return value. No redundancy or filler text exists; every word serves a distinct purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately compensates by detailing what the tool returns (departures with availability, pricing, and booking info). Minor gap: it does not mention that tourId typically comes from companion search tools, but overall coverage is sufficient for a list/query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters (tourId, dateRange, page). The description references 'specific tour' and 'date range' which align with parameters, but adds no additional semantic context (e.g., date format details, pagination behavior) beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool's purpose with specific verbs ('returns') and resources ('departures', 'availability status', 'pricing'). It distinguishes from siblings like tourradar_b2b-tour-details (general info) and tourradar_web-tour-booking (reservation) by focusing specifically on listing available departure dates within a range.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The first sentence provides explicit when-to-use guidance ('when the user asks about available departure dates, pricing for specific dates, or needs to validate if a departure date is available'). However, it lacks explicit when-NOT-to-use guidance or mention of prerequisite steps (e.g., obtaining tourId from search tools).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_b2b-tour-details (grade B)
[tourradar] Use this when the user wants to see detailed information about a specific tour.
Fetches comprehensive tour details including itinerary, pricing, operator info, images, and booking links based on tour ID.
| Name | Required | Description | Default |
|---|---|---|---|
| tourId | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full burden. It compensates partially by listing return fields (itinerary, pricing, images, booking links), but fails to disclose safety properties (read-only vs destructive), authentication requirements, or error conditions. 'Fetches' implies a read operation, but this isn't explicitly confirmed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. The '[tourradar]' prefix serves as useful namespacing. The structure front-loads the usage trigger and follows with the technical capability. Minor deduction for the informal bracketed prefix.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool with no output schema, as it describes the return payload contents. However, gaps remain: it doesn't explain the parameter's semantics despite zero schema documentation, and doesn't clarify relationships with the numerous sibling tourradar tools that also require tour IDs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage for the tourId parameter. The description mentions 'based on tour ID' which acknowledges the parameter's existence, but doesn't explain what a tour ID represents, where to obtain it (e.g., from vertex-tour-search), or the exclusiveMinimum constraint. Barely meets minimum viability for undocumented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Fetches comprehensive tour details' with specific examples (itinerary, pricing, operator info), distinguishing it from general search tools. However, it doesn't explicitly differentiate from sibling tour-specific tools like tour-departures or tour-faq.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides a clear 'Use this when...' trigger for detailed tour information, but lacks explicit guidance on when NOT to use it (e.g., for availability dates use tour-departures instead) and doesn't mention the prerequisite of obtaining a tourId from search tools first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_b2b-tour-faq (grade A)
[tourradar] Use this when the user has questions about a tour that might be answered in the FAQ section.
Returns a paginated list of frequently asked questions and answers about a specific tour, covering topics like inclusions, requirements, and policies.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | 1 |
| tourId | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully discloses pagination behavior ('paginated list') and content scope ('covering topics like inclusions, requirements, and policies'), but lacks information on rate limits, error handling, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. It is front-loaded with usage context ('Use this when...') followed by return value description, making it easy for an agent to quickly assess relevance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simple 2-parameter schema and lack of output schema, the description adequately covers the essential usage trigger and return format. However, it could improve by explicitly describing the required tourId parameter and pagination limits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50% (page is documented, tourId is not). The description implies the tourId parameter through 'specific tour' but does not explicitly document its semantics, format, or how to obtain it. This meets baseline expectations for partial coverage without full compensation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a 'paginated list of frequently asked questions and answers about a specific tour' (specific verb+resource). It distinguishes itself from sibling tools like tourradar_b2b-tour-details by explicitly scoping to FAQ content rather than general tour information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description opens with explicit when-to-use guidance: 'Use this when the user has questions about a tour that might be answered in the FAQ section.' However, it does not explicitly name alternatives (e.g., when to prefer tourradar_b2b-tour-details over this tool) or provide exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_b2b-tour-map (grade A)
[tourradar] Use this when the user wants to see tour routes on a map or compare itineraries visually.
Fetches tour details for one or more tours and prepares map data with all itinerary locations, coordinates, and day-by-day information for visual display.
| Name | Required | Description | Default |
|---|---|---|---|
| tourIds | Yes | | |
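Since the schema's `minItems: 1` constraint on `tourIds` is discussed below but never surfaced in the description, a client would have to enforce it itself. A minimal sketch of that client-side check, with illustrative (not real) tour IDs:

```python
# Client-side guard for tourradar_b2b-tour-map's tourIds argument, which
# the schema constrains with minItems: 1. ID values are illustrative.
def build_tour_map_args(tour_ids):
    if not tour_ids:
        raise ValueError("tourIds must contain at least one ID (minItems: 1)")
    return {"tourIds": list(tour_ids)}

# One or more IDs are accepted; an empty list is rejected before the call.
args = build_tour_map_args([101, 202])
```

Stating the constraint in the description would let an agent avoid a round trip that is guaranteed to fail validation.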
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the data transformation aspect (preparing map data with day-by-day coordinates) but omits safety profiles (read-only vs destructive), output format details, or rate limiting context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence front-loads the usage condition, while the second explains the technical behavior. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single-parameter simplicity and lack of output schema, the description adequately covers the tool's specific niche (map visualization). However, with 0% schema coverage, it should ideally explain the tourId format or sourcing to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. It references 'one or more tours' which aligns with the tourIds array parameter and minItems: 1 constraint, providing basic semantic context. However, it does not explain what tourIds represent or where to obtain them.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Fetches', 'prepares') and clearly identifies the resource (tour map data with itinerary locations and coordinates). It effectively distinguishes from sibling tourradar_b2b-tour-details by emphasizing 'map', 'visual display', and 'coordinates'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The first sentence provides explicit when-to-use guidance ('when the user wants to see tour routes on a map or compare itineraries visually'). However, it does not explicitly name the alternative tool (tour-details) for non-visual data retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_b2b-tour-types-list (grade A)
[tourradar] Use this when you need tour type IDs for filtering tours by category like adventure, cultural, or wildlife.
Returns a hierarchical list of tour type groups and their individual types with IDs and names.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context by specifying the return structure is 'hierarchical' (groups containing individual types). However, it omits other behavioral traits like idempotency, potential data volume, or error conditions that would be expected for a read-only reference tool without annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient two-sentence structure: front-loaded with use case (when you need IDs for filtering) followed by return value specification (hierarchical list). No redundant words; every clause delivers essential disambiguating information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter lookup tool, the description adequately compensates for the missing output schema by describing the hierarchical return structure and included fields (IDs and names). A 5 would require either an output schema or richer detail about the data format/limits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, warranting the baseline score of 4. The description appropriately focuses on return value semantics rather than inventing parameter documentation where none exists.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool returns 'tour type IDs for filtering tours by category' with concrete examples (adventure, cultural, wildlife), distinguishing it from sibling tour search tools (tourradar_vertex-tour-search) and other reference lists (cities, countries). The '[tourradar]' prefix identifies the domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit 'Use this when...' guidance for obtaining filter IDs, establishing clear context for when an agent needs categorical taxonomy vs. specific tour details. Lacks explicit 'when not to use' or named alternatives, though the functional distinction from search tools is implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_general-current-date (grade A)
[tourradar] Use this when you need to know the current date, especially before setting departure date filters.
Returns the current date and time in ISO format. Essential for calculating valid date ranges for tour searches.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It specifies the ISO format return type, which is valuable, but omits other behavioral traits like timezone handling (UTC vs local), idempotency, or whether the result is cached.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three dense sentences with zero redundancy. It front-loads the usage context ('Use this when...') before detailing the return format and operational importance, ensuring every clause provides actionable information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters, no output schema), the description adequately covers the essential context: return format (ISO), use case (date filter calculations), and namespace. A minor gap is the lack of timezone specification, but this is sufficient for agent operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline of 4. The description appropriately does not invent parameters, maintaining consistency with the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Returns the current date and time' and specifies the '[tourradar]' namespace, distinguishing it from sibling travel search tools like 'tourradar_vertex-tour-search' which perform searches rather than returning temporal reference data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit temporal context for usage ('especially before setting departure date filters' and 'Essential for calculating valid date ranges'), clearly indicating when to invoke it relative to tour search operations, though it does not explicitly name alternative tools to avoid.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_vertex-review-search
[tourradar] Search tour reviews using AI-powered semantic search. Requires tourIds to scope results to specific tours. Use this when the user asks about reviews, feedback, or experiences for specific tours. Combine with an optional text query to find reviews mentioning specific topics (e.g., 'food', 'guide', 'accommodation'). When you don't have tour IDs, use vertex-tour-search or vertex-tour-title-search first to find them.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Short natural language search query to find specific review topics. Keep it concise — do NOT expand into keyword lists. Use simple phrases like 'food quality', 'guide', 'hotel issues'. The search engine handles semantic matching automatically. Prefer using 'keywords' parameter instead when user intent maps to specific topics. Leave empty to return all reviews. | |
| tourIds | No | Tour IDs to scope reviews to specific tours. Use vertex-tour-title-search to find tour IDs if needed. | |
| keywords | No | Array of keyword phrases to search reviews for. These are joined with OR logic to find reviews matching any of the given topics. Use short phrases (e.g., ['food quality', 'meals', 'restaurant']). When provided, this takes priority over 'query'. | |
| pageSize | No | Number of reviews to return (1-20, default 10) | |
| pageToken | No | Token for fetching the next page of results. Use the nextPageToken value from a previous response. |
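To make the pagination and keyword-priority rules above concrete, here is a minimal sketch of how an agent might assemble arguments for this tool. The `build_review_search_args` helper is hypothetical (not part of the server); parameter names come from the table above, and the clamping of `pageSize` to 1–20 reflects the documented range.

```python
def build_review_search_args(tour_ids, keywords=None, page_size=10, page_token=None):
    """Assemble arguments for tourradar_vertex-review-search (hypothetical helper).

    Per the tool definition, 'keywords' takes priority over a free-text
    query, so prefer it when user intent maps to concrete topics.
    """
    args = {
        "tourIds": tour_ids,
        "pageSize": max(1, min(page_size, 20)),  # documented range is 1-20
    }
    if keywords:
        args["keywords"] = keywords  # joined with OR logic across phrases
    if page_token:
        args["pageToken"] = page_token  # nextPageToken from a previous response
    return args

first_page = build_review_search_args([123], keywords=["food quality", "meals"])
```

A follow-up call would pass the `nextPageToken` from the first response as `page_token` to fetch the next page.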
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden and successfully discloses the AI-powered semantic matching behavior and the hard requirement for tourIds to scope results. Minor gap: doesn't mention pagination behavior or result limits, though pageToken implies it.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences with zero waste: tool identity, key constraint, usage trigger, query mechanics, and fallback workflow. Information is perfectly front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 5-parameter schema with 100% coverage and no output schema, the description adequately covers the search workflow and constraints. Minor deduction for not clarifying what the return structure looks like (e.g., review objects vs snippets).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. The description adds practical usage context with concrete examples ('food', 'guide', 'accommodation') and explains the dependency relationship between query and tourIds that pure schema descriptions don't convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb+resource+mechanism ('Search tour reviews using AI-powered semantic search') and clearly distinguishes from sibling tour-search tools by focusing on 'reviews' rather than tour metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('when the user asks about reviews, feedback, or experiences') and provides clear workflow guidance with named alternatives ('When you don't have tour IDs, use vertex-tour-search or vertex-tour-title-search first').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_vertex-tour-search
[tourradar] Use this when the user describes what they want in natural language and you need AI-powered semantic search to understand their intent.
Before use this tool, please READ all possible filters. PLESE USE FILTERS, when can be used, to make search faster and much more precise. Please use start_city, end_city, cities, countries, start_country, end_country filters if possible. You can use multiple of them.
AI-powered semantic search for tours using natural language queries combined with optional filters. Uses Google Vertex AI to understand intent and find relevant tours based on descriptions, themes, or specific requests.
Use vertex-tour-search when:
The user describes what they want in natural language
You need semantic/AI-powered search to understand intent
Combining natural language with filters for refined results
Examples:
"Family-friendly safari with kids under 12"
"Romantic honeymoon trip with beach and mountains"
"Adventure tour with hiking and camping for beginners"
"Cultural immersion experience with local homestays"
"Wine tasting tour through European countryside"
Input
Required
textSearch: Natural language description of what the user is looking for
display_mode: How to display results — "listing" (default, carousel of tour cards) or "map" (interactive map view)
Optional Filters
Location Filters
| Filter | Type | Description |
|---|---|---|
| start_country | string[] | Country where tour BEGINS (ISO 3166-1 alpha-2 codes). OR logic. |
| end_country | string[] | Country where tour ENDS (ISO 3166-1 alpha-2 codes). OR logic. |
| start_city | number[] | City IDs where tour starts. Use b2b-cities-search to find city IDs. |
| end_city | number[] | City IDs where tour ends. Use b2b-cities-search to find city IDs. |
| countries | object | Countries visited DURING the itinerary. Supports AND/OR operator. |
| cities | object | Cities visited on the itinerary. Supports AND/OR operator. |
Range Filters
| Filter | Type | Description |
|---|---|---|
| duration | { min, max } | Tour length in days |
| max_group_size | { min, max } | Maximum group size range |
| min_group_size | { min, max } | Minimum group size range |
| min_age | { min, max } | Minimum age requirement range. E.g., { min: 1, max: 12 } finds tours that accept children under 12. |
| max_age | { min, max } | Maximum age limit range. E.g., { min: 18, max: 39 } finds tours limited to young adults. |
| price | { min, max, currency } | Price range (currency: "EUR") |
AND/OR Filters
These filters support both AND and OR operators:
| Filter | Values | Description |
|---|---|---|
| departures | YYYY-MM strings | Filter by departure months |
| countries | ISO 3166-1 alpha-2 codes | Countries visited during itinerary |
| cities | City IDs | Cities visited on itinerary |
Structure: { values: [...], operator: "AND" | "OR" }
OR (default): tour matches ANY of the specified values
AND: tour must match ALL of the specified values
Examples

Simple text search:

{ "textSearch": "family adventure with wildlife" }

With location filters:

{
  "textSearch": "hiking adventure",
  "start_country": ["DE", "AT"],
  "countries": { "values": ["IT", "CH"], "operator": "AND" }
}

With range filters:

{
  "textSearch": "luxury beach vacation",
  "duration": { "min": 7, "max": 14 },
  "price": { "min": 2000, "max": 5000, "currency": "EUR" },
  "max_group_size": { "min": 1, "max": 16 }
}

With age filters:

{
  "textSearch": "family safari with young children",
  "min_age": { "min": 1, "max": 6 },
  "duration": { "min": 7, "max": 14 }
}

With departure dates:

{
  "textSearch": "northern lights tour",
  "departures": { "values": ["2026-01", "2026-02", "2026-03"], "operator": "OR" }
}

Map display mode:

{
  "textSearch": "hiking tours in the Alps",
  "display_mode": "map",
  "countries": { "values": ["AT", "CH"], "operator": "OR" }
}

Response
Returns a list of tours matching the query, each containing:
Tour ID, name, and URL
Operator information
Brief description matching the query context
| Name | Required | Description | Default |
|---|---|---|---|
| price | No | Filter by tour price. E.g., { min: 500, max: 2000, currency: 'EUR' } finds tours priced between €500 and €2000. Min and max value cannot be this same. | |
| cities | No | Filter tours by cities visited on the itinerary. City IDs can be obtained from the b2b-cities-search tool | |
| max_age | No | Filter by tour's maximum age limit. E.g., { min: 18, max: 39 } finds tours limited to young adults. Use to find age-restricted tours like youth or senior-specific trips. Min and max value cannot be this same. | |
| min_age | No | Filter by tour's minimum age requirement. E.g., { min: 1, max: 12 } finds tours that accept children under 12. Use to find family-friendly tours or tours with low age requirements. Min and max value cannot be this same. | |
| tourIds | No | Filter by specific tour IDs. Use this to narrow search results to known tours. | |
| duration | No | Filter tours by duration in days. Example: set min: 5 , max: 8 to find duration between those days. Include this filter when is needed | |
| end_city | No | City IDs where the tour ends (e.g., [1234, 5678]). Use b2b-cities-search to find city IDs. If passing multiple values, will find tours ending in any of the given cities | |
| countries | No | Filter by countries visited DURING the tour itinerary. Use 'start_country'/'end_country' for departure/destination countries. | |
| departures | No | Filter tours by available departure months | |
| start_city | No | City IDs where the tour starts (e.g., [1234, 5678]). Use b2b-cities-search to find city IDs. If passing multiple values, will find tours starting in any of the given cities | |
| textSearch | Yes | ||
| end_country | No | Filter by the country where the tour ENDS (final destination). ISO 3166-1 alpha-2 codes (e.g., ['IT', 'FR']). Multiple values = OR logic. Use 'countries' filter for countries visited during the itinerary. | |
| display_mode | No | How to display the search results. 'listing' shows a carousel of tour cards. 'map' shows tours on an interactive map. | listing |
| operator_ids | No | Filter tours by specific operator ids. You can add multiple operator ids, then OR operator will be used. For lookup for partner by name, please use algolia-operator-search tool. | |
| start_country | No | Filter by the country where the tour BEGINS (departure point). ISO 3166-1 alpha-2 codes (e.g., ['DE', 'AT']). Multiple values = OR logic. Use 'countries' filter for countries visited during the itinerary. | |
| max_group_size | No | Filter by tour's maximum group size. E.g., { min: 1, max: 20 } finds small group tours with max 20 participants. Min and max value cannot be this same. | |
| min_group_size | No | Filter by tour's minimum group size. E.g., { min: 1, max: 1 } finds tours that accept solo travelers. Min and max value cannot be this same. |
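The recurring { values, operator } shape used by the AND/OR filters can be sketched as a small payload builder. This is a hypothetical helper, not part of the server's API; parameter names and the filter structure are taken from the tool definition above.

```python
def and_or(values, operator="OR"):
    """Wrap values in the { values, operator } shape the AND/OR filters expect."""
    if operator not in ("AND", "OR"):
        raise ValueError("operator must be 'AND' or 'OR'")
    return {"values": values, "operator": operator}

# Example vertex-tour-search payload: semantic text plus structured filters.
payload = {
    "textSearch": "hiking adventure",
    "start_country": ["DE", "AT"],             # plain arrays already use OR logic
    "countries": and_or(["IT", "CH"], "AND"),  # itinerary must visit both
    "duration": {"min": 5, "max": 8},          # range filters are { min, max }
    "display_mode": "listing",
}
```

Note the asymmetry: start_country and end_country are plain arrays with implicit OR, while countries, cities, and departures take the explicit { values, operator } wrapper.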
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses Google Vertex AI implementation and performance implications ('make search faster'), plus response structure (tour ID, name, URL, operator). Missing explicit read-only declaration, rate limits, pagination behavior, or error handling details expected for a complex search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Appropriately structured with markdown tables and code blocks for a 17-parameter tool. Front-loaded with usage context. Minor redundancy between opening paragraph and 'Use vertex-tour-search when:' section, and typos ('PLESE', 'Before use') slightly detract from polish.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage compensating for lack of output schema by manually documenting response fields (tour ID, operator info, description). Extensive examples cover simple text search, location filters, range filters, and display modes, fully contextualizing the complex filter combinations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is high (94%), establishing baseline 3. Description adds significant value through concrete JSON examples for complex nested filters (duration, price, age ranges), tabular organization of 17 parameters, and critical dependency notes (e.g., using 'b2b-cities-search' to obtain city IDs for filters).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Explicitly defines the tool as 'AI-powered semantic search for tours using natural language queries' and distinguishes it from siblings by specifying use cases like 'when the user describes what they want in natural language' (contrasting with vertex-tour-title-search for exact titles and b2b-tour-details for specific IDs).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear 'when to use' criteria (natural language queries, semantic intent understanding) and explicitly mentions complementary tools like 'b2b-cities-search' for resolving city IDs. Lacks explicit 'when not to use' guidance contrasting with sibling tools like vertex-tour-title-search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_vertex-tour-title-search
[tourradar] Search for tours by title using AI-powered semantic search. Returns a list of matching tour IDs and titles. Use this when you need to look up a tour by name. When you know tour id, use b2b-tour-details tool to display details about specific tour
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Tour title or partial title to search for |
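The search-then-details workflow the description prescribes can be sketched as a routing step. The `next_tour_lookup_step` helper is hypothetical; only the tool names come from the definitions above.

```python
def next_tour_lookup_step(tour_id=None, title_query=None):
    """Route to the right tool: details when the ID is already known,
    otherwise a semantic title search to find it first."""
    if tour_id is not None:
        return ("tourradar_b2b-tour-details", {"tourId": tour_id})
    return ("tourradar_vertex-tour-title-search", {"query": title_query})
```

An agent would call the title search, pick an ID from the returned matches, then route the follow-up call through the details tool.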
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses AI-powered semantic nature (distinguishes from keyword search) and return format (list of IDs/titles). Implies read-only via 'Search' but doesn't explicitly state safety/destructive profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: purpose prefix, return value, usage condition, and alternative tool. Front-loaded with specific capability (AI semantic search). Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter search tool, description is complete: explains return values (compensating for missing output schema), establishes workflow with sibling tool (b2b-tour-details), and covers the AI behavioral aspect. No gaps given low complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds value by implying 'AI-powered semantic search' allows natural language queries beyond exact title matches, adding semantic context not present in schema's 'Tour title or partial title' description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb+resource: 'Search for tours by title using AI-powered semantic search.' Distinguishes from sibling tourradar_vertex-tour-search by specifying 'by title' and 'AI-powered semantic' method. Also clarifies return values (tour IDs and titles).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit when-to-use: 'Use this when you need to look up a tour by name.' Explicit alternative named: 'When you know tour id, use b2b-tour-details tool' - creating clear workflow from search to details.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_web-tour-booking
[tourradar] Use this when the user explicitly wants to book a tour and has provided their contact details.
Creates a booking request for a specific tour. Requires real contact details (email, names; phone optional). Always confirm the departure date by checking departures first. Never fabricate or assume user data - always ask for missing information.
| Name | Required | Description | Default |
|---|---|---|---|
| Yes | Customer email address. If email is not given by the customer, please ask him before executing this endpoint. Never generate email by yourself, always ask customer for give the email before calling this tool. | ||
| tourId | Yes | Tour ID | |
| lastName | Yes | ||
| firstName | Yes | ||
| paxAdults | Yes | Number of the adults. Always ask customer for this information, is important to have it | |
| paxChildren | Yes | Number of the children | |
| phoneNumber | No | Customer phone number in E-164 format. Is optional, but recommended to provide it. Never put fake phone number, always ask customer for give the phone number before calling this tool. If customer do not gives phone number, just skip this field | |
| departureDate | Yes | Departure date, from customer selection | |
| agentInformation | Yes | Agent information about the booking. Describe here what customer is looked for, preferences etc, what you can recognize. Will be helpful for our Customer Support team to understand the customer needs. | |
| extraInformation | Yes | Customer extra information from the customer or agent. Fill here if customer has provided any additional information about the booking. This is optional and can be empty. |
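Given the tool's "never fabricate user data" constraint, an agent could run pre-flight checks before invoking it. This is a hypothetical sketch: the field list mirrors the required parameters above (extraInformation is nominally required but may be empty), and the E.164 pattern is the standard international phone format the phoneNumber description references.

```python
import re

E164 = re.compile(r"^\+[1-9]\d{1,14}$")  # ITU-T E.164 international format

def validate_booking(args):
    """Pre-flight checks before calling tourradar_web-tour-booking.
    Returns a list of problems; an empty list means safe to send."""
    problems = []
    required = ("email", "tourId", "firstName", "lastName",
                "paxAdults", "paxChildren", "departureDate", "agentInformation")
    for field in required:
        # treat 0 (e.g. paxChildren) as a legitimate value, not missing
        if not args.get(field) and args.get(field) != 0:
            problems.append(f"missing required field: {field}")
    phone = args.get("phoneNumber")  # optional; skip rather than fabricate
    if phone and not E164.match(phone):
        problems.append("phoneNumber must be E.164, e.g. +41791234567")
    return problems
```

Any reported problem maps to a question back to the user, never to a guessed value.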
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses data integrity requirements well ('Requires real contact details', 'Never fabricate or assume user data'). States it creates a booking request. However, fails to disclose operational side effects: whether this triggers payment processing, confirmation emails, availability holds, or what error states look like. For a mutation tool with no annotations, this is a moderate gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Exceptionally well-structured and front-loaded. Opens with activation condition, follows with core action, then requirements, prerequisite workflow, and data constraints. Four sentences with zero waste—every clause provides distinct value (when to use, what it does, required data, workflow step, validation rule).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 9 required parameters and no output schema/annotations, the description adequately covers input prerequisites (departure date verification, contact collection) and data constraints. However, it completely omits what the tool returns or what happens after invocation (confirmation number? email sent? booking status?). For a complex booking operation, this is a significant gap in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 80% (high), establishing baseline 3. The description adds workflow context beyond schema: 'phone optional' clarifies the optional parameter among required contact fields, and 'Never fabricate... always ask for missing information' provides validation constraints not captured in JSON schema. It mentions 'names' compensating slightly for firstName/lastName lacking schema descriptions, but doesn't add format details or semantics for agentInformation/extraInformation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Creates a booking request for a specific tour' with specific verb and resource. The prefix '[tourradar]' and the explicit distinction from search operations (via 'when the user explicitly wants to book') effectively differentiates it from sibling search tools like tourradar_vertex-tour-search. However, it could clarify whether this completes a purchase or merely submits an inquiry.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit activation criteria: 'when the user explicitly wants to book a tour and has provided their contact details.' Includes critical prerequisite workflow: 'Always confirm the departure date by checking departures first' (implicitly referencing tourradar_b2b-tour-departures). Lacks explicit naming of alternative tools for searching vs booking, though the intent is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tourradar_web-tour-send-brochure
[tourradar] Use this when the user requests a tour brochure or PDF to be sent to their email.
Sends a tour brochure (PDF file) to the specified email address. Before using this, ensure the user has provided their email address. If not provided, ask the user for it. Never generate or assume email addresses.
| Name | Required | Description | Default |
|---|---|---|---|
| Yes | User email address. If email is not given by the user, please ask him before executing this endpoint. Never generate email by yourself, always ask user for give the email before calling this tool. | ||
| tourId | Yes | Tour ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool sends an email with a PDF and emphasizes the critical constraint that email addresses must be user-provided (never assumed). However, it lacks disclosure of idempotency, rate limits, error handling for invalid emails, or whether this action registers the user for marketing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with the trigger condition front-loaded ('Use this when...'). The content is efficient with minimal waste, though slightly redundant between the first sentence ('requests a tour brochure...be sent') and second ('Sends a tour brochure'). Overall, every sentence earns its place by providing distinct guidance on purpose, action, or constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's limited complexity (2 primitive parameters, no output schema), the description is appropriately complete. It covers the essential safety-critical constraint (email verification) that would otherwise be missing. A perfect score would require additional behavioral details like error handling or rate limiting, but the coverage is sufficient for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds valuable usage context beyond the schema by reinforcing the conversational workflow: 'ensure the user has provided their email address' and 'ask him before executing'. This contextualizes the raw parameter requirements into agent behavior guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Sends a tour brochure (PDF file) to the specified email address' using a specific verb and resource. It clearly distinguishes from sibling tools like tourradar_web-tour-booking or tourradar_b2b-tour-details by focusing on brochure/PDF delivery via email rather than booking or retrieving tour metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use ('when the user requests a tour brochure or PDF'), prerequisites ('ensure the user has provided their email address'), and prohibitions ('Never generate or assume email addresses'). It also includes actionable guidance on handling missing data ('If not provided, ask the user for it').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
travel_agent
Ask a travel question in natural language. Routes to the right tools automatically and returns a combined answer. Example: 'Cheapest flights from Zurich to Rome next week, and do I need a visa?'
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses the core orchestration behavior ('Routes to the right tools automatically') and aggregation ('returns a combined answer'). It does not mention error handling if no tools match or latency implications of the routing layer.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences followed by a high-value example. The first sentence establishes the natural language interface, the second explains the routing mechanism, and the example immediately clarifies the multi-tool capability without redundant exposition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and the tool's role as an orchestrator across dozens of siblings, the description adequately establishes it as the primary natural language entry point. It could be improved by enumerating the domains covered (flights, hotels, visas, etc.), but the example implies broad coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage (only title 'Query'). The description compensates effectively by specifying the parameter accepts 'natural language' and provides a concrete example showing the expected syntax and complexity level, though it omits constraints like length limits.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool accepts natural language travel questions and automatically routes to appropriate backend tools, distinguishing it from the 40+ specific API siblings (e.g., kiwi_search-flight, visa_check) which require structured inputs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The example query ('Cheapest flights... and do I need a visa?') effectively demonstrates the intended use case for complex, multi-domain questions that would otherwise require multiple tool calls. However, it lacks explicit guidance on when to use specific single-purpose tools instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trivago_trivago-accommodation-radius-search
[trivago] Search for accommodations and hotels using coordinates and radius on Trivago.
Use this tool when you want to find accommodations near a specific location or street.
The coordinates refer to the search target (e.g. a landmark, neighborhood, or address the user is asking about), NOT the user's personal location.
Derive coordinates from the place or landmark mentioned in the user's query.
Additional Information:
{
"knownInformation": {
"currentYear": "2026",
"today": "2026-04-17"
}
}
| Name | Required | Description | Default |
|---|---|---|---|
| rooms | No | The number of rooms, Number of rooms must be lower than or equal to the number of adults | |
| adults | No | The number of adults. | |
| radius | Yes | The radius in meters that you want to search for. | |
| arrival | Yes | The arrival date in YYYY-MM-DD format, today is 2026-04-17. The arrival date MUST be before the departure date. If the arrival date is not in the future, notify the user that the arrival date is not in the future. | |
| filters | No | The filters that you want to apply to the search. Set true or false for each filter. Multiple filters can be selected. | |
| children | No | The number of children. | |
| latitude | Yes | The latitude of the search target location (e.g. a landmark or address), not the user's personal location. | |
| departure | Yes | The departure date in YYYY-MM-DD format, today is 2026-04-17. The departure date MUST be after the arrival date. If the departure date is not after the arrival date, notify the user that the departure date is not after the arrival date. | |
| longitude | Yes | The longitude of the search target location (e.g. a landmark or address), not the user's personal location. | |
| hotel_rating | No | The hotel rating that you want to filter by. Set true or false for each rating. Multiple ratings can be selected. | |
| children_ages | No | Dashed separated list of children ages, e.g. 10-12-14 | |
| review_rating | No | The guest review rating that you want to filter by. Set true or false for each rating. Multiple ratings can be selected. |
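The arrival/departure rules stated in the schema above can be sketched as a small validation helper. This is only an illustration of the documented constraints; `validate_stay` is not part of the tool, and the hard-coded "today" value simply mirrors the date embedded in the description.

```python
from datetime import date

TODAY = date(2026, 4, 17)  # the "today" value embedded in the tool description

def validate_stay(arrival: str, departure: str, today: date = TODAY) -> list[str]:
    """Check the date rules the schema states; returns a list of problems."""
    problems = []
    a = date.fromisoformat(arrival)    # YYYY-MM-DD, as the schema requires
    d = date.fromisoformat(departure)
    if a <= today:
        problems.append("arrival date is not in the future")
    if d <= a:
        problems.append("departure date is not after the arrival date")
    return problems
```

An agent would run such a check before invoking the tool, since the schema instructs it to notify the user rather than call the API with invalid dates.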
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds important coordinate interpretation context (target vs. user location) but fails to disclose safety profile (read-only vs. destructive), return format, pagination, or rate limiting that would help an agent understand operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear front-loading of purpose followed by usage guidance. The embedded JSON block with date information is slightly bulky but functionally necessary for date parameter context. No redundant or wasted sentences beyond the [trivago] namespace tag.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 12 parameters (including nested filter objects), no output schema, and zero annotations, the description provides adequate context by explaining the coordinate system and providing temporal reference data. It appropriately relies on the comprehensive schema for parameter details while clarifying the geospatial search semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage, establishing a baseline of 3. The description adds value by including the 'Additional Information' block with the current date (2026-04-17), which provides essential temporal context for the required arrival/departure date parameters. It also reinforces coordinate semantics, though this overlaps somewhat with the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (search for accommodations/hotels), method (using coordinates and radius), and platform (Trivago). It effectively distinguishes itself from sibling tool 'trivago_trivago-accommodation-search' by emphasizing the geospatial radius-based approach versus general search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('when you want to find accommodations near a specific location or street') and provides critical semantic guidance that coordinates refer to the search target/landmark, NOT the user's personal location. Lacks explicit mention of the sibling search tool as an alternative for non-coordinate searches.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trivago_trivago-accommodation-search (grade A)
[trivago] Search for accommodations and hotels on Trivago.
Use this tool when you want to find accommodations in broader areas like cities, countries, etc.
If you are interested in a specific location, use the trivago-accommodation-radius-search tool.
Additional Information:
{
"knownInformation": {
"currentYear": "2026",
"today": "2026-04-17"
}
}
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The ID of the location that you want to search for. | |
| ns | Yes | The NS of the location that you want to search for. | |
| rooms | No | The number of rooms, Number of rooms must be lower than or equal to the number of adults | |
| adults | No | The number of adults. | |
| arrival | Yes | The arrival date in YYYY-MM-DD format, today is 2026-04-17. The arrival date MUST be before the departure date. If the arrival date is not in the future, notify the user that the arrival date is not in the future. | |
| filters | No | The filters that you want to apply to the search. Set true or false for each filter. Multiple filters can be selected. | |
| children | No | The number of children. | |
| departure | Yes | The departure date in YYYY-MM-DD format, today is 2026-04-17. The departure date MUST be after the arrival date. If the departure date is not after the arrival date, notify the user that the departure date is not after the arrival date. | |
| hotel_rating | No | The hotel rating that you want to filter by. Set true or false for each rating. Multiple ratings can be selected. | |
| children_ages | No | Dashed separated list of children ages, e.g. 10-12-14 | |
| review_rating | No | The guest review rating that you want to filter by. Set true or false for each rating. Multiple ratings can be selected. |
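The `children_ages` parameter uses a dash-separated encoding ('10-12-14') rather than a JSON array, which is easy to get wrong. A minimal parsing/formatting sketch of that convention (the helper names are illustrative, not part of the tool):

```python
def format_children_ages(ages: list[int]) -> str:
    """Encode ages the way the schema expects, e.g. [10, 12, 14] -> '10-12-14'."""
    return "-".join(str(age) for age in ages)

def parse_children_ages(value: str) -> list[int]:
    """Decode the dash-separated children_ages string back into integers."""
    if not value:
        return []
    return [int(age) for age in value.split("-")]
```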
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It adds temporal context via the 'knownInformation' block (current date 2026-04-17) relevant to the date parameters, but lacks details on rate limits, pagination, caching behavior, or return value structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose and usage guidelines. The JSON metadata block, while slightly awkwardly formatted, provides essential temporal context for date validation. No sentences are wasted, though the '[trivago]' prefix is redundant with the tool name.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex schema (11 parameters, nested filter objects) and lack of output schema, the description adequately covers selection criteria but fails to describe what the tool returns (e.g., list of hotels, pricing, availability) or any result limits, leaving a significant gap for an agent invoking the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds valuable semantic context by specifying the tool is for 'broader areas like cities, countries,' which helps interpret the 'id' and 'ns' location parameters beyond the schema's basic definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Search[es] for accommodations and hotels on Trivago' using a specific verb and resource. It distinguishes itself from the sibling 'trivago-accommodation-radius-search' by specifying it is for 'broader areas like cities, countries, etc.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('broader areas like cities, countries') and names the exact alternative tool ('trivago-accommodation-radius-search') for specific locations, providing clear decision criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trivago_trivago-search-suggestions (grade C)
[trivago]
Suggestions are used to provide a list of possible search terms based on the user's query.
Query can be city, country.
You must pick output that are close to the user query.
Example:
Input:
Query: "Berlin"
Query: "Germany"
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | The query to search for suggestions. Query must be city, country. if you know geolocation, you can use radius search tool to find accommodations near the location. if query or the location is ambiguous, clarify the query or location by asking the user for more information. When user ask for a query, you must follow these steps. If each step is not successful, try the next step: 1. first try to use query as it is 2. MUST find the city of the query by using internet search, use MUST the city to search for suggestions 3. MUST find the country of the query by using internet search, use MUST the country to search for suggestions |
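The three-step fallback buried in the `query` schema description (try the query as-is, then its city, then its country) can be sketched as a resolution loop. This is only a reading of the schema's instructions; `fetch_suggestions`, `find_city`, and `find_country` are hypothetical caller-supplied hooks, not functions the server exposes.

```python
def resolve_query(query, fetch_suggestions, find_city, find_country):
    """Follow the schema's fallback steps until one yields suggestions."""
    # Step 1: try the query as-is.
    results = fetch_suggestions(query)
    if results:
        return results
    # Step 2: resolve the query to a city (the schema says via internet
    # search) and retry with the city name.
    city = find_city(query)
    if city:
        results = fetch_suggestions(city)
        if results:
            return results
    # Step 3: fall back to the country of the query.
    country = find_country(query)
    if country:
        return fetch_suggestions(country)
    return []
```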
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal detail. It vaguely states 'You must pick output that are close to the user query' without explaining the matching algorithm, error conditions, or return format (critical given no output schema exists).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively short but contains structural noise like the '[trivago]' prefix and passive phrasing ('are used to'). The sentence 'You must pick output that are close to the user query' is grammatically awkward and vague. However, it avoids excessive length and places examples at the end.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description should explain what types of suggestions are returned (locations, hotels, landmarks?) and how they relate to the accommodation search workflow. It provides neither, leaving significant gaps in contextual understanding for an agent trying to complete a multi-step booking task.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds concrete examples ('Berlin', 'Germany') that illustrate expected input values, but does not add semantic meaning beyond what the detailed schema description already provides regarding the query parameter's purpose or format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool provides 'a list of possible search terms based on the user's query' and specifies 'city, country' as valid query types. However, it fails to distinguish this suggestion/autocomplete tool from sibling tools like 'trivago_trivago-accommodation-search', leaving ambiguity about when to use suggestions versus direct accommodation searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives. While the schema contains logic for handling ambiguous queries (using internet search), the description itself provides no workflow guidance, prerequisites, or comparisons to the other trivago accommodation search tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
visa_check (grade C)
Check visa requirement for a passport country visiting a destination.
| Name | Required | Description | Default |
|---|---|---|---|
| passport | Yes | ||
| destination | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
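Since neither parameter is documented, a caller has to guess the argument format. A hedged sketch of the MCP `tools/call` JSON-RPC payload for this tool follows; `tools/call` is the standard MCP method, but the use of full country names in `arguments` is only an assumption, as the schema does not say whether it expects names or ISO codes.

```python
import json

# Hypothetical tools/call payload; the argument format (full country
# names vs. ISO 3166 codes) is undocumented and assumed here.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "visa_check",
        "arguments": {"passport": "Germany", "destination": "Japan"},
    },
}
print(json.dumps(payload))
```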
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Check' implies a read-only operation, the description fails to specify expected input formats, data freshness, rate limits, or whether the tool validates entry requirements for specific travel dates.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is front-loaded with the action verb and contains no redundant or wasteful language. Every word serves a purpose. However, given the complete lack of schema documentation, the description is arguably too brief to be appropriately sized for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While an output schema exists (reducing the need to describe return values), the description is incomplete due to critical gaps in input specification. For a visa checking tool, the distinction between 'US', 'USA', and 'United States' is vital, and the absence of format guidance alongside 0% schema coverage leaves the tool under-specified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must compensate but only minimally does so. It identifies that 'passport' refers to a country and 'destination' is a location, but critically omits expected formats (ISO 3166-1 alpha-2 codes, IATA codes, or full country names), which is essential for correct invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Check') and identifies the resources ('visa requirement', 'passport country', 'destination'), making the core function clear. However, it fails to distinguish from the sibling tool 'visa_summary', leaving ambiguity about which visa tool to use when.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus 'visa_summary' or other alternatives. No mention of prerequisites like required country code formats (ISO vs. full names) or when visa checks are unnecessary (e.g., domestic travel).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
visa_summary (grade B)
Overview of visa-free access for a passport country — counts by category.
| Name | Required | Description | Default |
|---|---|---|---|
| passport | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds value by indicating the output structure ('counts by category'), suggesting categorized aggregate data. However, it omits details about data freshness, caching, or whether the counts include visa-on-arrival vs visa-free distinctions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no redundant words. It front-loads the core purpose (overview of visa-free access) and qualifies it (counts by category). However, given the lack of schema descriptions and annotations, this brevity may be insufficient for correct invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with an output schema, the description adequately covers the core function. However, it falls short of complete guidance due to ambiguous parameter format requirements and failure to distinguish usage from 'visa_check,' which could lead to incorrect tool selection by an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage (parameter title only). The description adds semantic meaning by referring to 'passport country,' clarifying that the input represents the issuing country. However, it fails to specify the expected format (e.g., ISO 3166-1 alpha-3 code vs. full country name), which is critical given the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides an 'Overview of visa-free access' with 'counts by category,' specifying the resource and aggregation type. However, it lacks explicit differentiation from the sibling 'visa_check' tool, which likely handles specific visa requirement queries rather than aggregate counts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the sibling 'visa_check' tool. It does not specify use cases (e.g., 'use for aggregate statistics' vs 'use for specific destination checks') or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
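Before publishing the file, its structure can be sanity-checked locally. This is only a minimal sketch of the shape shown above, not Glama's actual validation logic:

```python
import json

raw = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}"""

doc = json.loads(raw)
# Minimal structural checks: a non-empty maintainers list, each entry
# carrying an email field.
assert isinstance(doc.get("maintainers"), list) and doc["maintainers"]
assert all("email" in m for m in doc["maintainers"])
```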
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.