Glama
Ownership verified

Server Details

Aggregated travel MCP — flights, tours, activities, price checks, visas, and more.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: navifare/moltravel-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade B)

Average 3.6/5 across 56 of 56 tools scored. Lowest: 2.2/5.

Server Coherence (Grade C)
Disambiguation: 3/5

Tools are grouped by providers (e.g., skiplagged, tourradar) with clear domains, reducing confusion within groups. However, there is overlap across providers (e.g., multiple flight search tools like kiwi_search-flight and skiplagged_sk_flights_search) and some tools like travel_agent and data_status have broad or ambiguous purposes that could lead to misselection.

Naming Consistency: 2/5

Naming is highly inconsistent, mixing snake_case (airlines_lookup), kebab-case (tourradar_b2b-cities-search), and provider prefixes with varying delimiters. There is no uniform verb_noun pattern; some tools use verbs like 'lookup' or 'search', while others are descriptive phrases like 'format_flight_pricecheck_request'. This lack of convention makes the set harder to navigate.

Tool Count: 2/5

With 56 tools, the count is excessive for a single server, indicating poor scoping. While travel is a broad domain, the server aggregates multiple third-party APIs (e.g., Kiwi, Skiplagged, Tourradar) without a cohesive focus, leading to bloat. A more modular approach with separate servers would be more appropriate.

Completeness: 4/5

The tool set covers a wide range of travel-related functions, including flights, accommodations, tours, visas, and ferries, with good depth in areas like tour searches and flight comparisons. Minor gaps exist, such as limited car rental or travel insurance tools, but agents can work around these given the comprehensive coverage of core travel planning needs.

Available Tools

56 tools
airlines_lookup (Grade B)

Look up an airline by IATA (2-char) or ICAO (3-char) code.

Parameters (JSON Schema)
  code (required)

Output Schema (JSON Schema)
  result (required)
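Because the schema carries no description for `code`, the 2-versus-3 character rule in the prose is the only validation hint a client gets. A minimal Python sketch of building the `tools/call` arguments, where the pre-flight length check is inferred from the description rather than documented server behavior:

```python
import json

def airlines_lookup_request(code: str, request_id: int = 1) -> dict:
    """Build a JSON-RPC tools/call payload for the airlines_lookup tool."""
    # Inferred from the description: 2 chars = IATA, 3 chars = ICAO.
    if len(code) not in (2, 3) or not code.isalnum():
        raise ValueError("code must be a 2-char IATA or 3-char ICAO code")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "airlines_lookup", "arguments": {"code": code.upper()}},
    }

payload = airlines_lookup_request("ba")
print(json.dumps(payload["params"]["arguments"]))  # {"code": "BA"}
```

Rejecting malformed codes client-side avoids a round trip whose error behavior the description never specifies.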
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but only specifies input format expectations (2-char vs 3-char). It fails to mention error handling (e.g., invalid codes), data freshness, or what constitutes a successful match.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly concise at 9 words. Every element earns its place: the verb 'Look up', the resource 'airline', and the critical format specifications '(2-char)' and '(3-char)'. No waste, front-loaded action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter lookup tool with an output schema (which handles return value documentation). However, with zero schema coverage and no annotations, the description should ideally address validation behavior or error states for invalid codes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema description coverage. The description crucially clarifies that the 'code' parameter accepts either IATA (2-character) or ICAO (3-character) formats, providing essential semantic meaning entirely absent from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Look up') and resource ('airline') with specific identifier types (IATA/ICAO codes). It implicitly distinguishes from sibling 'airlines_search' by specifying code-based lookup versus likely text-based search, though it doesn't explicitly name the alternative.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (when you have an IATA or ICAO code) but provides no explicit when-to-use guidance, no exclusions, and doesn't mention the sibling 'airlines_search' as an alternative for name-based searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

airports_lookup (Grade A)

Look up an airport by IATA (3-char) or ICAO (4-char) code. Returns full details including runways.

Parameters (JSON Schema)
  code (required)

Output Schema (JSON Schema)
  result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses return value characteristics ('full details including runways'), indicating rich data retrieval. However, it omits mention of error behavior for invalid codes, rate limits, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence covers purpose and input semantics; second covers output characteristics. Information density is optimal with no redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a single-parameter lookup tool where an output schema exists. The description adds value by highlighting runway data in the returns, suggesting comprehensive airport records. Could be improved by noting error states for invalid codes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage for the 'code' parameter. The description effectively compensates by specifying expected formats (IATA 3-char, ICAO 4-char) and validation types, though it doesn't explicitly map these constraints to the parameter name.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool looks up airports by specific code types (IATA/ICAO). Distinguishes from airline-related siblings implicitly by specifying 'airport' and from 'airports_search' via the code-based lookup mechanism, though it doesn't explicitly name sibling alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides excellent input format guidance (3-char IATA vs 4-char ICAO) within the description. However, lacks explicit guidance on when to use this versus 'airports_search' or 'airports_near' for different lookup scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

airports_near (Grade C)

Find airports near a geographic point. Returns results sorted by distance.

Parameters (JSON Schema)
  limit (optional)
  latitude (required)
  longitude (required)
  radius_km (optional)
  include_small (optional)

Output Schema (JSON Schema)
  result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully notes that results are 'sorted by distance,' which is valuable behavioral context, but fails to mention other critical traits like whether small airports are excluded by default, rate limits, or the read-only nature of the operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two efficient sentences and zero redundancy. However, given the lack of schema descriptions and annotations, this brevity results in underspecification rather than clarity, preventing a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with five parameters and zero schema descriptions, the description is insufficiently complete. It fails to explain the boolean 'include_small' parameter, default values for radius_km and limit, or the output format (despite an output schema existing that could have been referenced descriptively).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Given 0% schema description coverage, the description inadequately compensates for undocumented parameters. While 'geographic point' implies the purpose of latitude and longitude, and 'distance' hints at radius_km, it completely omits semantics for 'include_small' (what defines 'small'?) and 'limit' (max results?), leaving three of five parameters effectively undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Find[s] airports near a geographic point' with a specific verb and resource, and the phrase 'near a geographic point' effectively distinguishes it from sibling tools like airports_lookup and airports_search. However, it misses a perfect score by not explicitly contrasting with these siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives like airports_search or airports_lookup. While 'geographic point' implies coordinate-based usage, there is no 'when to use/when not to use' instruction or mention of prerequisites like requiring latitude/longitude values.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

data_status (Grade A)

Check which static datasets (airports, airlines, visas) are loaded and their row counts.

Parameters (JSON Schema)
  No parameters

Output Schema (JSON Schema)
  result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses behavioral traits by specifying the tool returns 'row counts' and checks 'loaded' status. However, it lacks safety profile disclosure (read-only nature, performance characteristics, or auth requirements) that would be necessary given the absence of annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with action verb 'Check'. Zero redundant words. Specific parenthetical enumeration of datasets adds precision without verbosity. Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter diagnostic tool with an output schema present, the description is complete. It specifies exactly which datasets are inspected (aligning with sibling tool domains) and what metrics are returned (row counts), making it sufficient for agent selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, which per the rubric earns the baseline score of 4. The description appropriately makes no mention of parameters, since none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent clarity. Uses specific verb 'Check' with explicit resource 'static datasets' and enumerates the exact datasets monitored (airports, airlines, visas). Clearly distinguishes this meta/diagnostic tool from operational siblings like airlines_lookup or visa_check.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance provided. However, the diagnostic nature (checking loaded status and row counts) implies usage for data health verification before querying, but lacks explicit guidance like 'use before querying datasets to verify availability'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fcdo_list_countries (Grade B)

[fcdo] List all countries with UK FCDO travel advice.

Parameters (JSON Schema)
  No parameters

Output Schema (JSON Schema)
  result (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'List' implies a read-only operation, the description omits safety details (e.g., idempotency), rate limits, or data freshness. The scope 'all countries' is stated, but pagination behavior or response size expectations are not addressed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single, efficient sentence with no redundant information. The '[fcdo]' prefix serves as a namespace identifier without cluttering the semantic content, and every word directly contributes to understanding the tool's scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists (covering return values) and the input schema is empty, the description adequately covers the tool's purpose for its low complexity. However, lacking annotations, it could have benefited from a brief note confirming this is a safe, non-destructive read operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, establishing a baseline score of 4. The description appropriately requires no parameter explanation, as the tool is a simple, unfiltered list operation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('countries with UK FCDO travel advice'), making the tool's function immediately apparent. However, it does not explicitly differentiate from its sibling 'fcdo_travel_advice' (which presumably retrieves detailed advice for specific countries), leaving the relationship between the tools implied rather than stated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention the sibling 'fcdo_travel_advice' as a follow-up for retrieving specific country details. There are no stated prerequisites or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fcdo_travel_advice (Grade B)

[fcdo] Get UK FCDO travel advice for a specific country. Includes safety, entry requirements, health, and warnings.

Parameters (JSON Schema)
  country (required)

Output Schema (JSON Schema)
  result (required)
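Since neither the schema nor the description pins down the expected `country` format, a cautious client might try a couple of spellings rather than assume one. A sketch of that fallback ordering; both candidate formats are guesses, not documented behavior:

```python
def country_candidates(name: str) -> list[str]:
    """Candidate values to try, in order, for the undocumented country parameter."""
    cleaned = name.strip()
    # Two common conventions: the name as given, then a lowercase
    # hyphenated slug. Neither is confirmed by the tool's documentation.
    slug = "-".join(cleaned.lower().split())
    # Deduplicate while preserving order (the two collide for one-word names).
    return list(dict.fromkeys([cleaned, slug]))

print(country_candidates("United Arab Emirates"))
# ['United Arab Emirates', 'united-arab-emirates']
```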
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It successfully discloses content categories covered (safety, entry requirements, health, warnings), giving agents insight into return data structure. However, it lacks operational details like read-only nature, update frequency of FCDO data, or whether the advice is official/legal guidance.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero waste. Front-loaded with the core action '[fcdo] Get UK FCDO travel advice', followed immediately by value-add content categories. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a single-parameter lookup tool with an output schema (which handles return value documentation). Covers the domain (FCDO), scope (country-specific), and content types. Only gap is parameter format specification, which is noted in parameter semantics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% with no parameter descriptions. The description mentions 'for a specific country' but fails to specify expected format (ISO 3166-1 alpha-2 code, full country name, etc.) or case sensitivity. With zero schema coverage, the description must compensate for this gap but doesn't.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb-resource pair ('Get UK FCDO travel advice') and distinguishes from sibling fcdo_list_countries by targeting a specific country. However, it doesn't explicitly clarify this advice is specifically intended for UK citizens/travelers, which would strengthen differentiation from general country info tools like restcountries_country_info.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use guidance or alternative suggestions. The description mentions 'entry requirements' which overlaps functionally with visa_check and visa_summary siblings, but provides no guidance on choosing between them. No mention of prerequisite steps (e.g., using fcdo_list_countries to validate country availability).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ferryhopper_get_direct_connections_for_ports (Grade C)

[ferryhopper] Get a list of all the direct connections between ports

Parameters (JSON Schema)
  portLocation (required): Location name or search term used to find matching ports (not limited to exact port codes).
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure but offers almost none. It does not clarify what constitutes a 'direct connection' (routes? lines? schedules?), whether the operation is read-only, or what the return structure contains.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single efficient sentence. The '[ferryhopper]' prefix adds noise but minimal overhead. It is appropriately front-loaded with the action verb.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter lookup tool with complete schema coverage and no output schema, the description is minimally adequate. However, it lacks domain context explaining that 'connections' represent reachable destination ports from the queried location.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for its single parameter ('portLocation'), establishing a baseline of 3. The description adds no parameter-specific context, but given the schema completeness, no compensation is required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'direct connections between ports' with a specific verb ('Get') and resource. However, it fails to distinguish from sibling tool 'ferryhopper_search_trips'—leaving ambiguity about whether this returns route metadata versus searchable trip options.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus 'ferryhopper_search_trips' or 'ferryhopper_get_ports'. There is no mention of prerequisites (e.g., whether the port must exist) or when to prefer direct connections over trip searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ferryhopper_get_ports (Grade B)

[ferryhopper] Get a list of global ports and their details

Parameters (JSON Schema)
  No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full disclosure burden. It mentions 'global' scope (useful) but fails to clarify pagination behavior, rate limits, what specific 'details' are returned, or whether the data is cached. For a data-retrieval tool with no output schema, this lacks necessary behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with the namespace prefix followed by the action. It is appropriately brief for a simple catalog tool, though the extreme brevity leaves room for additional context about output format without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description should compensate by explaining what port 'details' include (IDs, names, locations) and noting this is a prerequisite for other ferryhopper tools. It meets minimum viability but leaves significant gaps for an integration tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters, the baseline score applies. The description appropriately indicates this is a parameterless list operation requiring no filters, which aligns with the empty input schema. No additional parameter guidance is needed or possible.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('list of global ports and their details'), distinguishing it from sibling ferryhopper tools like 'ferryhopper_search_trips' and 'ferryhopper_get_direct_connections_for_ports' by being the catalog/listing operation. The '[ferryhopper]' prefix helps namespace it among airline/airport siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies this is a lookup tool (necessary for obtaining port identifiers to use with 'ferryhopper_search_trips'), it lacks explicit guidance on when to call this versus the connection-specific tool or how it relates to the trip search workflow. Usage must be inferred from sibling tool names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ferryhopper_search_trips (Grade C)

[ferryhopper] Get a list available ferry trips between two ports on a specific date

Parameters (JSON Schema)
  date (required): Departure date in ISO format YYYY-MM-DD (e.g. 2026-03-15).
  arrivalLocation (required): Arrival location as a human-readable name or search term (e.g. city, port name), not a port code.
  departureLocation (required): Departure location as a human-readable name or search term (e.g. city, port name), not a port code.
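This schema is fully described, so a client mainly needs to respect the ISO date format and pass human-readable names rather than port codes. A small sketch building the arguments, with the place names purely illustrative:

```python
from datetime import date

def search_trips_args(departure: str, arrival: str, when: date) -> dict:
    """Arguments for ferryhopper_search_trips, per the parameter descriptions."""
    # Human-readable names, not port codes; the date must be ISO YYYY-MM-DD,
    # which date.isoformat() guarantees.
    return {
        "departureLocation": departure,
        "arrivalLocation": arrival,
        "date": when.isoformat(),
    }

print(search_trips_args("Piraeus", "Santorini", date(2026, 3, 15))["date"])  # 2026-03-15
```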
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full disclosure burden. While 'Get a list' implies a read-only search operation, the description fails to specify what data is returned (prices, schedules, operators, availability), whether pagination is supported, or any rate limiting concerns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, appropriately front-loaded with the action verb 'Get'. Minor grammatical error ('list available' instead of 'list of available') prevents a perfect score, but overall efficient with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given three simple parameters with complete schema coverage and no nested objects, the input requirements are adequately addressed. However, lacking both annotations and an output schema, the description should ideally characterize the return structure (e.g., trip options with times/prices) to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for each parameter (ISO date format, human-readable location names). The description mentions 'between two ports' and 'specific date' which reinforces the schema semantics but adds no additional guidance on parameter interactions or validation rules beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves ferry trips (specific resource) between two ports on a specific date, distinguishing it from flight search siblings (kiwi_search-flight, skiplagged_sk_flights_search) and other ferryhopper tools like get_ports or get_direct_connections_for_ports. The '[ferryhopper]' prefix aids identification but slightly dilutes the sentence structure.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this versus alternatives. It does not clarify when to use this date-specific search versus 'ferryhopper_get_direct_connections_for_ports' (which finds general routes without dates) or whether users should validate ports first using 'ferryhopper_get_ports'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

kiwi_feedback-to-devs (Grade: C)

[kiwi] Send feedback to the dev of the Kiwi MCP server.

Parameters (JSON Schema)

• text (required): The content of the feedback. Don't hesitate to include any text relevant to the issue (logs, error message) if you are having one.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to explain what happens to the feedback after submission (e.g., creates a ticket, sends email, stored in database), whether the operation is idempotent, or if there are rate limits. The description only states the intent, not the mechanism or side effects.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief at one sentence. However, the '[kiwi]' prefix at the beginning appears to be categorization metadata leaking into the description field rather than essential descriptive content, slightly reducing the structural quality.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single string parameter, no output schema), the description is minimally adequate. However, for a feedback submission tool with no annotations, it should ideally disclose what confirmation or response the user can expect, or whether the feedback is anonymous.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input parameter 'text' is fully documented in the schema itself, including guidance on including logs and error messages. The tool description does not mention parameters, which is acceptable given the schema completeness. The baseline score of 3 is appropriate.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (send feedback) and target (dev of the Kiwi MCP server), effectively distinguishing it from the numerous travel-related sibling tools (airlines, hotels, tours, etc.). However, the '[kiwi]' prefix appears to be metadata leakage rather than descriptive content, and it doesn't specify what types of feedback are appropriate (bugs vs. feature requests).

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, or under what circumstances (e.g., encountering bugs, errors, or feature requests). While there are no direct sibling alternatives for feedback submission, the description fails to indicate appropriate triggers for invocation.

kiwi_search-flight (Grade: A)

[kiwi] Search for a flight

Description

Uses the Kiwi API to search for available flights between two locations on a specific date.

How it works

The tool will:

  1. Search for matching locations to resolve airport codes

  2. Find available flights for the specified route and date range

Method

Call this tool whenever a user wants to search for flights, regardless of whether they provided exact airport codes or just city names.

You should display the returned results in a markdown table format: Group the results by price (those who are the cheapest), duration (those who are the shortest, i.e. have the smallest 'totalDurationInSeconds') and the rest (those that could still be interesting).

Always display for each flight in order:

  • In the 1st column: The departure and arrival airports, including layovers (e.g. "Paris CDG → Barcelona BCN → Lisbon LIS")

  • In the 2nd column: The departure and arrival dates & times in the local timezones, and duration of the flight (e.g. "03/08 06:05 → 09:30 (3h 25m)", use 'durationInSeconds' to display the duration and not 'totalDurationInSeconds')

  • In the 3rd column: The cabin class (e.g. "Economy")

  • (In case of return flight only) In the 4th column: The return flight departure and arrival airports, including layovers (e.g. "Paris CDG → Barcelona BCN → Lisbon LIS")

  • (In case of return flight only) In the 5th column: The return flight departure and arrival dates & times in the local timezones, and duration of the flight (e.g. "03/08 06:05 → 09:30 (3h 25m)", use 'return.durationInSeconds' to display the duration)

  • (In case of return flight only) In the 6th column: The return flight cabin class (e.g. "Economy")

  • In the previous-to-last column: The total price of the flight

  • In the last column: The deep link to book the flight

Finally, provide a summary highlighting the best prices, the shortest flights and a recommendation. End wishing a nice trip to the user with a short fun fact about the destination!
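The grouping the description asks for (cheapest, shortest, and the rest) can be sketched in Python. This is a minimal illustration, not the server's implementation: only the field name 'totalDurationInSeconds' comes from the tool description, and the flight data is invented.

```python
# Hypothetical flight results; "totalDurationInSeconds" is the field
# the tool description names, everything else is made up for the sketch.
flights = [
    {"route": "CDG -> LIS", "price": 120, "totalDurationInSeconds": 12300},
    {"route": "CDG -> BCN -> LIS", "price": 85, "totalDurationInSeconds": 21600},
    {"route": "ORY -> LIS", "price": 140, "totalDurationInSeconds": 11100},
]

# Cheapest by price, shortest by total duration, remainder as "the rest".
cheapest = min(flights, key=lambda f: f["price"])
shortest = min(flights, key=lambda f: f["totalDurationInSeconds"])
rest = [f for f in flights if f is not cheapest and f is not shortest]
```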

Parameters (JSON Schema)

• curr (optional, default: EUR): Currency for response (examples: EUR, USD, GBP, JPY, CAD, AUD, NZD, CHF etc.)
• sort (optional, default: date): Sort results by: price, duration, quality or date (default: date)
• flyTo (required): Location to fly to: It could be a city or an airport name or code
• locale (optional, default: en): Language of city names and kiwi.com website links (examples: en, uk, de, fr, es, it, ru etc.)
• flyFrom (required): Location to fly from: It could be a city or an airport name or code
• cabinClass (optional): Cabin class: M (economy), W (economy premium), C (business), F (first class)
• passengers (optional): Passengers details. The total number of passengers must be between 1 and 9. There must be at least one adult. There must be at least one adult per infant.
• returnDate (optional): Return date in dd/mm/yyyy format
• departureDate (required): Departure date in dd/mm/yyyy format
• returnDateFlexRange (optional): Return date flexibility range in days (0 to 3 days before/after the selected return date)
• departureDateFlexRange (optional): Departure date flexibility range in days (0 to 3 days before/after the selected departure date)
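Note that this tool expects dd/mm/yyyy dates, unlike the ISO YYYY-MM-DD dates used by the ferryhopper and peek tools. A hedged sketch of pre-validating an argument payload before invocation; the parameter names come from the schema, the values are invented:

```python
from datetime import datetime

# Hypothetical example arguments; names mirror the schema above.
args = {
    "flyFrom": "Paris",
    "flyTo": "LIS",
    "departureDate": "03/08/2025",  # dd/mm/yyyy, not ISO
    "returnDate": "10/08/2025",
    "cabinClass": "M",              # M = economy
    "curr": "EUR",
}

# strptime raises ValueError on a malformed date, catching the common
# mistake of passing ISO-formatted dates to this tool.
dep = datetime.strptime(args["departureDate"], "%d/%m/%Y")
ret = datetime.strptime(args["returnDate"], "%d/%m/%Y")
assert ret >= dep, "return date must not precede departure"
```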
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool performs location resolution ('Search for matching locations to resolve airport codes') and searches date ranges. However, it omits the safety profile (read-only vs destructive), rate limits, error behaviors, and authentication requirements.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Severely bloated with extensive output formatting instructions (markdown table specifications, column ordering, fun facts) that belong in system prompts rather than tool descriptions. While front-loaded with purpose, the majority of text distracts from tool selection and invocation logic.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description comprehensively documents the return data structure through detailed display instructions (mentioning fields like durationInSeconds, totalDurationInSeconds, deep links). Missing only error handling and edge case behaviors for full completeness.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions accepting city names or airport codes, but this merely restates the schema descriptions for flyFrom/flyTo ('It could be a city or an airport name or code'), adding no new semantic value.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool 'searches for available flights between two locations' using the Kiwi API. However, it does not distinguish from sibling flight search tools like 'skiplagged_sk_flights_search', leaving ambiguity about which flight provider to use.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance: 'Call this tool whenever a user wants to search for flights, regardless of whether they provided exact airport codes or just city names.' Lacks exclusions or explicit comparison to alternative flight search siblings.

peek_experience_availability (Grade: B)

[peek] Get availability information for a specific experience including dates, times, and pricing

Parameters (JSON Schema)

• id (required): The ID of the experience
• endDate (required): End date inclusive in YYYY-MM-DD format (e.g., '2025-06-20' would return things taking place ON or BEFORE the 20th)
• quantity (required): Number of travelers
• startDate (required): Start date inclusive YYYY-MM-DD format (e.g., '2025-06-19')
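Because both date bounds are inclusive, querying a single day means passing the same date twice. A small sketch under that reading; the parameter names come from the schema, while the experience ID and values are hypothetical:

```python
from datetime import date

def availability_args(experience_id, day, travelers):
    # Both bounds are inclusive per the schema, so a one-day window
    # repeats the same ISO date. Values here are hypothetical.
    iso = day.isoformat()  # YYYY-MM-DD
    return {
        "id": experience_id,
        "startDate": iso,
        "endDate": iso,
        "quantity": travelers,
    }

args = availability_args("exp_123", date(2025, 6, 19), 2)
```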
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it lists what data is returned (dates, times, pricing), it fails to indicate if this is a safe read-only operation, whether it requires specific permissions, rate limits, or error handling behavior (e.g., what happens if the ID is invalid).

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no redundant words. The '[peek]' prefix appears to be a provider tag, and the remainder immediately conveys the action and return value without waste.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description partially compensates by listing the types of data returned (dates, times, pricing), but does not describe the response structure or format. With complete input schema coverage but no annotations, the description meets minimum viability but lacks operational context.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all four parameters (id, quantity, startDate, endDate). The description implies the 'id' and date parameters by referencing 'specific experience' and 'dates', but adds no syntax or semantic details beyond the schema, warranting the baseline score of 3.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Get[s] availability information for a specific experience' with specific outputs (dates, times, pricing). However, it does not explicitly differentiate from sibling 'peek_experience_details', which likely retrieves general metadata rather than availability.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'peek_experience_details' or 'peek_search_experiences'. It does not mention prerequisites (e.g., needing an experience ID from search results) or exclusion criteria.

peek_experience_details (Grade: C)

[peek] Get detailed information about a specific experience by ID

Parameters (JSON Schema)

• id (required): The ID of the experience to retrieve
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, but offers almost none. It does not confirm the operation is read-only (though 'Get' implies it), describe rate limits, caching behavior, or—critically—what 'detailed information' includes (description, photos, pricing, location) given the absence of an output schema.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief and front-loaded with the verb. The '[peek]' prefix appears to be metadata or branding leakage that adds no semantic value, but the single sentence efficiently conveys the core operation without redundancy.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter lookup tool, the description is minimally viable. However, given the lack of an output schema and the domain-specific term 'experience' (referring to Peek's tour/activity inventory), the description should ideally sketch what details are returned to help the agent assess utility.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'by ID', reinforcing the parameter's purpose, but adds no further semantics regarding ID format, where to obtain valid IDs, or validation rules beyond what the schema already states.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Get[s] detailed information about a specific experience by ID', providing a specific verb, resource, and scope (by ID). It implicitly distinguishes from sibling 'peek_search_experiences' (search vs. lookup by ID), though it could more explicitly differentiate from 'peek_experience_availability'.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'peek_search_experiences' (which should be used first to obtain an ID) or 'peek_experience_availability' (for scheduling data). There are no prerequisites or workflow hints.

peek_list_tags (Grade: B)

[peek] List all category tags

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, yet the description fails to disclose behavioral traits such as whether results are cached, pagination limits, rate limiting, or what data structure is returned (strings vs objects). For a zero-parameter tool, this metadata is essential since the schema conveys no information.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief for a simple list operation. However, the '[peek]' prefix is semantically wasteful—it appears to be metadata or branding that belongs in the title field (which is null) rather than the functional description.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the tool is simple (no input params), the lack of output schema means the description should indicate the return format (e.g., array of tag names/IDs). Given no annotations and no output schema, the description is adequate but minimal—missing context on what constitutes a 'category tag' in this domain.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters. According to the evaluation rules, zero parameters establish a baseline score of 4. The description doesn't need to compensate for missing schema documentation since there are no parameters to document.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific verb ('List') and resource ('category tags'), and the tool name 'peek_list_tags' combined with sibling tools (peek_search_experiences, etc.) clarifies this relates to Peek activity categories. However, it doesn't explicitly state these are experience/activity category tags versus other taxonomies.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus the other peek_* tools like peek_search_experiences. Doesn't indicate if this is a prerequisite step for category discovery before searching, or merely a reference utility.

peek_render_activity_tiles (Grade: A)

[peek] Render activity tiles for a list of activity IDs, returning an embeddable widget URI

Parameters (JSON Schema)

• id (required): ID or comma separate list of activity IDs to render as tiles
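The schema's "comma separate list" phrasing suggests multiple IDs are joined into a single string rather than passed as an array. A sketch under that assumption; the activity IDs are invented:

```python
# Hypothetical activity IDs; the comma-joined form follows the
# schema's "comma separate list" wording (an assumption, not documented).
activity_ids = ["act_101", "act_202", "act_303"]
args = {"id": ",".join(activity_ids)}
```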
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses the critical behavioral trait that it returns a URI/reference (embeddable widget) rather than raw activity data or rendered HTML. However, it omits cache behavior, auth requirements, or error handling for invalid IDs.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the action verb. Every clause adds value. Minor deduction for the '[peek]' prefix which serves as branding/context but slightly disrupts the flow without adding functional meaning.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description appropriately compensates by specifying the return value type ('embeddable widget URI'). For a single-parameter tool with high schema coverage, the description provides sufficient context for correct invocation.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is met. The description reinforces that the IDs represent 'activity tiles' but does not add syntax details (like comma-separation) beyond what the schema already provides. It confirms the logical input is a list.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Render') with clear resource ('activity tiles') and explicitly distinguishes from sibling peek tools by stating the unique output format ('embeddable widget URI'). The '[peek]' prefix also signals the platform context.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies this is for embedding/visual display purposes via 'embeddable widget URI', it lacks explicit when-to-use guidance contrasting it with siblings like peek_experience_details or peek_search_experiences. No alternatives or exclusions are named.

peek_search_experiences (Grade: B)

[peek] Search for travel experiences with comprehensive filtering options. Returns available categories, tags, and regions with IDs for further filtering.

Parameters (JSON Schema)

• query (optional): When the user wants something w/ a specific keyword (bike, beer, art, etc) limit to experiences whose title contain a keyword. Never include location information.
• tagId (optional): When you have determined the user is interest in a specific vibe of activity (family friendly, romantic, etc) limit to only experiences with a specific tag (single tag ID)
• latLng (optional): When the user wants something NEAR a specific place, but not necessarily IN a specific place, limit to only those near a given lat_lng. ex: "37.7799,-122.2822". Don't use this for regions, instead use the search_regions and provide a region id. this is a good fallback if a specific region is lacking inventory.
• endDate (optional): Return experiences that are available on or before this date in YYYY-MM-DD format (e.g., '2025-06-20' would return things taking place ON or BEFORE the 20th)
• regionId (optional): When you have determined the user wants something in a specific region (found w/ search_regions) limit to only a specific region ID
• startDate (optional): Return experiences that are available on or after this date. YYYY-MM-DD format (e.g., '2025-06-19')
• categoryId (optional): Limit to only a specific activity category
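The schema guidance implies a workflow: resolve a region via search_regions when possible, fall back to latLng only when a region lacks inventory, and never combine the two. A hedged sketch of building arguments under that reading; parameter names mirror the schema, values are hypothetical:

```python
def search_args(query=None, region_id=None, lat_lng=None, tag_id=None):
    # Per the schema guidance, regionId (from peek_search_regions) is
    # preferred and latLng is a fallback; treating them as mutually
    # exclusive is this sketch's interpretation, not documented behavior.
    if region_id and lat_lng:
        raise ValueError("use regionId or latLng, not both")
    args = {}
    if query:
        args["query"] = query          # keyword only, no location info
    if region_id:
        args["regionId"] = region_id
    elif lat_lng:
        args["latLng"] = lat_lng       # e.g. "37.7799,-122.2822"
    if tag_id:
        args["tagId"] = tag_id
    return args

args = search_args(query="bike", region_id="region_42")
```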
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It partially compensates by disclosing what the return payload contains ('categories, tags, and regions with IDs'), which is valuable given the lack of an output schema. However, it fails to disclose safety characteristics (read-only vs. destructive), rate limits, or pagination behavior expected for a search tool with 7 optional parameters.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief at two sentences. The critical information (search action and return structure) is front-loaded, though the '[peek]' vendor prefix in the first sentence is structural noise that doesn't aid the LLM. No redundant or wasted sentences beyond that prefix.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters with 100% schema coverage, the description doesn't need to elaborate on inputs. With no output schema, it adequately compensates by describing the return values (categories, tags, regions with IDs). However, for a complex search tool with no annotations, it should disclose the read-only/safe nature of the operation and any pagination logic, which are absent.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description references 'comprehensive filtering options,' which collectively acknowledges the parameter purpose, but adds no specific semantic detail about individual parameters (e.g., date formats, coordinate precision) beyond what the schema already provides.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the core action ('Search for travel experiences') and resource type. The '[peek]' prefix is noise, but the first sentence establishes the verb and object clearly. It partially distinguishes from siblings like peek_search_regions by noting it returns IDs for 'further filtering,' implying this is a discovery tool, though it could more explicitly contrast with peek_experience_details.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'for further filtering' implies this is an initial discovery step before using other tools (like peek_experience_details), providing implicit workflow guidance. However, it lacks explicit 'when to use' guidance, prerequisites, or named alternatives. It does not clarify why one should use this versus peek_list_tags or peek_search_regions directly.

peek_search_regions (Grade: C)

[peek] Search for regions by name

Parameters (JSON Schema)

• limit (optional, default: 50): Maximum number of regions to return (default: 50)
• query (required): Search query to match against region names
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It fails to indicate whether results are paginated, what fields are returned (IDs, coordinates, names?), rate limits, or authentication requirements. The '[peek]' tag hints at the service provider but provides no operational context.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at six words with no filler. However, given the lack of annotations and output schema, this brevity may constitute under-specification rather than efficient communication. The '[peek]' prefix at the start is useful categorization.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a search tool with no output schema. The description should explain what data is returned (region IDs needed for downstream 'peek_experience_*' calls?) and how results relate to the broader Peek tool ecosystem. Without annotations or an output schema, the description must compensate more.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents both parameters. The description adds minimal semantic value beyond the schema, merely restating that the tool searches by name. Baseline score applies since schema descriptions are complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (search) and resource (regions) with the '[peek]' prefix distinguishing it from non-Peek siblings. However, it lacks specificity about what constitutes a 'region' in the Peek context (e.g., geographic areas vs. administrative boundaries) which would help differentiate it from 'peek_search_experiences'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like 'peek_search_experiences' or 'airports_search'. Given the sibling ecosystem, explicit guidance on whether regions are prerequisites for experience searches would be valuable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

restcountries_country_info (Grade: C)

[restcountries] Look up country information from REST Countries.

Parameters (JSON Schema)

- query (required): (no description)
- fields (optional): (no description)
- search_by (optional): (no description; default: name)

Output Schema (JSON Schema)

- result (required): (no description)
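The undocumented schema flagged below makes it worth spelling out what a call might look like. Everything in this sketch beyond the parameter names and the 'name' default is an assumption, which is exactly the problem the evaluation identifies.

```python
# Hypothetical payload for restcountries_country_info. The schema leaves
# 'query', 'fields', and 'search_by' undocumented, so the values here are
# guesses; only the parameter names and the 'name' default come from the
# table above.
args = {
    "query": "Japan",     # required; expected format unspecified (full name? ISO code?)
    "search_by": "name",  # optional; 'name' is the documented default
    # "fields": ...       # optional; filtering syntax unknown, so omitted here
}
assert args.get("search_by", "name") == "name"
```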
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions the external API (REST Countries) but discloses no behavioral traits such as rate limits, caching behavior, what happens when a country is not found, or what specific data fields are returned (though an output schema exists).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is appropriately concise and front-loaded with the essential purpose. However, given the complete lack of parameter documentation in the schema, the description is inappropriately brief—conciseness becomes under-specification when critical usage details are omitted.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for the tool's complexity. With 3 parameters, 0% schema coverage, no annotations, and a non-obvious query syntax (REST Countries API supports multiple search modes), the description should explain parameters and search behavior. It provides only the minimal purpose statement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description completely fails to compensate. It does not explain the required 'query' parameter (expected format: full name, ISO code?), the 'fields' parameter (filtering syntax?), or valid values for 'search_by' (only 'name' or others?). This is a critical gap for tool invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the basic action (look up) and resource (country information) and identifies the data source (REST Countries), which provides some specificity. However, it fails to distinguish what type of country information this provides (demographics, geography, flags) versus sibling tools like fcdo_travel_advice or tourradar country lists.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like fcdo_list_countries for travel advice or tourradar for tour destinations. No mention of prerequisites, search syntax requirements, or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

skiplagged_sk_destinations_anywhere (Grade: B)

[skiplagged] Find cheapest destinations from a departure city when flexible about where to go. Perfect for discovering travel opportunities and deals.

Parameters (JSON Schema)

- from (required): Departure IATA code or city name
- adults (optional): Number of adult passengers
- depart (required): Departure date in YYYY-MM-DD format
- return (optional): Return date in YYYY-MM-DD format (optional for one-way trips)
- children (optional): Number of child passengers
- fare_class (optional): Fare class preference (default: economy)
- infantsLap (optional): Number of lap infants
- renderMode (optional): Preferred render mode for tool output (ui or text)
- infantsSeat (optional): Number of seat infants
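A minimal sketch of an argument payload, assuming only the parameter names, requirements, and the 'economy' default from the table above; the departure city and date are invented.

```python
# Hypothetical argument payload for skiplagged_sk_destinations_anywhere,
# built only from the parameter table above; values are illustrative.
args = {
    "from": "BOS",            # required: departure IATA code or city name
    "depart": "2025-09-12",   # required: YYYY-MM-DD
    "adults": 2,              # optional
    "fare_class": "economy",  # optional; 'economy' is the documented default
}

required = {"from", "depart"}  # per the schema above
assert required <= set(args)
```

Note there is no destination parameter at all, which is what makes this a flexible-destination discovery tool.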
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. However, it fails to explain what data is returned (destination list format, price inclusion), rate limits, caching behavior, or the significance of the 'skiplagged' provider (hidden-city ticketing implications).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The two-sentence structure is appropriately front-loaded with the core function. The '[skiplagged]' prefix is slightly noisy but acceptable for provider identification in a multi-tool environment. No significant waste, though it could be more information-dense given the complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex search tool with 9 parameters and no output schema or annotations, the description is inadequate. It fails to describe the return structure (what does 'destinations' mean—airports, cities, prices?), pagination, or how results are sorted/ranked.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds minimal value beyond the schema by implying the flexible-destination nature (explaining the absence of a 'to' parameter), but doesn't elaborate on date formats, passenger combinations, or the renderMode parameter's practical impact.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (find cheapest destinations), the resource (destinations from a departure city), and the distinctive scope (when flexible about where to go). This effectively differentiates it from sibling tools like skiplagged_sk_flights_search which likely require a specific destination.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use the tool ('when flexible about where to go'), but lacks explicit guidance on when NOT to use it or named alternatives. It doesn't direct users to skiplagged_sk_flights_search for specific routes, leaving the distinction implied rather than explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

skiplagged_sk_flex_departure_calendar (Grade: A)

[skiplagged] Generate a flexible calendar of the lowest one-way fares around a chosen departure, returning date → cheapest-price entries to help pick the best day to fly. Intended for flexible-date price discovery, not exact itinerary selection.

Parameters (JSON Schema)

- sort (optional): Sort results chronologically or by lowest price first (default: date)
- adults (optional): Number of adult passengers (default: 1, max: 9)
- origin (required): Departure city name or IATA code (will be resolved to IATA code automatically)
- children (optional): Number of child passengers (default: 0, max: 8)
- infantsLap (optional): Number of lap infants (default: 0, max: 4)
- renderMode (optional): Preferred render mode for tool output (ui or text)
- returnDate (optional): Return date for matching trip length (optional, YYYY-MM-DD format)
- destination (required): Arrival city name or IATA code (will be resolved to IATA code automatically)
- infantsSeat (optional): Number of seat infants (default: 0, max: 4)
- departureDate (required): Preferred departure date (YYYY-MM-DD format)
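As a concrete illustration of the schema above, a plausible payload might look like the following. Parameter names, defaults, and maxima come from the table; the concrete route, date, and the 'price' sort value are assumptions.

```python
# Hypothetical argument payload for skiplagged_sk_flex_departure_calendar.
# Values are illustrative; limits in the comments mirror the table above.
args = {
    "origin": "Boston",             # required; resolved to an IATA code automatically
    "destination": "LIS",           # required; city name or IATA code
    "departureDate": "2025-10-03",  # required; center of the flexible range
    "adults": 2,                    # optional (default 1, max 9)
    "sort": "price",                # optional; the default 'date' sorts chronologically
}
assert 1 <= args.get("adults", 1) <= 9
```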
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses return format ('date → cheapest-price entries') and scope limitation, but lacks details on search radius (how many days 'around'?), rate limits, caching behavior, or currency/tax inclusion that would help an agent understand operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. Front-loaded with vendor prefix. First sentence establishes function and output format; second sentence clarifies intent and boundaries. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given rich schema (100% coverage) and conceptual explanation of return values ('date → cheapest-price entries'), the description adequately covers the tool's purpose. Minor gap: doesn't specify the temporal range of the 'around' search or output formatting details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description adds value by contextualizing departureDate as 'chosen departure' (center of flexible range) rather than exact flight date, and implying one-way focus that clarifies the optional nature of returnDate in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Generate') and resource ('flexible calendar') clearly stated. Distinguishes from siblings by specifying 'one-way fares' vs return calendar, and explicitly contrasting with 'exact itinerary selection' to differentiate from skiplagged_sk_flights_search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance ('flexible-date price discovery') and when-not-to-use ('not exact itinerary selection'). Would be perfect if it named the specific alternative tool (skiplagged_sk_flights_search) rather than just the category.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

skiplagged_sk_flex_return_calendar (Grade: A)

[skiplagged] Generate a flexible round-trip price calendar for a fixed-length stay around a chosen travel window. Returns (depart date, return date, lowest total price) entries for nearby date pairs that preserve the original trip length; intended for price discovery, not exact itinerary selection.

Parameters (JSON Schema)

- sort (optional): Sort results chronologically or by lowest price first (default: date)
- adults (optional): Number of adult passengers (default: 1, max: 9)
- origin (required): Departure city name or IATA code (will be resolved to IATA code automatically)
- children (optional): Number of child passengers (default: 0, max: 8)
- infantsLap (optional): Number of lap infants (default: 0, max: 4)
- renderMode (optional): Preferred render mode for tool output (ui or text)
- returnDate (required): Return date for the reference trip (YYYY-MM-DD format)
- destination (required): Arrival city name or IATA code (will be resolved to IATA code automatically)
- infantsSeat (optional): Number of seat infants (default: 0, max: 4)
- departureDate (required): Preferred departure date (YYYY-MM-DD format)
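The fixed-length-stay logic the description implies can be sketched as follows: alternative (depart, return) pairs keep the reference trip's duration. The dates are illustrative, and the offset window is an assumption, since the tool never defines how far 'nearby' extends.

```python
from datetime import date, timedelta

# Reference trip supplied via departureDate/returnDate.
depart = date(2025, 10, 3)
ret = date(2025, 10, 10)
trip_length = (ret - depart).days  # 7-night stay, preserved across all candidates

# Candidate date pairs the calendar would price; the +/-3-day window is assumed.
candidates = [
    (depart + timedelta(days=d), ret + timedelta(days=d))
    for d in range(-3, 4)
]
assert all((r - d).days == trip_length for d, r in candidates)
```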
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry the full burden. It successfully discloses the return structure (depart date, return date, price tuples) and the core logic (preserving trip length while varying dates). However, it lacks operational details such as search radius ('nearby' is undefined), rate limits, or caching behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two dense sentences with zero waste. First sentence establishes the function and constraints; second sentence describes the return format and intended use. Every clause provides essential information for tool selection.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description adequately compensates by detailing the return tuple structure. With 100% schema coverage and clear explanation of the flex algorithm, it covers the essential domain context. Minor gap: does not specify the temporal range of 'nearby' dates.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% description coverage (baseline 3), the description adds valuable semantic context about the relationship between departureDate and returnDate—that they define a fixed duration that is maintained across alternative date pairs. This explains the 'flex' logic beyond what the schema fields convey individually.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Generate') and clearly identifies the resource ('flexible round-trip price calendar'). It distinguishes itself from sibling tools by emphasizing 'fixed-length stay' and preserving 'original trip length,' implying the difference from skiplagged_sk_flex_departure_calendar and exact search tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the intended use case ('price discovery') and the non-intended use ('not exact itinerary selection'), providing clear behavioral guidance. However, it does not explicitly name which sibling tool (e.g., skiplagged_sk_flights_search) to use for actual itinerary selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

skiplagged_sk_hotel_details (Grade: A)

[skiplagged] Fetch room-level availability, pricing, and amenities for a specific hotel and stay dates.

Parameters (JSON Schema)

- live (optional): Use live rates (includes booking links). Disable for faster, less accurate cached rates.
- checkin (required): Check-in date (YYYY-MM-DD format)
- hotelId (required): Hotel ID (from search results) to fetch detailed availability for
- checkout (required): Check-out date (YYYY-MM-DD format)
- numRooms (optional): Number of rooms (default: 1, max: 5)
- numAdults (optional): Number of adults (default: 2, max: 10)
- renderMode (optional): Preferred render mode for tool output (ui or text)
- numChildren (optional): Number of children (default: 0, max: 10)
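A hedged sketch of a call payload, assuming the parameter names and limits from the table above; the hotel ID format and all values are invented, since hotelId must come from a prior search.

```python
# Hypothetical payload for skiplagged_sk_hotel_details. 'hotelId' comes from
# a prior hotel search; its format here is assumed, not documented.
args = {
    "hotelId": "h_12345",      # required: ID from search results
    "checkin": "2025-11-01",   # required (YYYY-MM-DD)
    "checkout": "2025-11-04",  # required (YYYY-MM-DD)
    "numRooms": 1,             # optional (default 1, max 5)
    "live": True,              # optional: live rates include booking links
}
# ISO dates compare correctly as strings, so a basic sanity check is cheap:
assert args["checkin"] < args["checkout"]
```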
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses data granularity ('room-level') and return data types, but omits operational details like rate limits, caching behavior, or failure modes that would help the agent predict tool behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. Every phrase earns its place: '[skiplagged]' identifies provider, 'room-level' specifies granularity, and 'specific hotel' clarifies scope. Front-loaded and appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but the description compensates by listing return data categories (availability, pricing, amenities). With 100% input schema coverage, the description provides adequate context for an 8-parameter tool, though behavioral caveats would strengthen it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are fully self-documenting. The description references key concepts (stay dates, specific hotel) that map to parameters but adds no technical syntax, validation rules, or semantic relationships beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Uses specific verb 'Fetch' with clear resource scope ('room-level availability, pricing, and amenities'). The phrase 'specific hotel' effectively distinguishes this from sibling search tools like 'skiplagged_sk_hotels_search' by implying a prior selection step.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context through 'specific hotel' (suggesting a hotel ID from prior search is required), but lacks explicit workflow guidance, alternatives naming, or prerequisites compared to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

skiplagged_sk_resolve_iata (Grade: B)

[skiplagged] Resolve a city name to a valid IATA code

Parameters (JSON Schema)

- input (required): City name code to resolve to IATA code
- renderMode (optional): Preferred render mode for tool output (ui or text)
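The sibling tools imply a resolve-then-search workflow: turn a free-text city into an IATA code first, then feed it to a flight search. The sketch below assumes a generic `call_tool` client helper (hypothetical, not part of this server); only the tool names and parameter names come from this page.

```python
# Sketch of the implied workflow. 'call_tool' is a hypothetical MCP client
# helper; the tool and parameter names come from the tables on this page.
def plan(call_tool, city: str, depart: str) -> dict:
    iata = call_tool("skiplagged_sk_resolve_iata", {"input": city})
    return call_tool("skiplagged_sk_destinations_anywhere",
                     {"from": iata, "depart": depart})

# Stubbed client for illustration only.
def fake_call(name, args):
    return "BOS" if name == "skiplagged_sk_resolve_iata" else {"from": args["from"]}

assert plan(fake_call, "Boston", "2025-09-12")["from"] == "BOS"
```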
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention error handling (what if city not found?), ambiguity resolution (multi-airport cities like 'New York'), return format, or whether matching is fuzzy/exact.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise single sentence with no extraneous information. The '[skiplagged]' prefix is minimal metadata, and the functional description is front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 2-parameter lookup tool, the description covers the core contract but lacks important context given zero annotations and no output schema—specifically how ambiguous city names are handled and what the return structure looks like.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, documenting both 'input' (city name) and 'renderMode' (output format). The description adds no additional parameter context (e.g., expected city name format), but baseline 3 is appropriate since the schema fully documents the interface.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the transformation (resolve city name → IATA code) with a specific verb and output format. However, it does not explicitly differentiate from the sibling tool 'skiplagged_sk_resolve_location', leaving ambiguity about which resolution tool to use.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'skiplagged_sk_resolve_location' or 'airports_lookup'. No mention of prerequisites (e.g., does the city need to be exact match?) or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

skiplagged_sk_resolve_location (Grade: A)

[skiplagged] Resolve latitude and longitude to the IATA code of nearest airport and city information

Parameters (JSON Schema)

- lat (required): latitude
- lng (required): longitude
- renderMode (optional): Preferred render mode for tool output (ui or text)
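Since the schema documents 'lat'/'lng' only as 'latitude'/'longitude', a caller has to assume the usual coordinate conventions. The bounds below are the standard WGS84 ranges, an assumption the tool never states; the coordinates are illustrative.

```python
# Illustrative input check for skiplagged_sk_resolve_location. The WGS84-style
# bounds below are a standard assumption, not stated by the tool's schema.
args = {"lat": 42.3656, "lng": -71.0096}  # coordinates near Boston, for illustration
assert -90.0 <= args["lat"] <= 90.0
assert -180.0 <= args["lng"] <= 180.0
```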
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully notes the 'nearest airport' selection behavior (returning one result, not a list) and describes the return values (IATA code and city information) compensating for the missing output schema. However, it omits details on search radius, error handling when no airport is nearby, or rate limiting.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence structure with zero wasted words. The information is front-loaded with the provider namespace '[skiplagged]' followed immediately by the verb and resource mapping. Every clause serves to clarify the transformation from input to output.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 simple parameters) and lack of output schema, the description adequately compensates by specifying the return format (IATA code and city information). It successfully communicates the core functionality, though it could be improved by noting the geographic search radius or error response behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage ('latitude', 'longitude', 'Preferred render mode'), establishing a baseline of 3. The description mentions 'latitude and longitude' but does not augment the schema with additional semantic details such as valid coordinate ranges, decimal precision requirements, or examples. It meets but does not exceed the schema's semantic coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description provides specific verb ('Resolve'), clear input resource ('latitude and longitude'), and specific output ('IATA code of nearest airport and city information'). The '[skiplagged]' prefix distinguishes the provider, and the explicit lat/lng input clearly differentiates it from sibling tool 'skiplagged_sk_resolve_iata' which likely performs the inverse operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying coordinate inputs, indicating it should be used when exact latitude/longitude are available. However, it lacks explicit guidance on when to use this versus siblings like 'airports_near' or 'skiplagged_sk_resolve_iata', and does not mention exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tourradar_b2b-continents-list (Grade: A)

[tourradar] Use this when you need continent IDs for filtering tours by region.

Returns a list of all supported continents with their IDs and names for use in tour search filters.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It correctly identifies the operation as read-only by stating it 'Returns a list,' and specifies the payload contents (IDs and names). However, it lacks details on rate limits, caching behavior, authentication requirements, or error conditions that would fully inform an agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero waste. It is front-loaded with usage context ('Use this when...'), followed by behavioral specifics ('Returns a list...'). Every sentence earns its place by providing distinct value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (zero parameters, simple list return) and absence of an output schema, the description adequately compensates by explaining what data is returned (continents with IDs and names) and its purpose (tour search filters). It is complete enough for a lookup utility, though it could benefit from mentioning data freshness or pagination.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters (empty object). According to the evaluation baseline, tools with zero parameters receive a baseline score of 4, as there are no parameter semantics to elaborate upon in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Returns a list of all supported continents with their IDs and names,' providing a specific verb (returns/list) and resource (continents). It clearly distinguishes itself from sibling tools like tourradar_b2b-countries-list by specifying 'continent IDs for filtering tours by region' versus country or city lookups.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The opening sentence '[tourradar] Use this when you need continent IDs for filtering tours by region' provides explicit when-to-use guidance. However, it lacks explicit when-not-to-use guidance or named alternatives (e.g., it doesn't mention to use countries-list for country-level filtering instead).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tourradar_b2b-countries-list (Grade: A)

[tourradar] Use this when you need to look up country IDs for filtering tours or validating country names.

Returns a complete list of all supported countries with their IDs, names, and ISO country codes.

Parameters (JSON Schema)

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses return structure (IDs, names, ISO codes) and implies it's a read operation, but lacks details on data freshness, payload size, caching, or rate limits that would help an agent understand operational characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
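For context, the "annotations" this rubric refers to are MCP tool annotation hints. A sketch of what explicit behavioral disclosure could look like for a read-only lookup tool; the annotation values below are illustrative assumptions, not anything this server publishes:

```python
# Sketch of MCP tool annotations (behavioral hints defined by the Model
# Context Protocol spec) that would offload disclosure from the prose
# description. The annotation values here are illustrative assumptions.
tool_definition = {
    "name": "tourradar_b2b-countries-list",
    "description": "Returns a complete list of all supported countries ...",
    "inputSchema": {"type": "object", "properties": {}},
    "annotations": {
        "readOnlyHint": True,     # pure lookup, no side effects
        "idempotentHint": True,   # repeated calls are safe
        "openWorldHint": True,    # reaches an external (TourRadar) API
    },
}
```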

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence provides usage guidance, second describes return values. Efficient structure with critical information front-loaded (including the [tourradar] namespace tag).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter reference tool without output schema, the description adequately covers the return structure (IDs, names, ISO codes) and use cases. Minor gap regarding data volume or update frequency prevents a 5.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters. With zero parameters, baseline score is 4 as no parameter semantic clarification is required or possible.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool looks up 'country IDs for filtering tours or validating country names' and returns 'a complete list of all supported countries.' Specific verb+resource combination distinguishes it from sibling tools like cities-search, continents-list, and tour-search.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('when you need to look up country IDs for filtering tours or validating country names'), providing clear context for selection. Lacks explicit 'when not to use' or named alternative tools, preventing a perfect score.

tourradar_b2b-currencies-list (Grade: A)

[tourradar] Use this when you need to check supported currencies or get currency details like symbols.

Returns a list of all supported currencies with their codes (USD, EUR), names, and symbols ($, €).

Parameters (JSON Schema)

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adequately describes the return value structure ('list of all supported currencies with their codes... names, and symbols'), but lacks safety indicators (read-only status, side effects, rate limits) that would help an agent understand operational constraints.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of exactly two high-value sentences: one for usage conditions and one for return value specification. The '[tourradar]' prefix serves as useful namespacing metadata. No redundancy or filler text is present.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters, no output schema), the description provides sufficient context by detailing the return format (codes, names, symbols). For a static lookup endpoint, this coverage is adequate, though mention of B2B-specific constraints would elevate it to a 5.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. According to scoring rules, 0 parameters establishes a baseline score of 4. The description correctly does not invent parameters, maintaining alignment with the empty schema.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'check[s] supported currencies or get[s] currency details like symbols,' providing a specific verb + resource combination. It distinguishes from siblings like tourradar_b2b-countries-list and tourradar_b2b-languages-list by explicitly focusing on currency codes and symbols.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description opens with explicit usage guidance: 'Use this when you need to check supported currencies...' providing clear when-to-use context. While it does not explicitly state when NOT to use it or name alternatives, no sibling tools provide currency lookup functionality, making this omission minor.

tourradar_b2b-languages-list (Grade: A)

[tourradar] Use this when you need language IDs for filtering tours by guide language.

Returns a list of all supported languages with their IDs, codes, and names for use in tour search filters.

Parameters (JSON Schema)

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully compensates for the missing output schema by describing the return structure (IDs, codes, and names). However, it lacks other critical behavioral context such as rate limits, caching behavior, authentication requirements, or explicit confirmation that this is a read-only operation.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two highly efficient sentences. The first establishes the use case context, and the second describes the return payload. There is no redundant text, and the '[tourradar]' prefix effectively namespaces the tool without cluttering the descriptive content.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (zero parameters, simple list return), the description is nearly complete. It explains the return values since no output schema exists and connects the tool to its ecosystem (tour search filters). It could be improved by explicitly naming the specific parameters in sibling tools that accept these language IDs.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, establishing a baseline score of 4. The description correctly omits parameter discussion since none exist, requiring no additional semantic clarification beyond what the empty schema already communicates.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a list of supported languages with their IDs, codes, and names. It specifies the exact resource (languages) and action (returns/list), and distinguishes itself from siblings by clarifying its specific purpose: obtaining language IDs for filtering tours by guide language.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use guidance ('Use this when you need language IDs for filtering tours by guide language') and context ('for use in tour search filters'). However, it does not explicitly name sibling tools (like tourradar_vertex-tour-search) that consume these IDs or describe when not to use this tool.

tourradar_b2b-operator-details (Grade: A)

[tourradar] Use this when the user wants information about a tour operator, such as their terms and policies.

Fetches operator details including name, code, and terms & conditions based on operator ID.

Parameters (JSON Schema)
Name | Required | Description
operatorId | Yes | (no description)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It compensates partially by listing specific return fields (name, code, terms), but omits safety classification (read-only vs destructive), error handling (invalid operatorId), or rate limiting context.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with clear separation of usage context (first) and functional description (second). The '[tourradar]' tag at the start is unnecessary clutter given the tool naming convention, but the description is otherwise efficient, with no redundant phrasing.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple single-parameter lookup tool. Covers the essential contract: input (operatorId), output fields (name, code, terms), and trigger condition. Missing only the origin/source of valid operator IDs, which would complete the usage loop.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (operatorId lacks description). The text mentions 'based on operator ID' which maps to the parameter, but fails to fully compensate by explaining what constitutes a valid operator ID or where to obtain it (e.g., from search results). Just meets baseline expectations.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (tour operator) and specific data retrieved (name, code, terms & conditions). It implicitly distinguishes from siblings like 'tourradar_b2b-tour-details' (tours vs operators) and 'tourradar_algolia-operator-search' (lookup by ID vs search). The '[tourradar]' prefix is redundant noise.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides a clear trigger ('when the user wants information about a tour operator'), but fails to mention the critical sibling relationship with 'tourradar_algolia-operator-search'—users typically need to search/find operators before looking up details by ID. No 'when not to use' guidance provided.

tourradar_b2b-tour-departures (Grade: A)

[tourradar] Use this when the user asks about available departure dates, pricing for specific dates, or needs to validate if a departure date is available.

Returns a list of departures for a specific tour within a date range, including availability status, pricing, and booking information.

Parameters (JSON Schema)
Name | Required | Description
page | No | Page number. Default is 1
tourId | Yes | Tour ID
dateRange | Yes | Returns only departure dates in the desired range
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully describes the return payload ('list of departures... including availability status, pricing, and booking information'), but fails to disclose operational traits like read-only safety, pagination limits, or error handling behavior.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two highly efficient sentences. The first is front-loaded with the trigger condition, and the second explains the return value. No redundancy or filler text exists; every word serves a distinct purpose.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately compensates by detailing what the tool returns (departures with availability, pricing, and booking info). Minor gap: it does not mention that tourId typically comes from companion search tools, but overall coverage is sufficient for a list/query tool.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all three parameters (tourId, dateRange, page). The description references 'specific tour' and 'date range' which align with parameters, but adds no additional semantic context (e.g., date format details, pagination behavior) beyond what the schema already provides. Baseline 3 is appropriate.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool's purpose with specific verbs ('returns') and resources ('departures', 'availability status', 'pricing'). It distinguishes from siblings like tourradar_b2b-tour-details (general info) and tourradar_web-tour-booking (reservation) by focusing specifically on listing available departure dates within a range.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The first sentence provides explicit when-to-use guidance ('when the user asks about available departure dates, pricing for specific dates, or needs to validate if a departure date is available'). However, it lacks explicit when-NOT-to-use guidance or mention of prerequisite steps (e.g., obtaining tourId from search tools).

tourradar_b2b-tour-details (Grade: B)

[tourradar] Use this when the user wants to see detailed information about a specific tour.

Fetches comprehensive tour details including itinerary, pricing, operator info, images, and booking links based on tour ID.

Parameters (JSON Schema)
Name | Required | Description
tourId | Yes | (no description)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry the full burden. It compensates partially by listing return fields (itinerary, pricing, images, booking links), but fails to disclose safety properties (read-only vs destructive), authentication requirements, or error conditions. 'Fetches' implies a read operation, but this isn't explicitly confirmed.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. The '[tourradar]' prefix provides useful namespacing, and the structure front-loads the usage trigger before the technical capability. A minor deduction for the bracketed prefix's informality.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter tool with no output schema, as it describes the return payload contents. However, gaps remain: it doesn't explain the parameter's semantics despite zero schema documentation, and doesn't clarify relationships with the numerous sibling tourradar tools that also require tour IDs.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage for the tourId parameter. The description mentions 'based on tour ID' which acknowledges the parameter's existence, but doesn't explain what a tour ID represents, where to obtain it (e.g., from vertex-tour-search), or the exclusiveMinimum constraint. Barely meets minimum viability for undocumented schemas.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'Fetches comprehensive tour details' with specific examples (itinerary, pricing, operator info), distinguishing it from general search tools. However, it doesn't explicitly differentiate from sibling tour-specific tools like tour-departures or tour-faq.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides a clear 'Use this when...' trigger for detailed tour information, but lacks explicit guidance on when NOT to use it (e.g., for availability dates use tour-departures instead) and doesn't mention the prerequisite of obtaining a tourId from search tools first.

tourradar_b2b-tour-faq (Grade: A)

[tourradar] Use this when the user has questions about a tour that might be answered in the FAQ section.

Returns a paginated list of frequently asked questions and answers about a specific tour, covering topics like inclusions, requirements, and policies.

Parameters (JSON Schema)
Name | Required | Description
page | No | Page number. Default is 1
tourId | Yes | (no description)
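The page parameter implies a fetch-until-empty loop on the agent side. A sketch, where fetch_faq is a hypothetical stand-in for the actual MCP tool call; the empty-page stop condition is an assumption, since no output schema documents how the last page is signaled:

```python
from typing import Callable

# Hypothetical sketch of the agent-side pagination loop implied by the
# "page" parameter. fetch_faq stands in for the actual MCP call to
# tourradar_b2b-tour-faq; stopping on an empty page is an assumption.
def all_faqs(fetch_faq: Callable[[int, int], list], tour_id: int) -> list:
    results, page = [], 1  # schema default: page starts at 1
    while True:
        batch = fetch_faq(tour_id, page)
        if not batch:
            break
        results.extend(batch)
        page += 1
    return results
```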
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It successfully discloses pagination behavior ('paginated list') and content scope ('covering topics like inclusions, requirements, and policies'), but lacks information on rate limits, error handling, or data freshness.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero waste. It is front-loaded with usage context ('Use this when...') followed by return value description, making it easy for an agent to quickly assess relevance.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simple 2-parameter schema and lack of output schema, the description adequately covers the essential usage trigger and return format. However, it could improve by explicitly describing the required tourId parameter and pagination limits.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50% (page is documented, tourId is not). The description implies the tourId parameter through 'specific tour' but does not explicitly document its semantics, format, or how to obtain it. This meets baseline expectations for partial coverage without full compensation.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a 'paginated list of frequently asked questions and answers about a specific tour' (specific verb+resource). It distinguishes itself from sibling tools like tourradar_b2b-tour-details by explicitly scoping to FAQ content rather than general tour information.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description opens with explicit when-to-use guidance: 'Use this when the user has questions about a tour that might be answered in the FAQ section.' However, it does not explicitly name alternatives (e.g., when to prefer tourradar_b2b-tour-details over this tool) or provide exclusion criteria.

tourradar_b2b-tour-map (Grade: A)

[tourradar] Use this when the user wants to see tour routes on a map or compare itineraries visually.

Fetches tour details for one or more tours and prepares map data with all itinerary locations, coordinates, and day-by-day information for visual display.

Parameters (JSON Schema)
Name | Required | Description
tourIds | Yes | (no description)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the data transformation aspect (preparing map data with day-by-day coordinates) but omits safety profiles (read-only vs destructive), output format details, or rate limiting context.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first sentence front-loads the usage condition, while the second explains the technical behavior. Every word earns its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single-parameter simplicity and lack of output schema, the description adequately covers the tool's specific niche (map visualization). However, with 0% schema coverage, it should ideally explain the tourId format or sourcing to be complete.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate. It references 'one or more tours' which aligns with the tourIds array parameter and minItems: 1 constraint, providing basic semantic context. However, it does not explain what tourIds represent or where to obtain them.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Fetches', 'prepares') and clearly identifies the resource (tour map data with itinerary locations and coordinates). It effectively distinguishes from sibling tourradar_b2b-tour-details by emphasizing 'map', 'visual display', and 'coordinates'.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The first sentence provides explicit when-to-use guidance ('when the user wants to see tour routes on a map or compare itineraries visually'). However, it does not explicitly name the alternative tool (tour-details) for non-visual data retrieval.

tourradar_b2b-tour-types-list (Grade: A)

[tourradar] Use this when you need tour type IDs for filtering tours by category like adventure, cultural, or wildlife.

Returns a hierarchical list of tour type groups and their individual types with IDs and names.

Parameters (JSON Schema)

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It adds valuable behavioral context by specifying the return structure is 'hierarchical' (groups containing individual types). However, it omits other behavioral traits like idempotency, potential data volume, or error conditions that would be expected for a read-only reference tool without annotation coverage.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient two-sentence structure: front-loaded with use case (when you need IDs for filtering) followed by return value specification (hierarchical list). No redundant words; every clause delivers essential disambiguating information.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter lookup tool, the description adequately compensates for the missing output schema by describing the hierarchical return structure and included fields (IDs and names). A 5 would require either an output schema or richer detail about the data format/limits.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, warranting the baseline score of 4. The description appropriately focuses on return value semantics rather than inventing parameter documentation where none exists.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool returns 'tour type IDs for filtering tours by category' with concrete examples (adventure, cultural, wildlife), distinguishing it from sibling tour search tools (tourradar_vertex-tour-search) and other reference lists (cities, countries). The '[tourradar]' prefix identifies the domain.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Use this when...' guidance for obtaining filter IDs, establishing clear context for when an agent needs categorical taxonomy vs. specific tour details. Lacks explicit 'when not to use' or named alternatives, though the functional distinction from search tools is implied.

tourradar_general-current-date (Grade: A)

[tourradar] Use this when you need to know the current date, especially before setting departure date filters.

Returns the current date and time in ISO format. Essential for calculating valid date ranges for tour searches.

Parameters (JSON Schema)

No parameters
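The Behavior note below flags that the timezone of the returned timestamp is undocumented. A minimal sketch of why that matters when deriving departure-date filters — the variable names and the assumption of an ISO 8601 return string are illustrative, not confirmed by the server:

```python
from datetime import datetime, timedelta, timezone

# Stand-in for the tool's return value; assumed ISO 8601, zone unspecified.
# If the server returns local time rather than UTC, "tomorrow" can shift
# by a day for callers near midnight.
current = datetime.now(timezone.utc).isoformat()

# Derive a valid departure-date search window from it.
today = datetime.fromisoformat(current).date()
window_start = today + timedelta(days=1)
window_end = today + timedelta(days=90)
```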

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It specifies the ISO format return type, which is valuable, but omits other behavioral traits like timezone handling (UTC vs local), idempotency, or whether the result is cached.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of three dense sentences with zero redundancy. It front-loads the usage context ('Use this when...') before detailing the return format and operational importance, ensuring every clause provides actionable information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters, no output schema), the description adequately covers the essential context: return format (ISO), use case (date filter calculations), and namespace. A minor gap is the lack of timezone specification, but this is sufficient for agent operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, establishing a baseline of 4. The description appropriately does not invent parameters, maintaining consistency with the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Returns the current date and time' and specifies the '[tourradar]' namespace, distinguishing it from sibling travel search tools like 'tourradar_vertex-tour-search' which perform searches rather than returning temporal reference data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit temporal context for usage ('especially before setting departure date filters' and 'Essential for calculating valid date ranges'), clearly indicating when to invoke it relative to tour search operations, though it does not explicitly name alternative tools to avoid.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tourradar_web-tour-booking (A)

[tourradar] Use this when the user explicitly wants to book a tour and has provided their contact details.

Creates a booking request for a specific tour. Requires real contact details (email, names; phone optional). Always confirm the departure date by checking departures first. Never fabricate or assume user data - always ask for missing information.

Parameters (JSON Schema)
email (required): Customer email address. If email is not given by the customer, please ask him before executing this endpoint. Never generate email by yourself, always ask customer for give the email before calling this tool.
tourId (required): Tour ID
lastName (required)
firstName (required)
paxAdults (required): Number of the adults. Always ask customer for this information, is important to have it
paxChildren (required): Number of the children
phoneNumber (optional): Customer phone number in E-164 format. Is optional, but recommended to provide it. Never put fake phone number, always ask customer for give the phone number before calling this tool. If customer do not gives phone number, just skip this field
departureDate (required): Departure date, from customer selection
agentInformation (required): Agent information about the booking. Describe here what customer is looked for, preferences etc, what you can recognize. Will be helpful for our Customer Support team to understand the customer needs.
extraInformation (required): Customer extra information from the customer or agent. Fill here if customer has provided any additional information about the booking. This is optional and can be empty.
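To make the required surface concrete, here is a hypothetical arguments object for this tool. Every value is an invented placeholder for the sketch; per the description's own rule, a real agent would collect these from the user rather than fabricate them:

```python
# Hypothetical arguments for tourradar_web-tour-booking; all values are
# illustrative placeholders, not real customer data.
args = {
    "email": "traveler@example.com",
    "tourId": "12345",
    "firstName": "Ada",
    "lastName": "Lovelace",
    "paxAdults": 2,
    "paxChildren": 0,
    "phoneNumber": "+41791234567",  # optional; E.164-style
    "departureDate": "2026-06-01",  # confirmed via a departures lookup first
    "agentInformation": "Two adults; prefers small-group cultural tours.",
    "extraInformation": "",
}

# Per the schema, every field except phoneNumber is required.
required = set(args) - {"phoneNumber"}
missing = required - args.keys()
```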
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses data integrity requirements well ('Requires real contact details', 'Never fabricate or assume user data') and states that the tool creates a booking request. However, it fails to disclose operational side effects: whether this triggers payment processing, confirmation emails, or availability holds, or what error states look like. For a mutation tool with no annotations, this is a moderate gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Exceptionally well-structured and front-loaded. Opens with activation condition, follows with core action, then requirements, prerequisite workflow, and data constraints. Four sentences with zero waste—every clause provides distinct value (when to use, what it does, required data, workflow step, validation rule).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 9 required parameters and no output schema/annotations, the description adequately covers input prerequisites (departure date verification, contact collection) and data constraints. However, it completely omits what the tool returns or what happens after invocation (confirmation number? email sent? booking status?). For a complex booking operation, this is a significant gap in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 80% (high), establishing a baseline of 3. The description adds workflow context beyond the schema: 'phone optional' clarifies the one optional parameter among required contact fields, and 'Never fabricate... always ask for missing information' provides validation constraints not captured in JSON schema. Its mention of 'names' compensates slightly for firstName/lastName lacking schema descriptions, but it doesn't add format details or semantics for agentInformation/extraInformation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'Creates a booking request for a specific tour' with specific verb and resource. The prefix '[tourradar]' and the explicit distinction from search operations (via 'when the user explicitly wants to book') effectively differentiates it from sibling search tools like tourradar_vertex-tour-search. However, it could clarify whether this completes a purchase or merely submits an inquiry.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit activation criteria: 'when the user explicitly wants to book a tour and has provided their contact details.' Includes critical prerequisite workflow: 'Always confirm the departure date by checking departures first' (implicitly referencing tourradar_b2b-tour-departures). Lacks explicit naming of alternative tools for searching vs booking, though the intent is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tourradar_web-tour-send-brochure (A)

[tourradar] Use this when the user requests a tour brochure or PDF to be sent to their email.

Sends a tour brochure (PDF file) to the specified email address. Before using this, ensure the user has provided their email address. If not provided, ask the user for it. Never generate or assume email addresses.

Parameters (JSON Schema)
email (required): User email address. If email is not given by the user, please ask him before executing this endpoint. Never generate email by yourself, always ask user for give the email before calling this tool.
tourId (required): Tour ID
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool sends an email with a PDF and emphasizes the critical constraint that email addresses must be user-provided (never assumed). However, it lacks disclosure of idempotency, rate limits, error handling for invalid emails, or whether this action registers the user for marketing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with the trigger condition front-loaded ('Use this when...'). The content is efficient with minimal waste, though slightly redundant between the first sentence ('requests a tour brochure...be sent') and second ('Sends a tour brochure'). Overall, every sentence earns its place by providing distinct guidance on purpose, action, or constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's limited complexity (2 primitive parameters, no output schema), the description is appropriately complete. It covers the essential safety-critical constraint (email verification) that would otherwise be missing. A perfect score would require additional behavioral details like error handling or rate limiting, but the coverage is sufficient for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds valuable usage context beyond the schema by reinforcing the conversational workflow: 'ensure the user has provided their email address' and 'ask him before executing'. This contextualizes the raw parameter requirements into agent behavior guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Sends a tour brochure (PDF file) to the specified email address' using a specific verb and resource. It clearly distinguishes from sibling tools like tourradar_web-tour-booking or tourradar_b2b-tour-details by focusing on brochure/PDF delivery via email rather than booking or retrieving tour metadata.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use ('when the user requests a tour brochure or PDF'), prerequisites ('ensure the user has provided their email address'), and prohibitions ('Never generate or assume email addresses'). It also includes actionable guidance on handling missing data ('If not provided, ask the user for it').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

travel_agent (A)

Ask a travel question in natural language. Routes to the right tools automatically and returns a combined answer. Example: 'Cheapest flights from Zurich to Rome next week, and do I need a visa?'

Parameters (JSON Schema)
query (required)

Output Schema (JSON Schema)
result (required)
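For orientation, a sketch of what a call to this tool might look like as an MCP `tools/call` request over JSON-RPC 2.0 — the framing follows the MCP specification, and the query string reuses the description's own example:

```python
import json

# JSON-RPC 2.0 envelope for an MCP tools/call invocation of travel_agent.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "travel_agent",
        "arguments": {
            "query": "Cheapest flights from Zurich to Rome next week, "
                     "and do I need a visa?"
        },
    },
}
payload = json.dumps(request)
```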
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses the core orchestration behavior ('Routes to the right tools automatically') and aggregation ('returns a combined answer'). It does not mention error handling if no tools match or latency implications of the routing layer.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences followed by a high-value example. The first sentence establishes the natural language interface, the second explains the routing mechanism, and the example immediately clarifies the multi-tool capability without redundant exposition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and the tool's role as an orchestrator across dozens of siblings, the description adequately establishes it as the primary natural language entry point. It could be improved by enumerating the domains covered (flights, hotels, visas, etc.), but the example implies broad coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage (only title 'Query'). The description compensates effectively by specifying the parameter accepts 'natural language' and provides a concrete example showing the expected syntax and complexity level, though it omits constraints like length limits.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool accepts natural language travel questions and automatically routes to appropriate backend tools, distinguishing it from the 40+ specific API siblings (e.g., kiwi_search-flight, visa_check) which require structured inputs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The example query ('Cheapest flights... and do I need a visa?') effectively demonstrates the intended use case for complex, multi-domain questions that would otherwise require multiple tool calls. However, it lacks explicit guidance on when to use specific single-purpose tools instead.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trivago_trivago-search-suggestions (C)

[trivago]

Suggestions are used to provide a list of possible search terms based on the user's query.

Query can be city, country.

You must pick output that are close to the user query.

Example:
Input:
	Query: "Berlin"
	Query: "Germany"
Parameters (JSON Schema)
query (required): The query to search for suggestions. Query must be city, country. if you know geolocation, you can use radius search tool to find accommodations near the location. if query or the location is ambiguous, clarify the query or location by asking the user for more information. When user ask for a query, you must follow these steps. If each step is not successful, try the next step: 1. first try to use query as it is 2. MUST find the city of the query by using internet search, use MUST the city to search for suggestions 3. MUST find the country of the query by using internet search, use MUST the country to search for suggestions
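The three-step fallback buried in the `query` schema (try the raw query, then its city, then its country) can be sketched as follows. The `search`, `resolve_city`, and `resolve_country` callables are hypothetical stand-ins for the suggestion call and the "internet search" steps the schema mandates:

```python
def suggest_with_fallback(query, search, resolve_city, resolve_country):
    """Try the raw query, then its resolved city, then its resolved country."""
    for candidate in (query, resolve_city(query), resolve_country(query)):
        if not candidate:
            continue
        results = search(candidate)
        if results:
            return results
    return []
```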
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal detail. It vaguely states 'You must pick output that are close to the user query' without explaining the matching algorithm, error conditions, or return format (critical given no output schema exists).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively short but contains structural noise like the '[trivago]' prefix and passive phrasing ('are used to'). The sentence 'You must pick output that are close to the user query' is grammatically awkward and vague. However, it avoids excessive length and places examples at the end.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description should explain what types of suggestions are returned (locations, hotels, landmarks?) and how they relate to the accommodation search workflow. It provides neither, leaving significant gaps in contextual understanding for an agent trying to complete a multi-step booking task.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds concrete examples ('Berlin', 'Germany') that illustrate expected input values, but does not add semantic meaning beyond what the detailed schema description already provides regarding the query parameter's purpose or format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool provides 'a list of possible search terms based on the user's query' and specifies 'city, country' as valid query types. However, it fails to distinguish this suggestion/autocomplete tool from sibling tools like 'trivago_trivago-accommodation-search', leaving ambiguity about when to use suggestions versus direct accommodation searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives. While the schema contains logic for handling ambiguous queries (using internet search), the description itself provides no workflow guidance, prerequisites, or comparisons to the other trivago accommodation search tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

visa_check (C)

Check visa requirement for a passport country visiting a destination.

Parameters (JSON Schema)
passport (required)
destination (required)

Output Schema (JSON Schema)
result (required)
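The format ambiguity the reviews flag ('US' vs 'USA' vs 'United States') can be hedged on the caller side. A defensive sketch with an illustrative alias map — neither the map nor the normalization helper is part of the server:

```python
# Illustrative alias table; a real caller would use a full ISO 3166 mapping.
ALIASES = {
    "us": "US", "usa": "US", "united states": "US",
    "ch": "CH", "switzerland": "CH",
}

def normalize_country(value: str) -> str:
    """Best-effort normalization to an ISO 3166-1 alpha-2 code."""
    key = value.strip().lower()
    return ALIASES.get(key, value.strip())

arguments = {
    "passport": normalize_country("United States"),
    "destination": normalize_country("Switzerland"),
}
```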
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Check' implies a read-only operation, the description fails to specify expected input formats, data freshness, rate limits, or whether the tool validates entry requirements for specific travel dates.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single-sentence description is front-loaded with the action verb and contains no redundant or wasteful language. Every word serves a purpose. However, given the complete lack of schema documentation, the description is arguably too brief to be appropriately sized for the complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While an output schema exists (reducing the need to describe return values), the description is incomplete due to critical gaps in input specification. For a visa checking tool, the distinction between 'US', 'USA', and 'United States' is vital, and the absence of format guidance alongside 0% schema coverage leaves the tool under-specified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate but only minimally does so. It identifies that 'passport' refers to a country and 'destination' is a location, but critically omits expected formats (ISO 3166-1 alpha-2 codes, IATA codes, or full country names), which is essential for correct invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Check') and identifies the resources ('visa requirement', 'passport country', 'destination'), making the core function clear. However, it fails to distinguish from the sibling tool 'visa_summary', leaving ambiguity about which visa tool to use when.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus 'visa_summary' or other alternatives. No mention of prerequisites like required country code formats (ISO vs. full names) or when visa checks are unnecessary (e.g., domestic travel).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

visa_summary (B)

Overview of visa-free access for a passport country — counts by category.

Parameters (JSON Schema)
passport (required)

Output Schema (JSON Schema)
result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It adds value by indicating the output structure ('counts by category'), suggesting categorized aggregate data. However, it omits details about data freshness, caching, or whether the counts include visa-on-arrival vs visa-free distinctions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no redundant words. It front-loads the core purpose (overview of visa-free access) and qualifies it (counts by category). However, given the lack of schema descriptions and annotations, this brevity may be insufficient for correct invocation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with an output schema, the description adequately covers the core function. However, it falls short of complete guidance due to ambiguous parameter format requirements and failure to distinguish usage from 'visa_check,' which could lead to incorrect tool selection by an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage (parameter title only). The description adds semantic meaning by referring to 'passport country,' clarifying that the input represents the issuing country. However, it fails to specify the expected format (e.g., ISO 3166-1 alpha-3 code vs. full country name), which is critical given the complete lack of schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides an 'Overview of visa-free access' with 'counts by category,' specifying the resource and aggregation type. However, it lacks explicit differentiation from the sibling 'visa_check' tool, which likely handles specific visa requirement queries rather than aggregate counts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the sibling 'visa_check' tool. It does not specify use cases (e.g., 'use for aggregate statistics' vs 'use for specific destination checks') or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
