Award Travel Finder
Server Details
Search award flight availability, points pricing, and status matches from your AI assistant.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP (see the connection sketch below)
- URL:
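The server is exposed over Streamable HTTP, so any MCP-capable client can reach it directly or through the gateway described below. The following is a minimal sketch assuming the official MCP Python SDK (the `mcp` package); the endpoint URL is a placeholder, since the real URL is not reproduced here.

```python
# Minimal sketch: connect to a Streamable HTTP MCP server and list its tools.
# Assumes the official MCP Python SDK ("mcp" package). SERVER_URL is a
# placeholder -- substitute the real endpoint (or your Glama gateway URL).
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder


async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name)


if __name__ == "__main__":
    asyncio.run(main())
```

The later examples on this page show only the argument dictionaries that would be passed to `session.call_tool(tool_name, arguments)`.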
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 22 of 22 tools scored. Lowest: 3.4/5.
Most tools target distinct actions (e.g., search vs. pricing vs. booking tracking). There is minor overlap between get_pricing and get_program_rates, but their inputs differ (route vs. program), so agents can typically disambiguate.
Tools consistently use a verb_noun pattern (add_, get_, search_, etc.). Some names are slightly awkward (e.g., discover_more_flight_tools, get_buy_points_pricing), and there is a mix of singular and plural nouns, but the overall pattern is clear.
22 tools is on the higher side for a single domain, bordering on heavy. While each tool seems justified, the count suggests the server may be trying to cover too many sub-domains (flights, hotels, points, promos) at once.
The tool set covers core award travel workflows: search, pricing, booking tracking, and points management. However, it lacks flight price monitoring (only hotel monitoring exists) and point transfer capabilities, which are notable gaps for the domain.
Available Tools
22 tools
add_flight_booking
Log an award flight booking/redemption. Use this to track points spent on flights. Works great with Gmail — extract booking details from confirmation emails and add them here. Paid feature.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | Additional notes about the booking | |
| origin | Yes | Departure airport IATA code (e.g. 'LHR') | |
| airline | Yes | Airline name (e.g. 'British Airways', 'Qatar Airways') | |
| program | Yes | Loyalty program used (e.g. 'british airways', 'qatar airways', 'aeroplan') | |
| passengers | No | Number of passengers (default: 1) | |
| taxes_paid | No | Taxes/fees paid in cash (USD) | |
| cabin_class | Yes | Cabin class | |
| destination | Yes | Arrival airport IATA code (e.g. 'JFK') | |
| return_date | No | Return date for round trips (YYYY-MM-DD) | |
| points_spent | Yes | Total points/miles used for this booking | |
| flight_number | No | Flight number (e.g. 'BA115') | |
| departure_date | Yes | Departure date (YYYY-MM-DD) | |
| taxes_currency | No | Currency of taxes paid (default: USD) | |
| confirmation_number | No | Booking confirmation/PNR |
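As a concrete illustration of the schema above, here is a hypothetical argument dictionary for add_flight_booking. Every value is made up, the cabin_class value is an assumption since the schema does not enumerate allowed options, and optional fields may be dropped.

```python
# Hypothetical arguments for add_flight_booking; all values are illustrative.
# Pass as: await session.call_tool("add_flight_booking", arguments)
arguments = {
    "origin": "LHR",                  # departure airport IATA code
    "destination": "JFK",             # arrival airport IATA code
    "airline": "British Airways",
    "program": "british airways",     # loyalty program used for the redemption
    "cabin_class": "business",        # assumed value; schema does not list options
    "departure_date": "2025-10-02",   # YYYY-MM-DD
    "points_spent": 100000,           # total points/miles for the booking
    "taxes_paid": 350.00,             # optional: taxes/fees paid in cash
    "taxes_currency": "USD",          # optional: defaults to USD
    "passengers": 1,                  # optional: defaults to 1
    "flight_number": "BA115",         # optional
    "confirmation_number": "ABC123",  # optional PNR
}
```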
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It mentions the 'Paid feature' constraint and the action of logging bookings, but lacks details on validation, duplicate handling, or side effects. More behavioral disclosure is needed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences with a clear front-loaded purpose. It is concise, though the Gmail sentence could be considered promotional. No wasted words, but structure is only slightly above average.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 14 parameters and no output schema, the description is incomplete. It does not explain return values, error conditions, or what happens after a successful booking. The 'Paid feature' note helps but overall guidance is sparse.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with each parameter described. The description adds no additional parameter context beyond the schema, so it meets the baseline but does not enhance understanding of parameter relationships or usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Log an award flight booking/redemption' to track points spent on flights. It uses specific verbs ('log', 'add') and distinguishes from sibling tools like delete or list by focusing on creation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description suggests a use case with Gmail but does not provide explicit guidance on when to use this tool versus alternatives (e.g., update_flight_booking). No when-not-to-use or comparison to siblings is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_flight_booking
Cancel/remove a tracked flight booking.
| Name | Required | Description | Default |
|---|---|---|---|
| booking_id | Yes | Flight booking ID to cancel |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure. It only hints at destructive behavior ('Cancel/remove') but omits details about irreversibility, authorization needs, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, clear sentence that is front-loaded with the action and resource. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple delete tool with one parameter and no output schema, the description is minimally adequate but could be improved by noting that deletion is permanent or that certain permissions are required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 100% of parameters and describes booking_id adequately. The description adds no additional meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb-resource pair ('Cancel/remove a tracked flight booking') and clearly distinguishes from sibling tools like add_flight_booking, get_flight_booking, and update_flight_booking by stating the action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a booking needs to be removed, but it does not provide explicit guidance on when to use this tool versus alternatives, nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_more_flight_tools (read-only)
Discover other flight & travel MCP servers you can add to your client. Lists complementary remote MCPs covering aircraft seatmaps, airport delays/wait times, and lounges — with one-line install URLs. Call this when the user asks about seat selection, airport delays/security waits, baggage rankings, lounges, or 'what other flight tools are there?'
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark the tool as read-only (readOnlyHint=true). The description adds context about returning install URLs and coverage areas, but does not contradict or elaborate significantly beyond that.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, front-loaded purpose, no wasted words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless discovery tool with readOnly annotations, the description fully covers purpose, usage context, and output nature (lists of MCPs with install URLs). No output schema needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, and schema description coverage is 100%. The description adds no parameter details, but none are needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool discovers other flight & travel MCP servers, with specific coverage areas (seatmaps, delays, lounges). This verb+resource pair distinguishes it from siblings like booking or pricing tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists example user queries that should trigger this tool (seat selection, airport delays, etc.) and includes the catch-all 'what other flight tools are there?'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_buy_points_pricing (read-only)
Get the cost to buy points/miles for a loyalty program. Returns tiered base purchase pricing and any active bonus promotion. Use to answer 'how much does it cost to buy X Avios/miles/points?' If no program specified, returns all programs with pricing data. Free — no account needed.
| Name | Required | Description | Default |
|---|---|---|---|
| program | No | Program slug (e.g. british-airways, american-airlines, marriott-bonvoy). Omit to list all available programs. | |
| quantity | No | Number of points/miles to price (e.g. 50000). If omitted, returns all pricing tiers for the program. |
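Two hedged example calls based on the parameter descriptions above: one pricing a specific quantity for a single program, one omitting both parameters to list every program with pricing data. The quantity is illustrative.

```python
# Price 50,000 points for one program (slug format follows the schema examples).
price_one_program = {"program": "british-airways", "quantity": 50000}

# Omit both parameters to list pricing for all available programs,
# per the tool description above.
price_all_programs = {}
```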
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint=true, but the description adds value by stating 'Free — no account needed.' This is behavioral information not present in annotations. The description also discloses that it returns 'tiered base purchase pricing and any active bonus promotion,' giving the agent insight into the response structure. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is five short sentences, each earning its place: purpose, return details, usage guidance, default behavior, and access conditions. Front-loaded with the main action. No fluff or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description could explain the return format in more detail (e.g., a list of objects). However, it covers the essential: tiered pricing and bonus promotions. The free/no-account note adds completeness. For a simple data-retrieval tool with clear inputs, this is nearly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both parameters have descriptions). The description adds value with concrete examples like 'british-airways' and clarifies that omitting quantity returns all tiers. While the schema already covers the logic, the description enriches it with real-world examples and usage hints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('cost to buy points/miles for a loyalty program'). It distinguishes from sibling tools like 'get_pricing' and 'get_program_rates' by specifying 'buy points/miles' and 'loyalty program'. The use of 'How much does it cost to buy X Avios/miles/points?' anchors the purpose precisely.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use to answer...' providing a clear usage context. It also gives guidance on when to omit parameters ('If no program specified, returns all programs') and highlights that it is free with no account needed. This sets clear expectations and alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_flight_booking (read-only)
Get details of a specific flight booking/redemption.
| Name | Required | Description | Default |
|---|---|---|---|
| booking_id | Yes | Flight booking ID from add_flight_booking or list_flight_bookings |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, consistent with 'Get details'. Description adds no extra insight beyond safe read behavior; no mention of auth, rate limits, or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, short sentence with no extraneous words. Front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with one parameter and no output schema, the description is adequate, though it could mention that the output contains full booking details; a minor completeness gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with booking_id description referencing source IDs. Description adds no additional parameter meaning, so baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'Get details' and resource 'specific flight booking/redemption'. Distinguishes from siblings like list_flight_bookings and add_flight_booking by focusing on a single booking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use versus alternatives. It's implied for retrieving a single booking, but lacks exclusions or context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_hotel_availability (read-only)
Get current points availability for a hotel on specific dates. Use BEFORE monitor_hotel_price to find the correct rate_plan and current points rate. Paid feature.
| Name | Required | Description | Default |
|---|---|---|---|
| hotel_code | Yes | Hotel property code from search_hotels (e.g. 'PPTBNCI', 'BOBXR') | |
| check_in_date | Yes | Check-in date (YYYY-MM-DD) | |
| check_out_date | Yes | Check-out date (YYYY-MM-DD) |
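An illustrative argument dictionary, reusing one of the example hotel codes from the schema; the dates are hypothetical.

```python
# Hypothetical arguments for get_hotel_availability.
arguments = {
    "hotel_code": "PPTBNCI",         # hotel property code from search_hotels
    "check_in_date": "2025-11-20",   # YYYY-MM-DD
    "check_out_date": "2025-11-25",  # YYYY-MM-DD
}
```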
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true. The description adds the 'Paid feature' disclosure, which is valuable beyond annotations. However, it does not elaborate on other behavioral traits like request limits or idempotency. With annotations covering the safety profile, a 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loading the core action and including usage guidance and cost implication. Every sentence serves a purpose with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description could explain the return format but does not. However, for a simple availability check, the context provided (paid feature, role as a prerequisite to monitor_hotel_price) is sufficient for an agent to use it correctly. The annotations and schema cover the rest.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters. The description does not add parameter-specific meaning beyond what is in the schema, maintaining the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'current points availability' for a hotel on specific dates, using specific verbs and resources. It differentiates from siblings by positioning itself as a prerequisite for monitor_hotel_price.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use BEFORE monitor_hotel_price' and identifies itself as a 'Paid feature', providing clear context on when to use and an alternative tool. Does not specify when not to use, but the single-purpose design implies limited scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_hotel_booking (read-only)
Get details and full price history of a monitored hotel booking.
| Name | Required | Description | Default |
|---|---|---|---|
| booking_id | Yes | Booking ID from monitor_hotel_price or list_hotel_bookings |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true. The description adds that it returns 'full price history', but does not disclose error handling, auth requirements, or other behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, well-structured sentence with no fluff. Front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with one parameter and readOnlyHint, the description adequately covers the purpose and output (details and price history). However, it could mention that the return includes pricing trends if relevant.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage and already describes the parameter's origin. The description adds no additional meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'details and full price history of a monitored hotel booking', distinguishing it from sibling tools like get_flight_booking or list_hotel_bookings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when a booking_id is available, but does not provide explicit when-to-use or when-not-to-use guidance, nor does it mention alternatives like list_hotel_bookings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_portfolio (read-only)
Get a complete summary of your travel rewards portfolio. Includes points balances, total points spent on hotels and flights, upcoming trips, destinations visited, and monthly spending trends. Paid feature.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so no contradiction. The description adds value by disclosing it is a 'Paid feature' and listing included data types (spending, trips, trends), which go beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences. First sentence states action and scope, second adds details and restrictions. No wasted words, front-loaded with key purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, no output schema, and low complexity, the description adequately covers purpose, contents, and the paid restriction. Could optionally mention data freshness or aggregation behavior, but not required for basic usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters with 100% schema coverage. The description adds no parameter info, but none is needed. A baseline of 4 is appropriate, as the description does not need to compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get a complete summary of your travel rewards portfolio' with specific items listed (points balances, spending, trips, destinations, trends). This distinguishes it from sibling tools like list_points_balances, get_flight_booking, etc., by emphasizing comprehensiveness.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for an overview via 'complete summary' and notes 'Paid feature,' but does not explicitly state when to choose this over alternatives (e.g., using list_points_balances for just balances). No when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pricing (Get Award Pricing Chart, read-only)
Get the award pricing chart for a specific airline route. Shows points required per cabin class (off-peak/peak). No date needed. Paid feature.
| Name | Required | Description | Default |
|---|---|---|---|
| airline | Yes | Airline to check pricing for | |
| arrival_code | Yes | Arrival airport IATA code (e.g. JFK) | |
| departure_code | Yes | Departure airport IATA code (e.g. LHR) |
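An illustrative argument dictionary for get_pricing; the airline and route are hypothetical, and no travel date is needed per the description above.

```python
# Hypothetical arguments for get_pricing (award chart for one airline route).
arguments = {
    "airline": "British Airways",  # illustrative airline name
    "departure_code": "LHR",       # departure airport IATA code
    "arrival_code": "JFK",         # arrival airport IATA code
}
```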
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the readOnlyHint annotation, the description adds that it shows points per cabin class (off-peak/peak), requires no date, and is a paid feature. This fully discloses behavior and cost implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no redundancy. Information is efficiently front-loaded and essential.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description adequately covers the tool's purpose and output (points per cabin class) but lacks details on response format or data ranges. Given no output schema, a slightly more detailed example could improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are well-documented. The description adds no new param-specific details beyond the schema, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves an award pricing chart for a specific airline route, specifying it shows points per cabin class with off-peak/peak distinction. This differentiates it from sibling tools like get_program_rates or get_buy_points_pricing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'No date needed' indicating usage for static pricing, and 'Paid feature' warns of cost. While it doesn't list alternatives, the context is clear enough for an agent to decide when to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_program_rates (read-only)
Get the full award chart rates for a specific loyalty program. Returns all destinations with points required per cabin class including off-peak/peak pricing. Free — no account needed.
| Name | Required | Description | Default |
|---|---|---|---|
| program | Yes | Loyalty program slug (e.g. british-airways, emirates, aeroplan) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The readOnlyHint annotation already indicates read-only behavior. The description adds 'Free — no account needed,' which provides additional transparency about access requirements. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences, each adding value: purpose, output details, and access info. No redundancy, front-loaded with the main action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool (one parameter, no output schema), the description fully explains what the tool returns and any special conditions (free, no account). It provides sufficient context for an agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with parameter description, so baseline is 3. The description adds context that the parameter is a 'loyalty program' but does not add significant new information beyond the schema's enum and description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns 'full award chart rates for a specific loyalty program' and specifies the output includes destinations, points per cabin class, and peak/off-peak pricing. This distinguishes it from sibling tools like get_pricing and get_buy_points_pricing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining award chart rates and mentions it's free with no account needed, but it does not explicitly contrast with sibling tools or provide when-not-to-use guidance. The context is clear enough but not explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_status_matches (read-only)
Get current airline status match offers. Returns active promotions where you can match elite status from one airline to another. Includes direct apply links. Each program also has a detailed FAQ page at awardtravelfinder.com/status-match/{program-slug} with eligibility, requirements, and step-by-step instructions. Free — no account needed.
| Name | Required | Description | Default |
|---|---|---|---|
| alliance | No | Filter by airline alliance |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the readOnlyHint annotation, description adds that the tool returns active promotions with direct apply links, is free, and directs to FAQ pages. Consistent with read-only behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each carrying key information: purpose, content, additional resources, and access conditions. No fluff, well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description fully conveys what the tool returns (active promotions, apply links) and provides additional resources (FAQ). Sufficient for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has one optional parameter with enum (alliance), 100% coverage. Description does not add extra meaning but is consistent. Baseline score of 3 appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Get current airline status match offers' with specifics about returns, apply links, and free access. Distinct from siblings which are about flights, hotels, and bookings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context: returns active promotions for status matching, includes apply links, no account needed. Implicitly tells when to use it, but lacks explicit when-not or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_flight_bookings (read-only)
List all tracked flight award bookings/redemptions. Shows airlines, routes, points spent, and cabin classes.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only ('readOnlyHint: true'). Description adds what fields are shown (airlines, routes, points, cabin classes), which is useful context beyond the annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no extraneous information. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description fully covers what the tool does and what it returns. Sibling tools provide sufficient context for differentiation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so description need not add parameter info. Baseline 4 applies as schema coverage is 100% with zero params.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'List all tracked flight award bookings/redemptions', specifying the resource and action. Differentiates from siblings like get_flight_booking (single) and add/delete (mutations).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for getting an overview of all bookings, but does not explicitly mention when not to use or alternatives. Clear enough for a simple list-all tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_hotel_bookings (read-only)
List all hotel bookings being monitored for price drops. Shows current vs. original points rates.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already mark readOnlyHint=true, so the description correctly reinforces that this is a read operation. It adds transparency by explaining what the output contains (current vs. original points rates), which is useful beyond the annotation. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading the action and purpose. Every word adds value, with no redundancy or wasted space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no parameters and a clear output described, the description is complete. It covers what the tool does, what it returns, and the context of use. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are 0 parameters and the schema coverage is 100%. Per guidelines, 0 parameters gives a baseline of 4. The description does not need to add parameter info, but it does not detract from clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List', the resource 'hotel bookings', and the specific context 'being monitored for price drops'. It also indicates what is shown ('current vs. original points rates'), making the purpose highly specific and distinguishable from siblings like list_flight_bookings and get_hotel_booking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for viewing hotel bookings monitored for price drops, which provides context relative to siblings like monitor_hotel_price or get_hotel_booking. However, it does not explicitly state when to use this tool versus alternatives or specify any prerequisites or limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_points_balances (read-only)
List all your loyalty program points/miles balances.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark the tool as read-only. The description adds minimal behavioral context (implies all balances are returned), but does not disclose details like completeness or performance. With annotations present, the contribution is acceptable but not exceptional.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no unnecessary words. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter, read-only list tool, the description adequately conveys the functionality. No output schema exists, but the return value is implied (balances list). Slightly more detail (e.g., 'for all enrolled programs') could be added, but current version is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist in the schema, so the baseline is 4 as per rules. The description does not need to add parameter information; the purpose is clear without it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action ('List') and the resource ('all your loyalty program points/miles balances'), with enough specificity to distinguish it from sibling tools like update_points_balance or get_portfolio.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit guidance on when to use this tool versus alternatives, such as get_portfolio or search tools. However, given the simplicity (zero parameters, read-only), the context is self-evident.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
monitor_hotel_price
Start monitoring a hotel points booking for price drops. Checks every 12 hours and sends email alerts when the points rate decreases. Paid feature.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | Additional notes | |
| rate_plan | Yes | Rate plan from get_hotel_availability (must match exactly) | |
| room_type | Yes | Room type from get_hotel_availability | |
| hotel_code | Yes | Hotel property code from search_hotels | |
| check_in_date | Yes | Check-in date (YYYY-MM-DD) | |
| check_out_date | Yes | Check-out date (YYYY-MM-DD) | |
| original_points | Yes | TOTAL points cost (per-night rate x nights) | |
| confirmation_number | No | Booking confirmation number |
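Because original_points must be the total cost (per-night rate times nights) and rate_plan / room_type must exactly match values returned by get_hotel_availability, a worked example helps. The sketch below uses hypothetical values throughout; the rate plan and room type names are invented for illustration.

```python
# Hypothetical arguments for monitor_hotel_price; values are illustrative.
nights = 5
points_per_night = 70000

arguments = {
    "hotel_code": "PPTBNCI",              # from search_hotels
    "rate_plan": "STANDARD_REDEMPTION",   # hypothetical; must match get_hotel_availability exactly
    "room_type": "King Overwater Villa",  # hypothetical; from get_hotel_availability
    "check_in_date": "2025-11-20",
    "check_out_date": "2025-11-25",
    "original_points": points_per_night * nights,  # TOTAL points: per-night rate x nights = 350000
    "confirmation_number": "HX7K2M",      # optional
}
```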
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses key behaviors: checks every 12 hours, sends email alerts, and is a paid feature. However, it does not disclose whether the tool is read-only or has any side effects, nor does it clarify what triggers the alert (e.g., any decrease or threshold).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: three sentences covering purpose, frequency, alert mechanism, and cost. Every sentence adds value, and the key action is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having no output schema, the description does not mention what the tool returns upon success (e.g., confirmation or monitoring ID). For a monitoring tool with 8 parameters, this omission leaves the agent unclear about expected outcomes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description adds no additional meaning beyond the schema's detailed parameter descriptions (e.g., 'Hotel property code from search_hotels'). No param info in description itself.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb ('monitoring') and resource ('hotel points booking'), and includes additional details about frequency and alerts. It effectively distinguishes this monitoring tool from sibling tools like 'get_hotel_availability' or 'list_hotel_bookings'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use the tool (to track price drops on a points booking) but does not explicitly state when not to use it or suggest alternatives. The 'Paid feature' note adds a constraint, but no exclusion guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_all_airlines (Search All Airlines, read-only)
Search award flight availability across ALL supported airlines for a route and date. Searches British Airways, Cathay Pacific, Virgin Atlantic, Alaska Airlines, and American Airlines in parallel. Returns combined results grouped by airline. This is the recommended starting point — use single-airline search only if you need a specific airline. Free tier: 10 economy searches/month per IP. Premium cabins (business/first) require a paid plan.
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Travel date (YYYY-MM-DD) | |
| cabin | No | Cabin class (default: economy). Anonymous callers may only search economy. | |
| arrival_code | Yes | Arrival airport IATA code (e.g. JFK) | |
| departure_code | Yes | Departure airport IATA code (e.g. LHR) |
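An illustrative argument dictionary for search_all_airlines; the route and date are hypothetical, and the cabin stays at economy to fit the free-tier restriction described above.

```python
# Hypothetical arguments for search_all_airlines.
arguments = {
    "departure_code": "LHR",  # departure airport IATA code
    "arrival_code": "JFK",    # arrival airport IATA code
    "date": "2025-10-02",     # YYYY-MM-DD
    "cabin": "economy",       # optional; anonymous callers may only search economy
}
```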
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations confirm read-only, and description adds rate limits (10 economy searches/month per IP) and premium cabin requirements, which are behavioral traits beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Six sentences, no filler. Front-loaded with the main purpose, then supporting details. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description mentions combined results grouped by airline. Adequate for a search tool. Could mention pagination or result limits, but not necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All parameters are described in schema (100% coverage), but description adds useful context like default cabin class and anonymous caller restrictions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it searches award flight availability across all supported airlines for a route and date, listing specific airlines and stating results are grouped by airline. Distinguishes from other search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly marks it as the recommended starting point and advises to use single-airline search only for specific needs. Also notes free tier limitations and cabin class restrictions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_availability (Search Award Flight Availability, read-only)
Search award flight availability for a SPECIFIC airline, route, and date. Supported airlines: British Airways (BA), Qatar Airways (QR), Cathay Pacific (CX), Virgin Atlantic (VS), Iberia (IB), Alaska Airlines (AS), American Airlines (AA), Qantas (QF), JetBlue (B6), Frontier (F9), Etihad (EY). Free tier: 10 economy searches/month per IP. Premium cabins require a paid plan.
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Travel date (YYYY-MM-DD) | |
| cabin | No | Cabin class for free-tier gating (default: economy) | |
| airline | Yes | Airline to search | |
| arrival_code | Yes | Arrival airport IATA code (e.g. JFK) | |
| departure_code | Yes | Departure airport IATA code (e.g. LHR) |
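A hypothetical payload for a single-airline search, following the schema above; the airline is one of the supported carriers listed in the description.

```python
# Hypothetical arguments for search_availability (single-airline search).
arguments = {
    "airline": "Qatar Airways",
    "departure_code": "LHR",
    "arrival_code": "JFK",
    "date": "2025-10-02",  # YYYY-MM-DD
    "cabin": "economy",    # premium cabins require a paid plan
}
```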
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, and the description adds context that it searches award availability (not cash), along with usage limits (10 economy searches/month per IP) and premium cabin gating. This adequately informs the agent of behavioral constraints beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with purpose, followed by supported airlines and limitations. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with full schema coverage and no output schema, the description covers purpose, limitations, and supported airlines. It lacks details on output format, but this is acceptable without output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description repeats parameter details (specific airline, route, date) but does not add new semantic meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for award flight availability for a specific airline, route, and date. It lists supported airlines, distinguishing it from sibling tools like 'search_all_airlines' which likely search across multiple airlines.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for specific airline searches and mentions free-tier limits and premium cabin requirements. However, it does not explicitly contrast with alternatives like 'search_all_airlines' or 'search_monthly_availability'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_hotels (read-only)
Search for hotels by name, city, or brand. Returns hotel codes needed for availability checks and price monitoring. Supports Marriott, Hilton, and IHG. Paid feature.
| Name | Required | Description | Default |
|---|---|---|---|
| brand | No | Filter by hotel brand | |
| limit | No | Max results (default 10) | |
| query | Yes | Hotel name or city (e.g. 'Conrad Bora Bora', 'JW Marriott Mumbai') |
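An illustrative payload for search_hotels, using one of the example queries from the schema; the brand filter value is an assumption, since the schema does not list allowed values.

```python
# Hypothetical arguments for search_hotels.
arguments = {
    "query": "Conrad Bora Bora",  # hotel name or city
    "brand": "Hilton",            # assumed filter value
    "limit": 5,                   # optional; default 10
}
```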
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses paid feature, supported brands, and output purpose. No contradiction with readOnlyHint annotation. Could mention rate limits or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four concise sentences, front-loaded with action and output. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description explains return value (hotel codes). Covers key aspects: search criteria, output use, supported brands, paid nature. Could specify output format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema descriptions cover all 3 parameters fully (100% coverage). The description adds no new parameter information beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly describes searching hotels by name/city/brand, returning hotel codes for later use. Distinguishes from sibling tools like get_hotel_availability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States that output hotel codes are needed for availability and price checks, implying sequential usage. Mentions paid feature as a cost warning. Lacks explicit when-not-to-use or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_hybrid: Search Hybrid Cash+Award Flights (A, Read-only)
Find the cheapest way to fly by combining cash tickets with award redemptions into one split-ticket journey. Searches cash fares (Google Flights) and award availability across airlines, then combines the best cash leg with the best award leg via connecting hubs. Best for premium cabins (business/first) on long-haul routes. Paid feature.
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Travel date (YYYY-MM-DD) | |
| cabin | No | Target cabin class (default: business) | business |
| origin | Yes | Origin airport IATA code (e.g. LHR) | |
| destination | Yes | Destination airport IATA code (e.g. JFK) | |
| points_value_cents | No | How you value points in cents/pence per point (default: 1.5) |
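As a sketch, the arguments an agent might pass, reusing the example airport codes from the table above; the date is illustrative:

# Hypothetical arguments for search_hybrid; values are illustrative.
search_hybrid_args = {
    "origin": "LHR",            # IATA code
    "destination": "JFK",       # IATA code
    "date": "2026-03-15",       # YYYY-MM-DD
    "cabin": "business",        # optional, defaults to business
    "points_value_cents": 1.5,  # optional, defaults to 1.5
}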
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, and the description discloses behavioral traits such as searching cash fares and award availability, combining legs via connecting hubs, and being a paid feature. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is 4 sentences, each serving a purpose: stating what it does, how it works, when to use it, and cost. No redundant or unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not explain return values or pagination. It covers the core functionality and usage context well, but could be more complete by mentioning result format or limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All 5 parameters have descriptions in the input schema (100% coverage). The tool description adds little beyond the schema (the concept of a split-ticket journey) and does not elaborate on parameter values or formats.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool combines cash and award flights into a split-ticket journey, specifying the verb 'find the cheapest way' and the resource 'cash+award flights'. It distinguishes itself from siblings like search_availability and search_all_airlines by highlighting the hybrid nature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use: 'Best for premium cabins (business/first) on long-haul routes' and notes it's a 'Paid feature'. However, it does not explicitly state when not to use it or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_monthly_availability: Search Monthly Award Availability (A, Read-only)
Search award flight availability for an entire month. Returns day-by-day availability with points costs. Renders as an interactive rate calendar. Paid feature.
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Month to search (YYYY-MM or YYYY-MM-DD) | |
| airline | Yes | Airline to search | |
| arrival_code | Yes | Arrival airport IATA code (e.g. JFK) | |
| departure_code | Yes | Departure airport IATA code (e.g. LHR) |
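As a sketch, a month-wide search might be invoked with arguments like these; the airline string is an assumption, since accepted airline values are not enumerated in the listing:

# Hypothetical arguments for search_monthly_availability; values are illustrative.
monthly_availability_args = {
    "airline": "british airways",  # assumed value; accepted airline strings not documented here
    "departure_code": "LHR",       # IATA code
    "arrival_code": "JFK",         # IATA code
    "date": "2026-03",             # YYYY-MM (a full YYYY-MM-DD is also accepted)
}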
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, and description adds valuable behavioral traits: returns day-by-day availability with points costs, renders as interactive calendar, and is a paid feature. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences: purpose, output details, and paid feature. Every sentence adds value, and the first sentence front-loads the purpose. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so description adequately explains return values (day-by-day availability, points costs, interactive calendar) and notes paid nature. Could mention any additional constraints like supported routes, but not necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are already well-documented. Description does not add new semantics beyond the schema requirements (e.g., date format, airline enum).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool searches award flight availability for an entire month, with specific verb 'Search', resource 'award flight availability', and scope 'entire month'. It distinguishes from siblings like search_availability which likely handles single-day searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Paid feature' signals a usage constraint. The description implies use for monthly planning rather than single-day, but does not explicitly list alternatives or when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_flight_booking (B)
Update a tracked flight booking. Use to correct details or add confirmation numbers.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | | |
| origin | No | | |
| airline | No | | |
| program | No | | |
| booking_id | Yes | Flight booking ID | |
| taxes_paid | No | | |
| cabin_class | No | | |
| destination | No | | |
| return_date | No | | |
| points_spent | No | | |
| flight_number | No | | |
| departure_date | No | | |
| confirmation_number | No | | |
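Because only booking_id carries a schema description, a correction call would presumably look like the sketch below; the booking ID format and the assumption that omitted fields are left unchanged are not confirmed by the listing:

# Hypothetical arguments for update_flight_booking; assumes omitted fields remain unchanged.
update_flight_booking_args = {
    "booking_id": "bk_123",           # required; ID format is illustrative
    "confirmation_number": "ABC123",  # illustrative PNR being added to the booking
    "notes": "Added confirmation number from the airline email.",
}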
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It states only 'update', implying mutation, but does not disclose permissions, side effects, or behavior when the booking does not exist. Minimal behavioral context is added.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two front-loaded sentences. However, it could include more essential details without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 13 parameters, no output schema, and no annotations, the description lacks details on validation, partial updates, return values, and error handling. It is incomplete for a complex tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 8% (only booking_id has a description). The description adds no extra meaning for the 12 other parameters, failing to compensate for the low coverage. It does not explain patterns, enums, or how parameters relate to the update action.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Update' and resource 'tracked flight booking', with specific use cases 'correct details or add confirmation numbers'. It effectively distinguishes from sibling tools like add_flight_booking and delete_flight_booking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for corrections and adding confirmation numbers, providing clear context. It does not explicitly state when not to use the tool or mention alternatives, but given the sibling tools, the context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_points_balance (A)
Update your points/miles balance for a loyalty program. Creates the program if it doesn't exist, or updates the balance if it does. Use with Gmail to extract balance notifications and keep your portfolio current. Paid feature.
| Name | Required | Description | Default |
|---|---|---|---|
| balance | Yes | Current points/miles balance | |
| program | Yes | Loyalty program name (e.g. 'british airways', 'marriott bonvoy', 'hilton honors', 'american express') |
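A minimal sketch of a call, using a program name from the schema's examples; the balance figure is illustrative:

# Hypothetical arguments for update_points_balance; values are illustrative.
update_points_balance_args = {
    "program": "british airways",  # example value from the schema
    "balance": 152000,             # current points/miles balance
}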
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses the create-or-update side effect and notes it is a paid feature. However, it omits details like required permissions, reversibility, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences front-load the core action, explain behavior, and provide practical guidance. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given two simple parameters, no output schema, and no annotations, the description adequately covers purpose, behavior, and usage context. It could mention the response format or success indication, but is otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description adds no new parameter-specific details beyond the schema, maintaining the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Update') and resource ('points/miles balance for a loyalty program'), clearly distinguishing it from siblings like list_points_balances and get_portfolio. It also clarifies the create-or-update behavior.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage context ('Use with Gmail to extract balance notifications'), implying when to apply. However, it does not explicitly state when not to use it or mention alternative tools for read-only operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.