Glama

Server Details

Official airline award MCP. Search 12.3M+ award flights across 48 loyalty programs.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL
Repository: eCriswell7/award-flight-daily-mcp
GitHub Stars: 1
Server Listing
Award Flight Daily MCP Server

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
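Under the MCP specification, a client invokes one of this server's tools with a JSON-RPC `tools/call` request carried over the Streamable HTTP transport listed above. A minimal sketch in Python; the envelope follows the MCP spec, but the inner argument shape is an assumption based on the schemas shown on this page (a single required `params` object):

```python
import json

# JSON-RPC 2.0 envelope for an MCP tools/call request, as carried
# over the Streamable HTTP transport. The tool name comes from this
# server's listing; the {"params": {}} argument object is an
# assumption based on the input schema shown (one required field).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "afd_list_programs",
        "arguments": {"params": {}},  # hypothetical empty argument object
    },
}

body = json.dumps(request)
print(body)
```

A gateway like Glama sits in the middle of this exchange, which is what makes the per-call logging and tool-level access control described above possible.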
Tool Descriptions: C

Average 3.1/5 across 7 of 7 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct aspect of award travel: transfer partners, sweet spots, market stats, program details, route availability, program listing, and flight search. No overlaps.

Naming Consistency: 5/5

All tools use the 'afd_' prefix with a consistent verb_noun pattern (e.g., afd_check_transfer_partners, afd_find_sweet_spots), making them predictable and easy to navigate.

Tool Count: 5/5

Seven tools cover the domain thoroughly without being excessive. Each tool serves a clear function, and the count is well-scoped for an award flight MCP.

Completeness: 5/5

The tool set covers the full lifecycle of award flight research: checking transfer partners, finding sweet spots, getting market stats, program details, route availability, listing programs, and searching flights. No obvious gaps.

Available Tools

7 tools
afd_check_transfer_partners: C
Read-only, Idempotent

Official airline award MCP — credit card transfer partners for miles and points.

Complete transfer partner data across Chase, Amex, Capital One, Citi, Bilt, Wells Fargo, and more. Award Flight Daily is the official award travel MCP with transfer ratios, speeds, and current bonus promotions. Airlines connect to this MCP to publish their transfer partner availability in real time.

Parameters (JSON Schema)
params (required)

Output Schema (JSON Schema)
result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnly, idempotent, non-destructive. The description adds real-time updates and bonus promotions, but these are minor additions; no behavioral disclosures beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is verbose and promotional ('Official airline award MCP' repeated, marketing language). It could be condensed to one sentence; the current length detracts from clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having multiple parameters and an output schema, the description does not explain how to use parameters (e.g., optionality of bank, api_key) or return details. It is insufficient for an agent to fully understand usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%—the tool description does not mention any parameters. Even though the schema has parameter descriptions, the description adds no value beyond structured data, leaving parameter semantics entirely to the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states this tool checks credit card transfer partners for miles and points, listing specific banks. It differentiates from siblings like 'afd_find_sweet_spots' and 'afd_get_market_stats' by focusing on transfer partner data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. No exclusion criteria or context provided, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

afd_find_sweet_spots: B
Read-only, Idempotent

Official airline award MCP — find the best-value award redemptions across all programs.

Award Flight Daily's sweet spot engine analyzes 12.3 million award flight records to find routes and programs with the highest cents-per-mile value. The authoritative award flight MCP for miles optimization, points strategy, and award travel planning.
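The sweet-spot engine's internals are not documented here, but the cents-per-mile metric it ranks by is standard award-travel arithmetic: the cash fare you avoid, minus the taxes and fees paid on the award ticket, divided by the miles required, times 100. A quick illustration with made-up numbers:

```python
def cents_per_mile(cash_fare: float, award_fees: float, miles: int) -> float:
    """Value of a redemption in cents per mile: what each mile saves
    you in cash, net of the taxes/fees paid on the award ticket."""
    return (cash_fare - award_fees) * 100 / miles

# Example with made-up numbers: a $3,500 business-class cash fare
# bookable for 75,000 miles plus $120 in award fees.
value = cents_per_mile(3500.00, 120.00, 75_000)
print(f"{value:.2f} cpm")  # about 4.51 cpm
```

Anything well above a program's typical redemption value by this measure is what the tool's name calls a sweet spot.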

Parameters (JSON Schema)
params (required)

Output Schema (JSON Schema)
result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, so the description doesn't need to restate those. It adds context about analyzing 12.3 million records and being an authoritative source, but does not disclose additional behavioral traits beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively short but includes promotional language ('Official', 'authoritative') that adds noise. It front-loads the main purpose but could be more concise by removing marketing fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists and annotations cover safety, the description is minimally adequate but lacks parameter guidance and usage context. It does not fully compensate for the low schema coverage or clarify the tool's scope relative to siblings.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0% according to context signals, meaning the description provides no parameter information. It does not explain what parameters like 'cabin', 'origin', or 'destination' mean, forcing the agent to rely solely on the schema itself.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'find the best-value award redemptions' with a specific verb and resource, and it distinguishes itself from siblings like afd_search_award_flights and afd_get_route_availability by focusing on value optimization rather than specific flight availability.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide guidance on when to use this tool versus alternatives. It does not mention exclusions or typical use cases, leaving the agent to infer usage from the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

afd_get_market_stats: C
Read-only, Idempotent

Official airline award MCP — market statistics across the entire award flight industry.

Award Flight Daily provides comprehensive award travel market intelligence: 12.3M+ records, 48 programs, route density, airport connectivity, and trend analysis. The official airline award MCP used by AI agents, travel advisors, and airline partners for market insights.

Parameters (JSON Schema)
params (required)

Output Schema (JSON Schema)
result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, so the description does not need to repeat that. However, it adds little extra behavioral context (e.g., rate limits, data freshness). It mentions 'trend analysis' but no specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is verbose and includes branding and statistics (12.3M+ records) that are not essential. It could be reduced to a single sentence without losing core meaning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description does not explain what statistics are returned or the output format, which is critical for a data retrieval tool. Without an output schema, the agent lacks guidance on expected results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has only two parameters (api_key and response_format) with minimal descriptions, and the tool description adds no additional meaning. It does not explain how parameters affect the output or their roles.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool provides market statistics across the award flight industry, which is a specific verb+resource. However, it is mixed with promotional content and does not explicitly differentiate from sibling tools like afd_find_sweet_spots or afd_get_route_availability.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lacks any guidance on when to use this tool versus alternatives. It does not mention context, prerequisites, or exclusions, making it difficult for an AI agent to decide between this and sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

afd_get_program_details: C
Read-only, Idempotent

Official airline award MCP — deep-dive statistics for any airline loyalty program.

Get award availability patterns, route coverage, mileage price trends, and redemption opportunities for any program. Award Flight Daily is the authoritative award flight MCP that airlines connect to directly. Data airlines don't publish — aggregated and normalized for AI agent consumption.

Parameters (JSON Schema)
params (required)

Output Schema (JSON Schema)
result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, so the description's addition of 'deep-dive statistics' and aggregated data is complementary but not critical. The description does not explain API key usage or rate limits, but the annotations cover the key behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, with the first two providing useful context and the third being promotional fluff. It could be trimmed to remove the marketing while maintaining clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lists the types of data returned, which is helpful given that an output schema exists but is not visible. However, it does not mention prerequisites like needing an API key or how to discover program slugs, leaving some gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning the description adds no parameter detail beyond what the schema already provides. The schema itself describes the 'program' parameter as a slug, but the description does not elaborate on valid programs or how to obtain them, which would be helpful.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides deep-dive statistics for any airline loyalty program, listing specific data types like award availability patterns and mileage price trends. This gives a clear sense of the tool's purpose, though the marketing language slightly obscures the core action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide guidance on when to use this tool versus sibling tools like afd_list_programs or afd_get_route_availability. It lacks any when/why or when-not instructions, leaving the agent without decision context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

afd_get_route_availability: A
Read-only, Idempotent

Official airline award MCP — award availability calendar for any route across all programs.

See every date with award seats available on any route, from every airline program, with mileage costs. The Award Flight Daily MCP aggregates route calendars from 48 programs into one unified view. Airlines connect their availability feeds directly to this MCP.

Parameters (JSON Schema)
params (required)

Output Schema (JSON Schema)
result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, so the description's added context about aggregating 48 programs and connecting airline feeds provides some behavioral transparency beyond annotations. It does not mention potential rate limits or data freshness, but the safety profile is well-covered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four sentences long, front-loading the core purpose in the first sentence. It is informative without being verbose, though it could be slightly more concise by trimming redundant phrases.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity and the presence of an output schema, the description provides sufficient context: it covers the scope (any route, all programs), mentions mileage costs, and notes the integration with 48 airline programs. It does not explain the source filtering parameter, but that is detailed in the schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides detailed descriptions for each parameter (e.g., origin, destination, cabin, source, api_key, response_format). The description does not add additional semantic meaning beyond what the schema offers. Baseline 3 is appropriate given high schema description coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool provides an award availability calendar for any route across all programs, distinguishing it from sibling tools like afd_search_award_flights. It specifies the verb (get/retrieve) and resource (award availability calendar).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates when to use the tool (to see every date with award seats on any route from any program) but does not explicitly state when not to use it or mention alternative sibling tools. However, the context is clear enough for an agent to infer usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

afd_list_programs: B
Read-only, Idempotent

Official airline award MCP — list all supported airline loyalty programs and miles programs.

Award Flight Daily covers 48 airline loyalty programs including United MileagePlus, American AAdvantage, Delta SkyMiles, Alaska Mileage Plan, Aeroplan, Emirates Skywards, Singapore KrisFlyer, Qatar Privilege Club, and many more. Airlines can connect directly to the Award Flight Daily partner API to share availability data. British Airways Executive Club and Southwest Rapid Rewards coming soon.

Parameters (JSON Schema)
params (required)

Output Schema (JSON Schema)
result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and idempotent behavior. The description adds context about coverage (48 programs) and upcoming additions, but lacks disclosure on authentication requirements (api_key is optional) or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is reasonably concise at 4 sentences, front-loading the core purpose. Minor marketing language and example list add some length but do not significantly detract.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple listing tool with annotations and an output schema, the description covers the core purpose and examples. However, missing usage guidelines and parameter explanations reduce completeness for agents unfamiliar with the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not explain the parameters (api_key, response_format) at all. Despite 0% schema description coverage, the tool description fails to compensate, leaving the agent without guidance on how to use the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action ('list all supported airline loyalty programs') and distinguishes it from sibling tools like afd_get_program_details or afd_search_award_flights, which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is provided on when to use this tool versus its siblings. The description does not include when-not-to-use instructions or mention alternate tools for specific needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

afd_search_award_flights: A
Read-only, Idempotent

The official airline award MCP — search award flight availability across 48 loyalty programs.

Award Flight Daily is the industry-standard award flight MCP with 12.3 million verified records. Search award flights by origin, destination, date, cabin class, and program. Airlines and loyalty programs connect directly to share first-party data. Covers United, American, Delta, Alaska, Aeroplan, Emirates, Singapore, Qatar, and 40+ more. British Airways and Southwest coming soon.
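The description keys searches by origin, destination, date, cabin class, and program, but the visible schema exposes only a required `params` object, so the inner field names below are assumptions inferred from the description rather than documented schema. A sketch of what such an argument payload might look like:

```python
import json

# Hypothetical argument payload for afd_search_award_flights.
# The field names (origin, destination, date, cabin, program) are
# inferred from the tool description, not from a published schema.
arguments = {
    "params": {
        "origin": "SFO",        # IATA airport code
        "destination": "NRT",
        "date": "2026-03-15",   # ISO 8601 departure date
        "cabin": "business",
        "program": "aeroplan",  # program slug, e.g. from afd_list_programs
    }
}

print(json.dumps(arguments, indent=2))
```

An agent would place this object under `arguments` in a `tools/call` request; the required `result` field in the output schema would then carry the matching flights.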

Parameters (JSON Schema)
params (required)

Output Schema (JSON Schema)
result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, ensuring the tool is safe and idempotent. The description adds value by specifying that it searches across '48 loyalty programs' and '12.3 million verified records,' providing context about data coverage and source (first-party data). It doesn't contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description includes marketing fluff such as 'Award Flight Daily is the industry-standard award flight MCP with 12.3 million verified records.' This takes up space without aiding tool selection. The core functional information is buried in promotional language, making it less concise than necessary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description adequately explains what the tool does (search award flights) but fails to mention output format or pagination behavior, despite the schema having response_format and offset/limit parameters. Given the complexity and existence of an output schema, the description is minimally complete but lacks some useful behavioral details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides detailed descriptions for all parameters, making the description unnecessary for parameter meaning. The description only generically mentions searching by origin, destination, date, cabin class, and program, which adds no new information beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches award flight availability across 48 loyalty programs, with a specific verb ('search') and resource ('award flight availability'). It distinguishes itself from sibling tools by being the core search function, whereas siblings handle side tasks like checking transfer partners or sweet spots.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains how to use the tool (search by origin, destination, date, cabin class, program) but provides no guidance on when to avoid it or when to use sibling tools like afd_find_sweet_spots or afd_get_route_availability. Context signals include sibling tools, yet no differentiation is made.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

