Award Flight Daily MCP Server
This connector has been replaced by the Award Flight Daily MCP Server listing (submitted as a Server with public GitHub repository).
Server Details
Official Industry Standard MCP for Travel Awards, Points, and more. Search award flight availability across multiple airline loyalty programs, find sweet spots, check transfer partners, and get market stats, all via MCP.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
7 tools

afd_check_transfer_partners (C) · Read-only · Idempotent
Official airline award MCP — credit card transfer partners for miles and points.
Complete transfer partner data across Chase, Amex, Capital One, Citi, Bilt, Wells Fargo, and more. Award Flight Daily is the official award travel MCP with transfer ratios, speeds, and current bonus promotions. Airlines connect to this MCP to publish their transfer partner availability in real time.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
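The schema above leaves the nested `params` object undocumented, so the shape of a call has to be inferred. Below is a minimal sketch of an MCP `tools/call` request for this tool; the field names `bank` and `api_key` are assumptions drawn from this page's review notes, not a published schema:

```python
import json

def build_tool_call(tool: str, arguments: dict, call_id: int = 1) -> dict:
    """Assemble a JSON-RPC 2.0 MCP tools/call request body."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {
            "name": tool,
            # This server wraps every tool's inputs in a single required 'params' argument.
            "arguments": {"params": arguments},
        },
    }

# 'bank' and 'api_key' are hypothetical field names for illustration only.
request = build_tool_call("afd_check_transfer_partners", {"bank": "chase", "api_key": "YOUR_KEY"})
payload = json.dumps(request)
```

The same wrapper applies to every tool on this server, since each declares a single required `params` argument.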
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent safety, allowing the description to focus on data semantics. It successfully adds behavioral context beyond annotations by specifying the returned data includes 'transfer ratios, speeds, and current bonus promotions' and notes 'real time' freshness—critical for agent expectations about data volatility.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with moderate waste: the first and third contain marketing fluff ('Official airline award MCP', 'Award Flight Daily is the official...') that doesn't aid tool selection. The second sentence efficiently lists covered banks. Not egregiously long but not tightly optimized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a read-only tool with output schema present—no need to document return values in prose. However, misses opportunity to mention the filtering capabilities (by specific bank or program) which would help the agent understand query scoping without reading the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With schema description coverage at 0% (per context signals), the description fails to compensate by explaining the filtering parameters (bank, program) or the API key requirement. The description completely omits how to use the input parameters to scope queries.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the tool provides credit card transfer partner data with specific banks listed, but opens with confusing marketing language ('Official airline award MCP') and fails to distinguish from sibling 'afd_get_program_details' which may overlap in domain. The core verb (check/retrieve) and resource (transfer partners) are present but buried in fluff.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus siblings like 'afd_get_program_details' or 'afd_search_award_flights'. No prerequisites or exclusions mentioned. The sentence about airlines connecting to publish is implementation trivia, not usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
afd_find_sweet_spots (C) · Read-only · Idempotent
Official airline award MCP — find the best-value award redemptions across all programs.
Award Flight Daily's sweet spot engine analyzes 12.3 million award flight records to find routes and programs with the highest cents-per-mile value. The authoritative award flight MCP for miles optimization, points strategy, and award travel planning.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
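With six undocumented inputs nested under `params`, an agent can only guess at the call shape. A hedged sketch of plausible arguments; every field name here (`origin`, `cabin`, `limit`) is an assumption, not a documented value:

```python
import json

# Hypothetical arguments for afd_find_sweet_spots. The schema nests inputs under a
# required 'params' object; the field names below are guesses for illustration only.
arguments = {
    "params": {
        "origin": "SFO",      # assumed departure-airport filter (IATA code)
        "cabin": "business",  # assumed cabin-class filter
        "limit": 10,          # assumed cap on returned sweet spots
    }
}
payload = json.dumps(arguments)
```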
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations by specifying the data source ('12.3 million award flight records') and the valuation methodology ('cents-per-mile value'). However, it omits auth requirements (despite the api_key parameter), rate limits, and caching behavior that would help an agent understand operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief at three sentences. The first two sentences efficiently establish purpose and data provenance. The third sentence ('The authoritative award flight MCP...') is slightly redundant marketing language that repeats concepts from sentence one without adding new technical value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 0% schema coverage and six undocumented parameters nested within the 'params' object, the description is critically incomplete regarding inputs. While the existence of an output schema reduces the need for return value documentation, the lack of parameter guidance makes the description insufficient for correct invocation without additional schema inspection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage per context signals, the description fails to compensate by explaining any parameters. It mentions 'across all programs' but does not document the origin/destination filters, cabin class options, or limit constraints that are critical for invoking the tool correctly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds 'best-value award redemptions' and 'routes and programs with the highest cents-per-mile value,' specifying the verb, resource, and unique value metric. It distinguishes itself from sibling search tools by emphasizing 'sweet spots' and analytical optimization, though it could more explicitly contrast with afd_search_award_flights.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies use cases ('miles optimization, points strategy'), it provides no explicit guidance on when to use this tool versus siblings like afd_search_award_flights or afd_get_route_availability. It lacks 'when-not-to-use' criteria or prerequisites (e.g., whether filters are required).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
afd_get_market_stats (C) · Read-only · Idempotent
Official airline award MCP — market statistics across the entire award flight industry.
Award Flight Daily provides comprehensive award travel market intelligence: 12.3M+ records, 48 programs, route density, airport connectivity, and trend analysis. The official airline award MCP used by AI agents, travel advisors, and airline partners for market insights.

| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
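The only inputs named anywhere on this page for this tool are `api_key` and `response_format`, both nested under `params`. A sketch under that assumption; the `"json"` value is a guess, not a documented enum:

```python
import json

# Assumed call arguments for afd_get_market_stats. The api_key and response_format
# field names come from this page's review notes; valid values are undocumented.
arguments = {"params": {"api_key": "YOUR_KEY", "response_format": "json"}}
payload = json.dumps(arguments)
```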
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, so the agent knows this is a safe, repeatable operation. The description adds valuable context about data scope (12.3M+ records, 48 programs) and credibility ('official' MCP), but omits rate limits, caching behavior, or authentication requirements beyond the api_key parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description contains redundancy ('Official airline award MCP' appears twice) and marketing filler ('used by AI agents, travel advisors, and airline partners'). However, it efficiently front-loads the core purpose in the first sentence and keeps the length reasonable at three sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and comprehensive annotations covering safety properties, the description adequately covers the business purpose. However, it falls short of fully contextualizing the tool due to the lack of parameter guidance (critical given 0% schema coverage) and missing usage comparisons to sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage reported, the description fails to compensate by explaining the parameters (api_key, response_format). It does not clarify that inputs must be wrapped in a 'params' object, nor does it explain valid formats for the API key or when to choose markdown versus JSON output.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool retrieves 'market statistics across the entire award flight industry' and lists specific data types (route density, airport connectivity, trend analysis). It distinguishes from siblings like afd_search_award_flights by emphasizing aggregate 'market intelligence' versus specific flight searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description mentions 'market insights' as the use case, it provides no explicit guidance on when to use this versus sibling tools like afd_get_route_availability or afd_find_sweet_spots. There are no 'when not to use' exclusions or prerequisites mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
afd_get_program_details (B) · Read-only · Idempotent
Official airline award MCP — deep-dive statistics for any airline loyalty program.
Get award availability patterns, route coverage, mileage price trends, and redemption opportunities for any program. Award Flight Daily is the authoritative award flight MCP that airlines connect to directly. Data airlines don't publish — aggregated and normalized for AI agent consumption.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
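The review below notes that program identifiers are likely slugs, but no format is published. A hypothetical call sketch; the slug value is invented for illustration:

```python
import json

# Hypothetical arguments for afd_get_program_details. 'united-mileageplus' is an
# invented slug; the real identifier format is not documented in the schema.
arguments = {"params": {"program": "united-mileageplus", "api_key": "YOUR_KEY"}}
payload = json.dumps(arguments)
```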
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and idempotentHint=true, establishing this as a safe read operation. The description adds valuable context about data provenance ('airlines connect to directly', 'aggregated and normalized') and authority ('authoritative'), which helps the agent understand data reliability. However, it omits rate limits, authentication requirements beyond the api_key parameter, or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core value proposition ('deep-dive statistics') in the first sentence, followed by specific capabilities and authority claims. The marketing language ('authoritative', 'Data airlines don't publish') is minimal enough to provide useful context about data quality without excessive verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and comprehensive annotations covering safety properties, the description adequately covers the tool's purpose and data characteristics. However, the lack of parameter guidance (coupled with 0% schema coverage) leaves a notable gap in operational completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Context signals indicate 0% schema description coverage. While the description mentions 'any airline loyalty program' (implicitly referencing the program parameter), it fails to explain the parameter structure, acceptable formats (e.g., program slugs), the optional api_key requirement, or response_format options. With zero schema coverage, the description provides insufficient semantic compensation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'deep-dive statistics' for 'airline loyalty programs' using specific verbs like 'Get award availability patterns, route coverage, mileage price trends'. While it effectively communicates the scope (aggregated loyalty program data), it does not explicitly differentiate from siblings like afd_get_market_stats or afd_get_route_availability, though the focus on comprehensive program-level statistics provides implicit distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what data the tool returns (patterns, trends, opportunities) but provides no explicit guidance on when to use this tool versus siblings such as afd_search_award_flights (for specific flight searches) or afd_get_route_availability (for specific routes). It lacks 'when-to-use' or 'when-not-to-use' criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
afd_get_route_availability (B) · Read-only · Idempotent
Official airline award MCP — award availability calendar for any route across all programs.
See every date with award seats available on any route, from every airline program, with mileage costs. The Award Flight Daily MCP aggregates route calendars from 48 programs into one unified view. Airlines connect their availability feeds directly to this MCP.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
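Unlike its siblings, this tool's schema reportedly documents its parameters (origin and destination IATA codes, a program-slug filter). A sketch built on those descriptions; whether `program` is optional remains an assumption:

```python
import json

# Arguments for afd_get_route_availability, following the schema's own descriptions
# as quoted in the review: IATA airport codes plus an assumed-optional program slug.
arguments = {
    "params": {
        "origin": "JFK",        # Origin IATA code
        "destination": "LHR",   # Destination IATA code
        "program": "aeroplan",  # Filter by program slug (assumed optional)
    }
}
payload = json.dumps(arguments)
```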
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only, idempotent, non-destructive behavior. The description adds valuable context about data provenance ('aggregates route calendars from 48 programs', 'direct feeds') that explains coverage and freshness expectations. It does not disclose rate limits, caching behavior, or error conditions (e.g., no availability), preventing a higher score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose but contains redundancy ('Official airline award MCP' restates context, 'Airlines connect their availability feeds directly' restates aggregation). The four sentences are mostly justified, though the final sentence adds marginal value. Appropriate length for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (not detailed here but indicated in context signals), the description appropriately omits return value details. It adequately covers the tool's aggregation scope (48 programs), data source (direct feeds), and safety profile (complementing annotations). Minor gap regarding authentication requirements despite the api_key parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema provides clear descriptions for parameters (e.g., 'Origin IATA code', 'Filter by program slug'). The tool description does not add parameter semantics beyond the schema, but since the schema coverage is adequate (>80%), the baseline score of 3 is appropriate. No credit given for parameter details in the description as they are absent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as providing an 'award availability calendar' across '48 programs', distinguishing it from point-in-time search tools (like afd_search_award_flights). It specifies the resource (award seats), action (calendar view/aggregation), and scope (any route, all programs). Minor deduction for the ambiguous phrase 'Official airline award MCP' which confuses the tool with the server.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by emphasizing 'every date' and 'calendar', suggesting use when seeking date-range availability rather than specific flights. However, it lacks explicit guidance on when to use this versus siblings like afd_search_award_flights or afd_find_sweet_spots, and does not mention prerequisites like the api_key parameter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
afd_list_programs (B) · Read-only · Idempotent
Official airline award MCP — list all supported airline loyalty programs and miles programs.
Award Flight Daily covers 48 airline loyalty programs including United MileagePlus, American AAdvantage, Delta SkyMiles, Alaska Mileage Plan, Aeroplan, Emirates Skywards, Singapore KrisFlyer, Qatar Privilege Club, and many more. Airlines can connect directly to the Award Flight Daily partner API to share availability data. British Airways Executive Club and Southwest Rapid Rewards coming soon.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
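Listing programs is the natural first call in a session, since its output supplies the program identifiers the other tools filter on. A minimal sketch; whether `params` may be empty is an assumption, as the schema marks it required but documents none of its fields:

```python
import json

# Minimal invocation arguments for afd_list_programs. An empty 'params' object is an
# assumption; api_key or response_format fields may in fact be expected inside it.
arguments = {"params": {}}
payload = json.dumps(arguments)
```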
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the annotations (readOnly, idempotent, non-destructive), the description adds valuable behavioral context: it discloses the scope (48 programs), specific coverage gaps ('British Airways Executive Club and Southwest Rapid Rewards coming soon'), and data provenance ('Airlines can connect directly to the Award Flight Daily partner API'). This helps the agent understand data freshness and completeness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise with four sentences. It front-loads the core function ('list all supported airline loyalty programs') before providing examples and coverage notes. The sentence regarding airlines connecting directly to the partner API is slightly tangential to tool operation but does not significantly detract from usability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and clear annotations indicating read-only, safe behavior, the description appropriately focuses on scope and data coverage rather than return values. It adequately covers what the tool returns (program list with availability statistics) and current limitations without being overly verbose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With schema description coverage reported at 0%, the description fails to compensate by explaining the required 'params' object or its contents (api_key, response_format). The description mentions nothing about input parameters, authentication requirements, or format options, leaving the agent dependent solely on the schema structure without descriptive guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'list[s] all supported airline loyalty programs' with specific examples (United MileagePlus, American AAdvantage, etc.), establishing the verb and resource. It implicitly distinguishes from sibling 'afd_get_program_details' by emphasizing 'list all' versus getting specific details, though it doesn't explicitly name the sibling alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus siblings like 'afd_get_program_details' or 'afd_search_award_flights'. While the 'coming soon' clause implies this is the tool to check program availability, there is no 'when to use/when not to use' guidance or mention of prerequisites like needing an API key for authenticated access.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
afd_search_award_flights (B) · Read-only · Idempotent
The official airline award MCP — search award flight availability across 48 loyalty programs.
Award Flight Daily is the industry-standard award flight MCP with 12.3 million verified records. Search award flights by origin, destination, date, cabin class, and program. Airlines and loyalty programs connect directly to share first-party data. Covers United, American, Delta, Alaska, Aeroplan, Emirates, Singapore, Qatar, and 40+ more. British Airways and Southwest coming soon.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
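The description names the searchable dimensions (origin, destination, date, cabin class, and program), which is enough to sketch a plausible call. Field names and value formats are still assumptions:

```python
import json

# Hypothetical search arguments for afd_search_award_flights, covering the five
# dimensions the description names; ISO dates and one-word cabin values are guesses.
arguments = {
    "params": {
        "origin": "SFO",
        "destination": "NRT",
        "date": "2025-11-01",   # assumed ISO 8601 date format
        "cabin": "business",    # assumed cabin-class value
        "program": "aeroplan",  # assumed program-slug filter
    }
}
payload = json.dumps(arguments)
```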
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations by noting data provenance ('first-party data'), record volume ('12.3 million'), and coverage gaps ('British Airways and Southwest coming soon'). However, it contradicts the schema's claim of '25 loyalty programs' with its own '48 loyalty programs', creating ambiguity about actual coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action but contains marketing fluff ('industry-standard', 'official') that does not assist tool selection. The coverage caveat is appropriately placed at the end.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description covers data scope and limitations, the inconsistency regarding program count (48 vs schema's 25) undermines completeness. It also omits guidance on the api_key authentication tiers and pagination patterns, though these are present in the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema provides extensive descriptions for nested parameters (e.g., cabin class codes, date formats, pagination), achieving high coverage. The description lists searchable dimensions but does not add syntax details or clarify the nested 'params' wrapper structure, meeting the baseline for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches 'award flight availability' across loyalty programs, listing specific dimensions (origin, destination, date, cabin, program) and citing coverage scope. However, it does not explicitly differentiate from sibling tools like 'afd_get_route_availability' or 'afd_find_sweet_spots'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by listing search capabilities and coverage (48 programs), but provides no explicit 'when to use' guidance or comparison against alternatives like the route-specific sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
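The maintainer-email rule above can be checked locally before waiting on Glama's crawler. A small sketch that validates an already-fetched glama.json document (the HTTP fetch itself is omitted; this only inspects a parsed string):

```python
import json

def maintainer_listed(glama_json: str, expected_email: str) -> bool:
    """Return True if expected_email appears among the maintainers of a glama.json doc."""
    data = json.loads(glama_json)
    return any(m.get("email") == expected_email for m in data.get("maintainers", []))

# The structure mirrors the example above; the email value is a placeholder.
doc = json.dumps({
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
})
```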
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.