Glama

flightoracle

Server Details

Flight Intelligence MCP — search, cheapest dates, multi-city, airline compare via Google Flights

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: ToolOracle/flightoracle
GitHub Stars: 0
Server Listing
FlightOracle

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama MCP Gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: B)

Average 3.1/5 across 8 of 8 tools scored.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes, but flight_search and one_way_search overlap significantly, as one_way_search is essentially a subset of flight_search's functionality. This could cause confusion for agents trying to choose between them. Other tools like cheapest_flights, price_calendar, and route_compare serve clearly different roles in flight pricing analysis.

Naming Consistency: 4/5

Tool names follow a consistent snake_case pattern throughout, which is good. However, there is a mix of verb_noun (e.g., flight_search) and noun-only (e.g., price_calendar) naming styles, which slightly reduces predictability. The naming is still readable and mostly coherent, with only minor deviations from a strict convention.

Tool Count: 5/5

With 8 tools, the count is well-scoped for a flight search and pricing server. Each tool appears to serve a specific purpose in the domain, such as searching flights, comparing routes, or checking prices, without feeling overly bloated or sparse. This number allows for comprehensive coverage while remaining manageable for agents.

Completeness: 4/5

The toolset covers core flight search and pricing functionalities well, including one-way, round-trip, multi-city searches, price comparisons, and calendars. A minor gap is the lack of tools for booking or managing reservations, but this is reasonable if the server focuses on search and insights. The health_check tool also ensures operational awareness, adding to the completeness.

Available Tools

8 tools
airport_hub (Grade: C)

Find airport IATA codes for a city. Returns nearby airports and common codes.

Parameters (JSON Schema)
- city (optional): City name (e.g., 'London', 'New York', 'Tokyo')
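With a single optional parameter, the call shape is simple. As a hedged sketch, an MCP tools/call request for this tool might be assembled like this (only the tool and argument names come from the listing; the helper and request id are illustrative):

```python
import json

# Build an MCP tools/call request body as a JSON string.
# Only the tool name and arguments below come from this listing;
# the helper itself is an illustrative sketch, not this server's client.
def build_tool_call(name: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,  # placeholder request id
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

payload = build_tool_call("airport_hub", {"city": "London"})
```

Posted to the server's Streamable HTTP endpoint, a request like this should return nearby airports and common codes per the description above.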
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions what the tool returns ('nearby airports and common codes'), it lacks critical details such as whether this is a read-only operation, potential rate limits, error conditions, or how 'nearby' is defined. The description is insufficient for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences that directly state the purpose and return value. It's front-loaded with the main function. However, the second sentence could be more integrated, and there's slight room for improvement in flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose and return scope but lacks details on behavioral traits, usage context, and output structure, which are needed for full completeness in the absence of annotations and output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'city' parameter clearly documented. The description adds no additional parameter semantics beyond what the schema provides, such as format examples or edge cases. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find airport IATA codes for a city' specifies the verb (find) and resource (airport IATA codes), and 'Returns nearby airports and common codes' adds useful detail about the scope of results. However, it doesn't explicitly differentiate this from sibling tools like 'flight_search' or 'route_compare' which might also involve airports.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'cheapest_flights', 'flight_search', and 'route_compare' available, there's no indication whether this tool is for lookup purposes versus actual flight operations, or any prerequisites for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cheapest_flights (Grade: B)

Find the cheapest flights across a full month. Returns price-sorted options with dates, airlines, and price insights.

Parameters (JSON Schema)
- month (optional): Month to search, YYYY-MM (e.g., 2026-06)
- arrival (optional): Arrival airport IATA code
- country (optional): Country code (default: us)
- currency (optional): Currency code (default: USD)
- departure (optional): Departure airport IATA code
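Since the month must follow the YYYY-MM format the schema documents, a client can validate it before calling. A minimal sketch (the function name is hypothetical; the keys and defaults mirror the schema above):

```python
import re

# Assemble cheapest_flights arguments, enforcing the documented
# YYYY-MM month format (e.g., 2026-06) before any network call.
# The function name is hypothetical; keys and defaults mirror the schema.
def cheapest_flights_args(month, departure, arrival, country="us", currency="USD"):
    if not re.fullmatch(r"\d{4}-(0[1-9]|1[0-2])", month):
        raise ValueError(f"month must be YYYY-MM, got {month!r}")
    return {"month": month, "departure": departure, "arrival": arrival,
            "country": country, "currency": currency}

args = cheapest_flights_args("2026-06", "JFK", "LHR")
```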
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool returns 'price-sorted options' but doesn't specify behavioral traits like rate limits, authentication requirements, error handling, or whether it's a read-only operation. The description implies it's a query tool but lacks details on performance, data freshness, or any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded with essential information in just two sentences. The first sentence states the core functionality, and the second describes the return format. There's no wasted language, and every word contributes to understanding the tool's purpose and output.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose and return format but lacks details about behavioral aspects, usage context, and deeper parameter semantics. Without annotations or output schema, the description should do more to compensate, but it only meets the minimum viable threshold.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so all parameters are documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain parameter interactions or provide examples). The baseline score of 3 is appropriate since the schema does the heavy lifting, though the description could have added context about how parameters like 'country' and 'currency' affect results.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find the cheapest flights across a full month' with specific details about what it returns ('price-sorted options with dates, airlines, and price insights'). It distinguishes itself from siblings like 'flight_search' or 'one_way_search' by emphasizing month-wide search and price sorting. However, it doesn't explicitly differentiate from 'price_calendar' which might offer similar functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose it over siblings like 'flight_search' (which might offer different filtering) or 'price_calendar' (which could be similar). There's no information about prerequisites, constraints, or typical use cases beyond the basic functionality stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

health_check (Grade: C)

Server status, API connectivity, supported features.

Parameters (JSON Schema)

No parameters.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions checking 'server status, API connectivity, supported features,' which suggests a read-only, diagnostic operation, but fails to detail response format, error conditions, rate limits, or authentication needs. For a tool with zero annotation coverage, this is insufficient behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, with three key phrases separated by commas that make it easy to scan. It could be slightly more structured, using complete sentences or clarifying how the listed aspects relate, but it efficiently conveys the core purpose without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks details on return values, error handling, or integration with sibling tools. For a health check tool, more context on expected outputs or usage patterns would enhance completeness, but it meets the minimum viable threshold.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, aligning with the schema. A baseline score of 4 is applied as it correctly avoids redundant information, though it doesn't add value beyond the schema in this dimension.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Server status, API connectivity, supported features' states what the tool does at a high level but lacks a specific verb and doesn't distinguish from siblings. It indicates the tool checks system health aspects rather than performing flight-related operations like its siblings, but the purpose remains somewhat vague without explicit action verbs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is provided on when to use this tool versus alternatives. The description implies usage for monitoring or diagnostic contexts, but it doesn't specify prerequisites, exclusions, or compare it to other tools. This leaves the agent without clear direction on appropriate invocation scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

multi_city (Grade: A)

Price a multi-city route with 2+ legs. Returns per-leg pricing and total cost range.

Parameters (JSON Schema)
- legs (optional): List of legs: [{departure, arrival, date}, ...] (min 2)
- country (optional): Country (default: us)
- currency (optional): Currency (default: USD)
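The legs parameter carries the structure the schema hints at: a list of at least two {departure, arrival, date} objects. A hedged validation sketch (the helper is illustrative, not part of this server; the shape comes from the schema above):

```python
# Validate the legs structure multi_city expects per the schema above:
# a list of at least 2 objects, each with departure, arrival, and date.
# The helper is an illustrative sketch, not part of this server.
def validate_legs(legs):
    if len(legs) < 2:
        raise ValueError("multi_city requires at least 2 legs")
    for leg in legs:
        missing = {"departure", "arrival", "date"} - set(leg)
        if missing:
            raise ValueError(f"leg missing keys: {sorted(missing)}")
    return legs

legs = validate_legs([
    {"departure": "JFK", "arrival": "LHR", "date": "2026-06-01"},
    {"departure": "LHR", "arrival": "NRT", "date": "2026-06-08"},
])
```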
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the return format ('per-leg pricing and total cost range'), which is useful, but lacks details on behavioral traits such as rate limits, error handling, authentication needs, or whether it's a read-only operation. For a pricing tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and adds output details in the second. Both sentences earn their place by providing essential information without waste, making it appropriately sized and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a multi-leg pricing tool with no annotations and no output schema, the description is minimally adequate. It covers the purpose and output format but lacks details on behavioral aspects and deeper context. With 100% schema coverage, it meets basic needs but could be more complete for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (legs, country, currency) with descriptions. The description adds no additional parameter semantics beyond what the schema provides, such as format details for legs or default behaviors. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Price a multi-city route') and resource ('route with 2+ legs'), distinguishing it from siblings like 'one_way_search' or 'cheapest_flights' by specifying the multi-leg requirement. It also mentions the output ('Returns per-leg pricing and total cost range'), making the purpose explicit and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating '2+ legs', which suggests when to use this tool (for multi-city routes) versus alternatives like 'one_way_search' for single legs. However, it does not explicitly name alternatives or provide exclusions, leaving some ambiguity in when-not-to-use scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

price_calendar (Grade: C)

Weekly price calendar for a route. Find the cheapest week to fly.

Parameters (JSON Schema)
- weeks (optional): Number of weeks to scan (1-8, default: 4)
- arrival (optional): Arrival IATA code
- country (optional): Country (default: us)
- currency (optional): Currency (default: USD)
- departure (optional): Departure IATA code
- start_date (optional): Start date, YYYY-MM-DD
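The weeks and start_date parameters together define the scan window: up to 8 consecutive weeks from the given YYYY-MM-DD date. A hedged sketch of how that window expands (the helper name is illustrative; the server's actual windowing may differ):

```python
from datetime import date, timedelta

# Expand the scan window price_calendar's parameters describe:
# `weeks` (1-8, default 4) consecutive week-start dates from `start_date`.
# Illustrative sketch; the server's actual windowing may differ.
def scan_week_starts(start_date: str, weeks: int = 4):
    if not 1 <= weeks <= 8:
        raise ValueError("weeks must be between 1 and 8")
    start = date.fromisoformat(start_date)  # YYYY-MM-DD per the schema
    return [(start + timedelta(weeks=i)).isoformat() for i in range(weeks)]

week_starts = scan_week_starts("2026-06-01", weeks=4)
```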
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool finds the 'cheapest week to fly', implying a read-only, non-destructive operation, but doesn't specify details like rate limits, authentication needs, error conditions, or what the output looks like. For a tool with no annotations, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded: two sentences that directly state the tool's purpose without unnecessary details. Every word earns its place, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no output schema, no annotations), the description is incomplete. It doesn't cover output format, error handling, or practical usage scenarios. While concise, it fails to provide enough context for an agent to use the tool effectively without additional inference or trial-and-error.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds minimal semantic context beyond the input schema. It implies parameters like 'route' (via departure/arrival) and time scope, but doesn't explain relationships between parameters (e.g., how start_date and weeks interact) or provide usage examples. With 100% schema description coverage, the baseline is 3, and the description doesn't significantly enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Weekly price calendar for a route. Find the cheapest week to fly.' It specifies the verb ('find') and resource ('cheapest week to fly'), but doesn't explicitly differentiate from sibling tools like 'cheapest_flights' or 'flight_search', which likely serve similar purposes. The description is clear but lacks sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or contexts where this tool is preferred over others like 'cheapest_flights' or 'flight_search'. There's no indication of prerequisites, exclusions, or comparative advantages.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

route_compare (Grade: C)

Compare flight prices by airline for a route. Shows cheapest per airline, stops, duration.

Parameters (JSON Schema)
- date (optional): Date, YYYY-MM-DD
- arrival (optional): Arrival IATA code
- country (optional): Country (default: us)
- currency (optional): Currency (default: USD)
- departure (optional): Departure IATA code
- return_date (optional): Return date
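As the schema notes, return_date is optional: omitting it yields a one-way comparison, supplying it a round trip. A hedged sketch of assembling the arguments (the helper is hypothetical; keys and defaults mirror the schema above):

```python
# Assemble route_compare arguments; including return_date turns the
# comparison into a round trip. Hypothetical helper, schema-derived keys.
def route_compare_args(departure, arrival, date, return_date=None,
                       country="us", currency="USD"):
    args = {"departure": departure, "arrival": arrival, "date": date,
            "country": country, "currency": currency}
    if return_date is not None:
        args["return_date"] = return_date
    return args

one_way = route_compare_args("SFO", "CDG", "2026-06-10")
round_trip = route_compare_args("SFO", "CDG", "2026-06-10",
                                return_date="2026-06-20")
```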
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions what the tool shows (cheapest per airline, stops, duration) but doesn't disclose behavioral traits like whether it requires authentication, rate limits, pagination, error conditions, or what format the output takes. For a tool with no annotations and no output schema, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two short sentences that efficiently convey the core functionality. Every word earns its place: 'Compare flight prices by airline for a route' establishes the purpose, and 'Shows cheapest per airline, stops, duration' adds valuable detail about what's displayed. No wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and 6 parameters, the description is incomplete. While it states what the tool does, it doesn't cover important contextual aspects like output format, error handling, authentication needs, or how results are structured. For a comparison tool with multiple parameters, users need more information about what to expect from the tool's behavior and results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters with their types and basic descriptions. The description adds no additional parameter semantics beyond what's in the schema—it doesn't explain relationships between parameters (e.g., that 'return_date' makes it a round-trip search) or provide examples. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compare flight prices by airline for a route' with specific details about what it shows (cheapest per airline, stops, duration). It distinguishes from siblings like 'cheapest_flights' by focusing on airline comparison rather than just finding cheapest options, but doesn't explicitly contrast with all alternatives like 'flight_search' or 'multi_city'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided about when to use this tool versus alternatives like 'cheapest_flights', 'flight_search', or 'multi_city'. The description implies it's for comparing airline prices on a route, but doesn't specify scenarios where this is preferred over other flight search tools or mention any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

