flights
Server Details
Flights MCP — wraps OpenSky Network API (free, no auth required)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-flights
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose with no overlap: get_aircraft tracks a specific aircraft, get_arrivals and get_departures handle airport-specific flight movements, and get_flights_in_area covers a geographic area. The descriptions clearly differentiate the scope and required parameters, making tool selection unambiguous.
All tool names follow a consistent verb_noun pattern using snake_case (e.g., get_aircraft, get_arrivals, get_departures, get_flights_in_area). The naming convention is uniform throughout, with 'get' as the verb and descriptive nouns, making the set predictable and easy to understand.
With 4 tools, the count is reasonable for a flight tracking server, covering key operations like tracking specific aircraft, airport arrivals/departures, and area monitoring. However, it feels slightly thin as it lacks tools for operations like searching flights by route or getting flight schedules, which could enhance completeness.
The tool set covers basic flight tracking needs but has notable gaps. It provides read-only operations for current and historical data but lacks CRUD capabilities (e.g., no create, update, or delete tools, which may be intentional for this domain). Missing tools for flight schedules, route searches, or airline-specific data limit the surface's coverage, though agents can work around this with the existing tools.
Available Tools
4 tools

get_aircraft
Track a specific aircraft by its ICAO24 transponder address (e.g. "a0b1c2"). Returns current position, velocity, altitude, and heading.
| Name | Required | Description | Default |
|---|---|---|---|
| icao24 | Yes | ICAO24 transponder address (6 hex characters, e.g. "a0b1c2") | |
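Per the listing, this server wraps the OpenSky Network API, so get_aircraft presumably resolves to OpenSky's /api/states/all endpoint filtered by icao24. A minimal client-side sketch; the endpoint mapping and the build_get_aircraft_url helper are assumptions, not part of the published tool definition:

```python
import re
from urllib.parse import urlencode

OPENSKY_BASE = "https://opensky-network.org/api"

def build_get_aircraft_url(icao24: str) -> str:
    """Validate the transponder address and build a states query URL.

    ICAO24 addresses are 6 hexadecimal characters; OpenSky reports
    them lowercase, e.g. "a0b1c2".
    """
    if not re.fullmatch(r"[0-9a-fA-F]{6}", icao24):
        raise ValueError(f"invalid ICAO24 address: {icao24!r}")
    return f"{OPENSKY_BASE}/states/all?" + urlencode({"icao24": icao24.lower()})
```

Validating the address shape client-side avoids burning a call on a malformed query.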
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the return data (current position, velocity, altitude, heading), which is useful, but lacks details on error handling, rate limits, or data freshness, leaving gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, consisting of two efficient sentences that directly state the tool's purpose and return values without any wasted words, making it easy to understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is moderately complete: it covers the purpose and return data but lacks details on behavioral aspects like errors or limitations. For a simple query tool with one parameter, it's adequate but could be more comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single parameter (icao24). The description adds minimal value by mentioning the parameter in context but does not provide additional syntax or format details beyond what the schema specifies, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Track') and resource ('a specific aircraft by its ICAO24 transponder address'), distinguishing it from sibling tools like get_arrivals, get_departures, and get_flights_in_area which focus on different scopes (arrivals, departures, area-based flights).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (to track a specific aircraft by ICAO24 address), but does not explicitly state when not to use it or name alternatives among the sibling tools, such as using get_flights_in_area for broader queries instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_arrivals
Get flights that arrived at an airport within a time range. Requires an ICAO airport code and Unix timestamps.
| Name | Required | Description | Default |
|---|---|---|---|
| end | Yes | End of time range as Unix timestamp (seconds, max 7 days after begin) | |
| begin | Yes | Start of time range as Unix timestamp (seconds) | |
| airport | Yes | ICAO airport code (e.g. "KLAX", "EGLL") | |
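Because begin and end are Unix-second timestamps with a documented 7-day maximum window, it can help to compute and validate the range before calling the tool. A sketch; the arrival_window helper is hypothetical:

```python
from datetime import datetime, timedelta

MAX_WINDOW = timedelta(days=7)  # per the `end` parameter docs above

def arrival_window(start: datetime, end: datetime) -> dict:
    """Convert a datetime range to the Unix-second parameters the tool expects."""
    if start.tzinfo is None or end.tzinfo is None:
        raise ValueError("use timezone-aware datetimes to avoid off-by-hours bugs")
    if end <= start:
        raise ValueError("end must be after begin")
    if end - start > MAX_WINDOW:
        raise ValueError("time range may span at most 7 days")
    return {"begin": int(start.timestamp()), "end": int(end.timestamp())}
```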
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the requirement for ICAO airport code and Unix timestamps, which is useful. However, it doesn't disclose behavioral traits like rate limits, authentication needs, pagination, error conditions, or what the return format looks like (especially important since there's no output schema). For a read operation with no annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Get flights that arrived at an airport within a time range') and follows with essential constraints. Every word earns its place with no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (3 required parameters, no annotations, no output schema), the description is incomplete. It doesn't explain what the return values look like (e.g., list of flights with details), error handling, or other behavioral aspects needed for effective use. The description alone is insufficient for an agent to fully understand how to interpret results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all three parameters (airport, begin, end). The description adds minimal value beyond the schema by mentioning that airport requires an ICAO code and timestamps are Unix-based, but doesn't provide additional syntax or format details. This meets the baseline of 3 when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get flights that arrived'), the resource ('at an airport'), and scope ('within a time range'). It distinguishes from sibling tools like 'get_departures' by specifying arrivals only, and from 'get_flights_in_area' by focusing on a specific airport rather than a geographic area.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (for arrivals at a specific airport within a time range) and implicitly distinguishes it from siblings like 'get_departures' (which handles departures) and 'get_flights_in_area' (which handles geographic areas). However, it doesn't explicitly state when NOT to use it or name alternatives, keeping it at a 4.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_departures
Get flights that departed from an airport within a time range. Requires an ICAO airport code and Unix timestamps.
| Name | Required | Description | Default |
|---|---|---|---|
| end | Yes | End of time range as Unix timestamp (seconds, max 7 days after begin) | |
| begin | Yes | Start of time range as Unix timestamp (seconds) | |
| airport | Yes | ICAO airport code (e.g. "KLAX", "EGLL") | |
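get_departures shares get_arrivals' parameter schema exactly. Assuming the server forwards to OpenSky's flights endpoints (the listing only names the upstream API), the two tools would differ only in a path segment:

```python
from urllib.parse import urlencode

def flights_url(direction: str, airport: str, begin: int, end: int) -> str:
    """Build an OpenSky flights query; direction selects arrivals or departures."""
    if direction not in ("arrival", "departure"):
        raise ValueError("direction must be 'arrival' or 'departure'")
    query = urlencode({"airport": airport, "begin": begin, "end": end})
    return f"https://opensky-network.org/api/flights/{direction}?{query}"
```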
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the required inputs and time constraints, but does not cover aspects like rate limits, authentication needs, error handling, or the format of returned data. It adequately describes the core operation but lacks deeper behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and key requirements without any wasted words. It is appropriately sized for a tool with three well-documented parameters and no complex behavioral traits to explain.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (three required parameters, no output schema, and no annotations), the description is minimally complete. It covers the purpose and inputs but does not address output format, error cases, or limitations (e.g., data availability, max time range). It meets basic needs but leaves gaps for an agent to infer behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the input schema already documents all parameters thoroughly. The description adds minimal value beyond the schema by reiterating the need for an ICAO airport code and Unix timestamps, but does not provide additional syntax, format details, or usage nuances. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get flights that departed'), the resource ('from an airport'), and the scope ('within a time range'). It distinguishes from siblings like 'get_arrivals' (departures vs arrivals) and 'get_flights_in_area' (airport-specific vs area-based).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by specifying the required inputs (ICAO airport code and Unix timestamps) and the time-bound nature of the query. However, it does not explicitly state when to use this tool versus alternatives like 'get_arrivals' or 'get_flights_in_area', nor does it mention any exclusions or prerequisites beyond the parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_flights_in_area
Get all aircraft currently in a geographic bounding box. Returns icao24, callsign, origin country, position, altitude, velocity, and heading for each aircraft.
| Name | Required | Description | Default |
|---|---|---|---|
| lamax | Yes | Maximum latitude of the bounding box (degrees) | |
| lamin | Yes | Minimum latitude of the bounding box (degrees) | |
| lomax | Yes | Maximum longitude of the bounding box (degrees) | |
| lomin | Yes | Minimum longitude of the bounding box (degrees) | |
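The four parameters must form a valid box (lamin < lamax, lomin < lomax, within geographic range), which the schema alone does not enforce. A small pre-flight check, with sample values roughly covering Switzerland; the bbox_params helper is hypothetical:

```python
def bbox_params(lamin: float, lomin: float, lamax: float, lomax: float) -> dict:
    """Validate a bounding box and return get_flights_in_area parameters."""
    if not (-90 <= lamin < lamax <= 90):
        raise ValueError("latitudes must satisfy -90 <= lamin < lamax <= 90")
    if not (-180 <= lomin < lomax <= 180):
        raise ValueError("longitudes must satisfy -180 <= lomin < lomax <= 180")
    return {"lamin": lamin, "lomin": lomin, "lamax": lamax, "lomax": lomax}
```

Note that large boxes return many aircraft, so start with a tight area when exploring.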
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the return data fields (e.g., icao24, position) but omits critical behavioral traits such as data freshness (real-time vs. delayed), rate limits, authentication needs, error handling, or whether it's a read-only operation. For a tool with no annotations, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence and efficiently lists return fields in the second. Every sentence earns its place by providing essential information without redundancy, making it appropriately sized and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and 100% schema coverage, the description is moderately complete: it clearly states the purpose and return data. However, it lacks details on behavioral aspects (e.g., real-time updates, errors) and output structure, which are important for a tool querying dynamic data like aircraft positions, leaving room for improvement in context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters (lamin, lomin, lamax, lomax) clearly documented in the schema as bounding box coordinates. The description adds no additional parameter semantics beyond implying geographic filtering, so it meets the baseline of 3 where the schema does the heavy lifting without extra value from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get all aircraft currently in a geographic bounding box') and resource ('aircraft'), distinguishing it from siblings like get_aircraft, get_arrivals, and get_departures by specifying geographic filtering rather than general, arrival-specific, or departure-specific queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving aircraft within a bounding box, but provides no explicit guidance on when to use this tool versus alternatives like get_aircraft (which might not filter geographically) or get_arrivals/departures. It lacks clear exclusions or prerequisites, leaving usage context inferred rather than stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
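Before publishing, you might sanity-check the manifest against your account email locally. A sketch that checks only the fields shown in the example above; the check_glama_manifest helper is hypothetical, not a Glama tool, and the real schema may require more:

```python
import json

def check_glama_manifest(text: str, account_email: str) -> None:
    """Sanity-check a /.well-known/glama.json payload before publishing.

    Raises ValueError if no maintainer entry matches the given email,
    since Glama requires the emails to match for verification.
    """
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    if not any(m.get("email") == account_email for m in maintainers):
        raise ValueError("no maintainer entry matches your Glama account email")
```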
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!