flightoracle
Server Details
Flight Intelligence MCP — search, cheapest dates, multi-city, airline compare via Google Flights
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: ToolOracle/flightoracle
- GitHub Stars: 0
- Server Listing: FlightOracle
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.1/5 across all 8 tools scored.
Most tools have distinct purposes, but flight_search and one_way_search overlap significantly, as one_way_search is essentially a subset of flight_search's functionality. This could cause confusion for agents trying to choose between them. Other tools like cheapest_flights, price_calendar, and route_compare serve clearly different roles in flight pricing analysis.
Tool names follow a consistent snake_case pattern throughout, which is good. However, there is a mix of verb_noun (e.g., flight_search) and noun-only (e.g., price_calendar) naming styles, which slightly reduces predictability. The naming is still readable and mostly coherent, with only minor deviations from a strict convention.
With 8 tools, the count is well-scoped for a flight search and pricing server. Each tool appears to serve a specific purpose in the domain, such as searching flights, comparing routes, or checking prices, without feeling overly bloated or sparse. This number allows for comprehensive coverage while remaining manageable for agents.
The toolset covers core flight search and pricing functionalities well, including one-way, round-trip, multi-city searches, price comparisons, and calendars. A minor gap is the lack of tools for booking or managing reservations, but this is reasonable if the server focuses on search and insights. The health_check tool also ensures operational awareness, adding to the completeness.
Available Tools
8 tools

airport_hub (Grade C)
Find airport IATA codes for a city. Returns nearby airports and common codes.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | City name (e.g., 'London', 'New York', 'Tokyo') | |
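As an illustration, an MCP `tools/call` request for this tool might look like the following. The envelope follows the standard MCP JSON-RPC shape; the city value is arbitrary:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "airport_hub",
    "arguments": { "city": "London" }
  }
}
```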
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions what the tool returns ('nearby airports and common codes'), it lacks critical details such as whether this is a read-only operation, potential rate limits, error conditions, or how 'nearby' is defined. The description is insufficient for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences that directly state the purpose and return value. It's front-loaded with the main function. However, the second sentence could be more integrated, and there's slight room for improvement in flow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose and return scope but lacks details on behavioral traits, usage context, and output structure, which are needed for full completeness in the absence of annotations and output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'city' parameter clearly documented. The description adds no additional parameter semantics beyond what the schema provides, such as format examples or edge cases. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find airport IATA codes for a city' specifies the verb (find) and resource (airport IATA codes), and 'Returns nearby airports and common codes' adds useful detail about the scope of results. However, it doesn't explicitly differentiate this from sibling tools like 'flight_search' or 'route_compare' which might also involve airports.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'cheapest_flights', 'flight_search', and 'route_compare' available, there's no indication whether this tool is for lookup purposes versus actual flight operations, or any prerequisites for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cheapest_flights (Grade B)
Find the cheapest flights across a full month. Returns price-sorted options with dates, airlines, and price insights.
| Name | Required | Description | Default |
|---|---|---|---|
| month | No | Month to search YYYY-MM (e.g., 2026-06) | |
| arrival | No | Arrival airport IATA code | |
| country | No | Country code (default: us) | |
| currency | No | Currency code (default: USD) | |
| departure | No | Departure airport IATA code | |
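A hypothetical `arguments` object for a month-wide scan, with illustrative values not taken from the source:

```json
{
  "departure": "JFK",
  "arrival": "LHR",
  "month": "2026-06",
  "country": "us",
  "currency": "USD"
}
```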
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool returns 'price-sorted options' but doesn't specify behavioral traits like rate limits, authentication requirements, error handling, or whether it's a read-only operation. The description implies it's a query tool but lacks details on performance, data freshness, or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded with essential information in just two sentences. The first sentence states the core functionality, and the second describes the return format. There's no wasted language, and every word contributes to understanding the tool's purpose and output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (5 parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose and return format but lacks details about behavioral aspects, usage context, and deeper parameter semantics. Without annotations or output schema, the description should do more to compensate, but it only meets the minimum viable threshold.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so all parameters are documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain parameter interactions or provide examples). The baseline score of 3 is appropriate since the schema does the heavy lifting, though the description could have added context about how parameters like 'country' and 'currency' affect results.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find the cheapest flights across a full month' with specific details about what it returns ('price-sorted options with dates, airlines, and price insights'). It distinguishes itself from siblings like 'flight_search' or 'one_way_search' by emphasizing month-wide search and price sorting. However, it doesn't explicitly differentiate from 'price_calendar' which might offer similar functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose it over siblings like 'flight_search' (which might offer different filtering) or 'price_calendar' (which could be similar). There's no information about prerequisites, constraints, or typical use cases beyond the basic functionality stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
flight_search (Grade A)
Search round-trip or one-way flights between airports. Returns best flights, prices, airlines, duration, stops, and carbon emissions.
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | Outbound date YYYY-MM-DD | |
| stops | No | 0=any, 1=nonstop only, 2=max 1 stop, 3=max 2 stops (default: 0) | |
| adults | No | Number of adult passengers (default: 1) | |
| arrival | No | Arrival airport IATA code (e.g., LHR, CDG, NRT) | |
| country | No | Country code for local pricing (default: us) | |
| currency | No | Currency code (default: USD) | |
| departure | No | Departure airport IATA code (e.g., JFK, LAX, FRA) | |
| return_date | No | Return date YYYY-MM-DD (omit for one-way) | |
| travel_class | No | 1=Economy, 2=Premium Economy, 3=Business, 4=First (default: 1) | |
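A sketch of a round-trip `arguments` object for this tool (all values are illustrative; per the schema, omitting `return_date` would make it a one-way search):

```json
{
  "departure": "JFK",
  "arrival": "NRT",
  "date": "2026-06-10",
  "return_date": "2026-06-24",
  "adults": 2,
  "stops": 1,
  "travel_class": 1,
  "currency": "USD"
}
```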
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions what information is returned (flights, prices, airlines, etc.) but lacks details on permissions, rate limits, error handling, or whether this is a read-only operation. For a search tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two efficient sentences that front-load the core purpose and list the return values without unnecessary elaboration. Every part of the description adds value, making it appropriately sized and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (9 parameters, no output schema, no annotations), the description is moderately complete. It covers the purpose and return types but lacks behavioral context and output details. Without an output schema, the agent must infer the return structure from the description's list, which is adequate but not fully comprehensive for a tool with many parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 9 parameters thoroughly. The description doesn't add any parameter-specific semantics beyond what's in the schema (e.g., it doesn't explain parameter interactions or provide examples). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search round-trip or one-way flights') and resources ('between airports'), and distinguishes it from siblings by specifying the type of search (round-trip/one-way) and what information is returned. This differentiates it from tools like 'cheapest_flights' or 'one_way_search' which might have narrower scopes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by mentioning 'round-trip or one-way flights' and listing return types, but it doesn't explicitly state when to use this tool versus alternatives like 'cheapest_flights' or 'one_way_search'. No exclusions or prerequisites are provided, leaving the agent to infer context from the tool name and description alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health_check (Grade C)
Server status, API connectivity, supported features.
No parameters.
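Since the tool takes no inputs, a `tools/call` request reduces to an empty `arguments` object (sketch, with an arbitrary request id):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": { "name": "health_check", "arguments": {} }
}
```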
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions checking 'server status, API connectivity, supported features,' which suggests a read-only, diagnostic operation, but fails to detail response format, error conditions, rate limits, or authentication needs. For a tool with zero annotation coverage, this is insufficient behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with three key phrases separated by commas, making it easy to scan. However, it could be slightly more structured by using complete sentences or clarifying the relationship between the listed aspects, but it efficiently conveys the core purpose without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks details on return values, error handling, or integration with sibling tools. For a health check tool, more context on expected outputs or usage patterns would enhance completeness, but it meets the minimum viable threshold.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, aligning with the schema. A baseline score of 4 is applied as it correctly avoids redundant information, though it doesn't add value beyond the schema in this dimension.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Server status, API connectivity, supported features' states what the tool does at a high level but lacks a specific verb and doesn't distinguish from siblings. It indicates the tool checks system health aspects rather than performing flight-related operations like its siblings, but the purpose remains somewhat vague without explicit action verbs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus alternatives. The description implies usage for monitoring or diagnostic contexts, but it doesn't specify prerequisites, exclusions, or compare it to other tools. This leaves the agent without clear direction on appropriate invocation scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
multi_city (Grade A)
Price a multi-city route with 2+ legs. Returns per-leg pricing and total cost range.
| Name | Required | Description | Default |
|---|---|---|---|
| legs | No | List of legs: [{departure, arrival, date}, ...] (min 2) | |
| country | No | Country (default: us) | |
| currency | No | Currency (default: USD) | |
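A hypothetical `arguments` object with the minimum two legs the schema requires (airports and dates are illustrative):

```json
{
  "legs": [
    { "departure": "JFK", "arrival": "LHR", "date": "2026-06-01" },
    { "departure": "LHR", "arrival": "CDG", "date": "2026-06-08" }
  ],
  "currency": "USD"
}
```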
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the return format ('per-leg pricing and total cost range'), which is useful, but lacks details on behavioral traits such as rate limits, error handling, authentication needs, or whether it's a read-only operation. For a pricing tool with no annotations, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence and adds output details in the second. Both sentences earn their place by providing essential information without waste, making it appropriately sized and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a multi-leg pricing tool with no annotations and no output schema, the description is minimally adequate. It covers the purpose and output format but lacks details on behavioral aspects and deeper context. With 100% schema coverage, it meets basic needs but could be more complete for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters (legs, country, currency) with descriptions. The description adds no additional parameter semantics beyond what the schema provides, such as format details for legs or default behaviors. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Price a multi-city route') and resource ('route with 2+ legs'), distinguishing it from siblings like 'one_way_search' or 'cheapest_flights' by specifying the multi-leg requirement. It also mentions the output ('Returns per-leg pricing and total cost range'), making the purpose explicit and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating '2+ legs', which suggests when to use this tool (for multi-city routes) versus alternatives like 'one_way_search' for single legs. However, it does not explicitly name alternatives or provide exclusions, leaving some ambiguity in when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
one_way_search (Grade C)
Search one-way flights with all filters.
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | Date YYYY-MM-DD | |
| stops | No | 0=any, 1=nonstop only, 2=max 1 stop, 3=max 2 stops | |
| adults | No | Passengers (default: 1) | |
| arrival | No | Arrival IATA code | |
| country | No | Country (default: us) | |
| currency | No | Currency (default: USD) | |
| departure | No | Departure IATA code | |
| travel_class | No | 1=Economy, 2=Premium Economy, 3=Business, 4=First | |
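A sketch of an `arguments` object for a business-class nonstop search (values are illustrative, not from the source):

```json
{
  "departure": "LAX",
  "arrival": "JFK",
  "date": "2026-07-04",
  "stops": 1,
  "travel_class": 3
}
```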
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'search' and 'all filters,' but doesn't describe key behaviors such as whether this is a read-only operation, potential rate limits, authentication needs, or what the output looks like (e.g., list of flights, pricing). For a search tool with 8 parameters and no annotations, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence: 'Search one-way flights with all filters.' It is front-loaded with the core purpose and wastes no words, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters, no output schema, no annotations), the description is incomplete. It lacks details on behavioral traits, output format, and usage guidelines. While the schema covers parameters well, the description doesn't add enough context to help an agent understand how to effectively invoke and interpret results from this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning all parameters are documented in the schema with descriptions and defaults. The description adds no additional parameter semantics beyond implying 'all filters' are included, which is already covered by the schema. With high schema coverage, the baseline score is 3, as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as 'Search one-way flights with all filters,' which specifies the verb (search) and resource (one-way flights). It distinguishes from siblings like 'multi_city' or 'cheapest_flights' by focusing on one-way flights, though it doesn't explicitly differentiate from 'flight_search' which might be similar. The purpose is clear but could be more specific about how it differs from other search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'all filters' but doesn't specify contexts, prerequisites, or exclusions compared to siblings like 'cheapest_flights' or 'flight_search.' Without such guidance, an agent might struggle to choose the right tool among similar options.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
price_calendar (Grade C)
Weekly price calendar for a route. Find the cheapest week to fly.
| Name | Required | Description | Default |
|---|---|---|---|
| weeks | No | Number of weeks to scan (1-8, default: 4) | |
| arrival | No | Arrival IATA code | |
| country | No | Country (default: us) | |
| currency | No | Currency (default: USD) | |
| departure | No | Departure IATA code | |
| start_date | No | Start date YYYY-MM-DD | |
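A hypothetical `arguments` object scanning six weeks from a start date (values illustrative; `weeks` must be 1-8 per the schema):

```json
{
  "departure": "SFO",
  "arrival": "NRT",
  "start_date": "2026-06-01",
  "weeks": 6,
  "currency": "USD"
}
```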
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool finds the 'cheapest week to fly', implying a read-only, non-destructive operation, but doesn't specify details like rate limits, authentication needs, error conditions, or what the output looks like. For a tool with no annotations, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: two sentences that directly state the tool's purpose without unnecessary details. Every word earns its place, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, no output schema, no annotations), the description is incomplete. It doesn't cover output format, error handling, or practical usage scenarios. While concise, it fails to provide enough context for an agent to use the tool effectively without additional inference or trial-and-error.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds minimal semantic context beyond the input schema. It implies parameters like 'route' (via departure/arrival) and time scope, but doesn't explain relationships between parameters (e.g., how start_date and weeks interact) or provide usage examples. With 100% schema description coverage, the baseline is 3, and the description doesn't significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Weekly price calendar for a route. Find the cheapest week to fly.' It specifies the verb ('find') and resource ('cheapest week to fly'), but doesn't explicitly differentiate from sibling tools like 'cheapest_flights' or 'flight_search', which likely serve similar purposes. The description is clear but lacks sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or contexts where this tool is preferred over others like 'cheapest_flights' or 'flight_search'. There's no indication of prerequisites, exclusions, or comparative advantages.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
route_compare (Grade C)
Compare flight prices by airline for a route. Shows cheapest per airline, stops, duration.
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | Date YYYY-MM-DD | |
| arrival | No | Arrival IATA code | |
| country | No | Country (default: us) | |
| currency | No | Currency (default: USD) | |
| departure | No | Departure IATA code | |
| return_date | No | Return date (optional) | |
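A sketch of an `arguments` object comparing airlines on a round-trip route (values are illustrative):

```json
{
  "departure": "FRA",
  "arrival": "JFK",
  "date": "2026-09-15",
  "return_date": "2026-09-29"
}
```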
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions what the tool shows (cheapest per airline, stops, duration) but doesn't disclose behavioral traits like whether it requires authentication, rate limits, pagination, error conditions, or what format the output takes. For a tool with no annotations and no output schema, this leaves significant gaps in understanding how it behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: two short sentences that efficiently convey the core functionality. Every word earns its place: 'Compare flight prices by airline for a route' establishes the purpose, and 'Shows cheapest per airline, stops, duration' adds valuable detail about what's displayed. No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and 6 parameters, the description is incomplete. While it states what the tool does, it doesn't cover important contextual aspects like output format, error handling, authentication needs, or how results are structured. For a comparison tool with multiple parameters, users need more information about what to expect from the tool's behavior and results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 6 parameters with their types and basic descriptions. The description adds no parameter semantics beyond the schema: it does not explain relationships between parameters (e.g., that supplying 'return_date' turns the query into a round-trip search) or provide examples. A baseline score of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose ('Compare flight prices by airline for a route') with specific details about what it shows (cheapest per airline, stops, duration). It distinguishes itself from siblings like 'cheapest_flights' by focusing on per-airline comparison rather than simply finding the cheapest option, but it does not explicitly contrast with alternatives like 'flight_search' or 'multi_city'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided about when to use this tool versus alternatives like 'cheapest_flights', 'flight_search', or 'multi_city'. The description implies it's for comparing airline prices on a route, but doesn't specify scenarios where this is preferred over other flight search tools or mention any prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
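Before publishing, a quick local sanity check of the file can catch structural mistakes. The validator below is an informal sketch of the structure described above; Glama's actual verifier may check more (notably that the email matches your account), and the rules here are assumptions.

```python
import json

def check_glama_json(text: str) -> list:
    """Return a list of problems found in a candidate glama.json.

    Informal sketch based on the documented structure; not Glama's
    actual verification logic."""
    problems = []
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as exc:
        return ["invalid JSON: %s" % exc]
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("missing or wrong $schema")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty list")
    else:
        for m in maintainers:
            if not isinstance(m, dict) or "@" not in str(m.get("email", "")):
                problems.append("each maintainer needs an email")
    return problems

sample = ('{"$schema": "https://glama.ai/mcp/schemas/connector.json", '
          '"maintainers": [{"email": "your-email@example.com"}]}')
```

An empty list from `check_glama_json(sample)` means the file at least matches the documented shape; verification itself still happens on Glama's side.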
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
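The failure causes listed above map loosely onto observable signals from the server. The heuristic below is a hypothetical sketch of that mapping (the status codes and categories are assumptions, not Glama's actual health-check logic).

```python
from typing import Optional

def classify_failure(status: Optional[int], connect_error: bool = False) -> str:
    """Map an HTTP status (or a failed connection) to a likely unhealthy cause.

    Hypothetical heuristic -- not Glama's actual classification."""
    if connect_error:
        # DNS failure, refused connection, or timeout: outage or bad URL
        return "server outage or wrong URL (connection failed)"
    if status in (401, 403):
        return "credentials missing or invalid"
    if status == 404:
        return "wrong server URL (endpoint not found)"
    if status is not None and status >= 500:
        return "server outage (5xx response)"
    return "healthy or unknown"
```

Note that a connection-level failure cannot distinguish an outage from a wrong URL, which is why both appear in the listing's causes.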
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.