Glama

Server Details

Flight search & booking for AI agents. 400+ airlines, $20-50 cheaper than OTAs.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: LetsFG/LetsFG
GitHub Stars: 570
Server Listing: LetsFG

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

8 tools
book_flight

Book an unlocked flight. Creates a real airline PNR with e-ticket.

REQUIREMENTS:

  1. Offer must be unlocked first (call unlock_flight_offer)

  2. Use passenger_id from search results

  3. Use REAL passenger details — airline sends e-ticket to the email provided

Requires GitHub star verification.

Parameters

- offer_id (required): Unlocked offer ID (off_xxx)
- passengers (required): Passenger details with id from search results
- contact_email (required): Contact email for booking confirmation
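The parameters above can be sketched as a call payload. A minimal sketch with illustrative values; only the passenger `id` field is documented on this page, so `given_name` and `family_name` are assumed field names:

```python
# Hypothetical book_flight arguments. The offer must already be unlocked
# via unlock_flight_offer; the passenger "id" comes from search results.
# given_name/family_name are illustrative, not documented field names.
book_flight_args = {
    "offer_id": "off_abc123",  # unlocked offer ID (off_xxx pattern)
    "passengers": [
        {
            "id": "pax_1",         # passenger_id from search results
            "given_name": "Jane",  # assumed field name
            "family_name": "Doe",  # assumed field name
        }
    ],
    # The airline sends the e-ticket to this address, so it must be real.
    "contact_email": "jane.doe@example.com",
}
```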
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It reveals critical behavioral traits: creates 'real' PNR (implies irreversible commitment), triggers email side-effects ('airline sends e-ticket'), and requires external auth ('GitHub star verification'). Deducted one point for not explicitly stating cancellation policies or failure rollback behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with purpose statement, followed by structured REQUIREMENTS list, ending with auth constraint. Every sentence serves distinct purpose (action, prerequisites, consequences, permissions). No redundant or filler text despite covering complex workflow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a booking mutation tool with nested passenger objects. Covers prerequisites, data lineage, side effects, and auth. Lacks description of return value or confirmation structure, though absence of output_schema reduces obligation here.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline of 3. Description adds semantic value by contextualizing parameters within the workflow: 'passenger_id from search results' links to search_flights output, and 'REAL passenger details' establishes data quality requirements not captured in schema field descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with specific verb ('Book') and resource ('unlocked flight'), immediately distinguishing it from sibling 'search_flights' and 'unlock_flight_offer'. It clarifies the tangible outcome ('real airline PNR with e-ticket') beyond just the action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly lists three numbered requirements: prerequisite tool ('call unlock_flight_offer'), data source ('passenger_id from search results'), and data quality standards ('Use REAL passenger details'). This provides clear when-to-use sequencing and warns against synthetic data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_profile

View your agent profile — payment status, GitHub star verification, and usage stats. Use this to check if you're verified.

Parameters

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses what data the profile contains (payment, GitHub stars, usage stats), which adds context. However, it lacks disclosure about operational aspects like whether this is a read-only operation, side effects, or rate limits.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences total with zero waste. First sentence establishes function and return value; second provides usage context. Every word earns its place with no redundancy.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool without output schema or annotations, the description adequately covers what the tool retrieves. It could be improved by explicitly noting the read-only/safe nature of the operation since no annotations declare readOnlyHint.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, which per guidelines establishes a baseline of 4. No parameter description is needed, and the description correctly focuses on the tool's output rather than inventing parameter documentation.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'View' with resource 'agent profile' and explicitly lists returned data (payment status, GitHub star verification, usage stats). It clearly distinguishes from sibling tools which are travel/booking focused (book_flight, search_hotels, etc.).

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage guidance: 'Use this to check if you're verified.' This establishes when to use the tool (verification checks). However, it doesn't explicitly state when NOT to use it or contrast with alternatives like link_github for verification purposes.

resolve_location

Convert a city or airport name to IATA codes for use in flight search. Use when the user says 'London' instead of 'LON'.

Parameters

- query (required): City or airport name (e.g. 'London', 'New York', 'Heathrow')
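A short sketch of how the lookup feeds the rest of the workflow. The return shape of resolve_location is not documented on this page, so the resolved code below is an assumption:

```python
# Hypothetical resolve_location call: map an ambiguous name to IATA codes.
resolve_args = {"query": "London"}  # city or airport name

# The response format is undocumented; assuming it yields IATA codes
# (London may map to the metro code LON or airports like LHR/LGW),
# an agent would pass the resolved code on to search_flights:
search_args = {
    "origin": "LON",  # assumed result of resolve_location
    "destination": "BCN",
    "date_from": "2025-07-01",
}
```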
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral burden. It explains the core transformation but omits critical operational details: whether it returns multiple codes for cities (e.g., London→LHR,LGW), error behavior for unknown locations, or the return data structure.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences with zero redundancy. First sentence establishes function; second provides usage condition. Perfectly front-loaded and appropriately sized for the tool's simplicity.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description should disclose the return format (single code vs. array, handling of multi-airport cities). For a single-parameter lookup tool, the description is minimally adequate but leaves gaps in the contract regarding response structure.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds value by providing the concrete 'London' vs 'LON' example, which clarifies the disambiguation semantics beyond the generic schema description.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the conversion function (city/airport name to IATA codes) and mentions the domain (flight search). However, it could explicitly distinguish itself as a preprocessing/utility step versus the actual search_flights sibling tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides a concrete usage trigger ('Use when the user says 'London' instead of 'LON''), effectively illustrating the disambiguation use case. Lacks explicit workflow guidance mentioning that the output feeds into search_flights or book_flight.

search_flights

Search for flights between any two cities/airports worldwide. 400+ airlines via GDS/NDC + 140 local connectors fire in parallel. Returns offers with prices, airlines, times, conditions. FREE.

IMPORTANT: Response includes passenger_ids — save them for booking.

Requires GitHub star verification. If not verified, call link_github first.

Parameters

- origin (required): IATA departure code (e.g. LON, JFK). Use resolve_location if you only have a city name.
- destination (required): IATA arrival code (e.g. BCN, LAX)
- date_from (required): Departure date YYYY-MM-DD
- return_from (optional): Return date YYYY-MM-DD (omit for one-way)
- adults (optional): Number of adult passengers (default: 1)
- children (optional): Number of children (ages 2-11)
- cabin_class (optional): M=economy, W=premium, C=business, F=first
- currency (optional): Currency code (EUR, USD, GBP). Default: EUR
- max_results (optional): Max offers to return (default: 10)
- response_mode (optional): 'summary' (default) — compact price/airline/route per offer, saves tokens. 'full' — includes segments, conditions, bags, durations.
- departure_time_from (optional): Earliest departure time (HH:MM, 24h). E.g. '08:00' for morning flights
- departure_time_to (optional): Latest departure time (HH:MM, 24h). E.g. '14:00' for departures before 2pm
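Putting the parameters together, a plausible round-trip search payload might look like this (all values illustrative):

```python
# Illustrative search_flights arguments for a round-trip economy search.
# Only origin, destination, and date_from are required.
search_flights_args = {
    "origin": "LON",                 # IATA departure code
    "destination": "BCN",            # IATA arrival code
    "date_from": "2025-07-01",       # departure date, YYYY-MM-DD
    "return_from": "2025-07-08",     # omit for a one-way trip
    "adults": 1,
    "cabin_class": "M",              # M = economy
    "currency": "EUR",
    "departure_time_from": "08:00",  # no departures before 8am
    "response_mode": "summary",      # 'full' adds segments, bags, conditions
}
# Remember to save the passenger_ids from the response for book_flight.
```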
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries significant behavioral disclosure burden, successfully indicating cost ('FREE'), implementation scope ('400+ airlines via GDS/NDC'), and output characteristics ('Response includes `passenger_ids`'). It also discloses the authentication requirement ('Requires GitHub star verification'), though it omits rate limits or error handling specifics.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description efficiently organizes information into three distinct functional blocks: capability statement, critical output warning, and authentication prerequisite. Every sentence serves a specific purpose, with the 'IMPORTANT' flag appropriately highlighting the `passenger_ids` retention requirement without unnecessary verbosity.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately compensates by detailing the return structure ('offers with prices, airlines, times, conditions') and specifically highlighting the critical `passenger_ids` field needed for subsequent booking. The authentication requirement and parallel search architecture provide sufficient context for a complex 10-parameter flight search tool.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema achieves 100% description coverage with clear type hints and enum values (e.g., 'M=economy, W=premium'), so the description appropriately does not redundantly document parameters. The description adds minimal semantic value beyond the schema, mentioning only that it searches 'between any two cities/airports,' which aligns with the origin/destination parameters.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines the tool's scope with 'Search for flights between any two cities/airports worldwide' and specifies the return values ('offers with prices, airlines, times, conditions'). It effectively distinguishes this from the booking sibling tool by noting that results are for searching/saving passenger_ids rather than completing transactions, establishing the workflow boundary.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit prerequisite guidance stating 'Requires GitHub star verification. If not verified, call `link_github` first,' establishing a clear conditional workflow. It also indicates the downstream workflow by noting that `passenger_ids` should be saved for booking, implicitly guiding the agent toward the `book_flight` tool, though it does not explicitly differentiate from `search_hotels`.

search_hotels

Search for hotels in a city or near coordinates. 300,000+ properties via wholesale suppliers — cheaper than Booking.com. Returns hotels with rooms, prices, photos, cancellation policies. FREE.

Requires GitHub star verification. If not verified, call link_github first.

Parameters

- location (required): City name, airport code, or 'lat,lon' coordinates
- checkin (required): Check-in date YYYY-MM-DD
- checkout (required): Check-out date YYYY-MM-DD
- adults (optional): Number of adult guests (default: 2)
- rooms (optional): Number of rooms (default: 1)
- currency (optional): Currency code (EUR, USD, GBP). Default: EUR
- max_results (optional): Max hotels to return (default: 10)
- response_mode (optional): 'summary' (default) — compact name/price/rating per hotel, saves tokens. 'full' — includes rooms, photos, policies, amenities.
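A minimal payload sketch, here using the coordinate form of `location` (values illustrative):

```python
# Illustrative search_hotels arguments. location also accepts a city
# name or airport code; dates use YYYY-MM-DD.
search_hotels_args = {
    "location": "41.3874,2.1686",  # central Barcelona as 'lat,lon'
    "checkin": "2025-07-01",
    "checkout": "2025-07-03",
    "adults": 2,
    "rooms": 1,
    "response_mode": "summary",    # 'full' adds rooms, photos, policies
}
```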
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully indicates the read-only nature via 'Search', discloses the data source ('wholesale suppliers'), value proposition ('cheaper than Booking.com'), return payload contents ('rooms, prices, photos, cancellation policies'), cost ('FREE'), and authentication requirements. It misses explicit idempotency or rate limit statements, but covers the critical behavioral traits.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero waste: core action (sentence 1), scope/value prop (sentence 2), return data (sentence 3), cost (sentence 4), and prerequisites (sentences 5-6). Every sentence earns its place and information is front-loaded appropriately.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description adequately compensates by listing the return payload fields (rooms, prices, photos, policies) and explaining the verification prerequisite. It provides sufficient context for an agent to invoke the tool correctly, though it could be improved by mentioning pagination or caching behavior.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 8 parameters including location formats and response modes. The description adds minimal semantic value beyond the schema—primarily reinforcing that location accepts 'city or near coordinates' (already documented in schema). Baseline score appropriate given schema completeness.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Search for hotels') and clearly defines the searchable resources ('city or near coordinates'). It distinguishes itself from sibling flight tools by explicitly mentioning 'hotels' and 'properties', and adds scope context ('300,000+ properties') that differentiates it from generic search tools.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit prerequisite guidance ('Requires GitHub star verification. If not verified, call `link_github` first'), clearly stating when to invoke a sibling tool before this one. It lacks explicit 'when not to use' guidance (e.g., vs. resolve_location), but the prerequisite instruction is highly specific and actionable.

setup_payment

Attach a payment card. Required before booking.

For testing: {"token": "tok_visa"} For production: {"payment_method_id": "pm_xxx"} from Stripe.js

One-time setup — all future charges are automatic.

Requires GitHub star verification.

Parameters

- token (optional): Stripe token (e.g. tok_visa for testing)
- payment_method_id (optional): Stripe PaymentMethod ID (pm_xxx)
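The two environment-specific payloads side by side. Mutual exclusivity is implied by the description rather than stated outright, so treating them as pass-one-not-both is an assumption:

```python
# Environment-specific setup_payment payloads. Pass one of the two
# fields, not both (exclusivity is implied, not explicitly documented).
testing_args = {"token": "tok_visa"}               # Stripe test token
production_args = {"payment_method_id": "pm_xxx"}  # from Stripe.js
```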
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but description carries the burden well: discloses 'all future charges are automatic' (side effects), 'One-time setup' (idempotency hint), and 'Requires GitHub star verification' (prerequisites). Missing only error state details.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently packed with no waste. Front-loaded with purpose, followed by environment-specific examples, then behavioral notes. Slightly dense but all sentences earn their place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple 2-parameter schema and no output schema, description adequately covers prerequisites (GitHub star), environment differences, persistence model, and relationship to booking workflow.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with examples, but description adds critical usage context: mapping 'token' to testing environment and 'payment_method_id' to production with Stripe.js source. Explains the mutual exclusivity implicitly through environment separation.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Attach a payment card') and explicitly distinguishes from sibling booking tools by noting it's 'Required before booking'. Clear verb-resource combination with workflow context.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('Required before booking', 'One-time setup') and distinguishes testing vs production environments. Implicitly indicates not to use if already set up or when not booking.

unlock_flight_offer

Confirm live price and reserve a flight offer for 30 minutes. After unlocking, call book_flight to complete the booking.

Requires GitHub star verification.

Parameters

- offer_id (required): Offer ID from search results (off_xxx)
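The unlock-then-book sequence can be sketched as follows; the offer_id is carried through from search results (values illustrative):

```python
# Illustrative unlock -> book sequence. The unlock confirms a live
# price and holds the offer for 30 minutes.
offer_id = "off_abc123"               # off_xxx ID from search_flights

unlock_args = {"offer_id": offer_id}  # step 1: unlock_flight_offer

# Step 2 (within the hold window): book_flight reuses the same offer_id.
booking_window_minutes = 30
```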
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and successfully discloses key behavioral traits: the 30-minute reservation window, live price confirmation behavior, and GitHub star authentication requirement. Could improve by mentioning what happens when the reservation expires or if the offer becomes invalid.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: first defines the action, second establishes the workflow relationship with book_flight, third states the authentication prerequisite. Information is front-loaded and appropriately sized for a single-parameter tool.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for the tool's complexity (1 parameter, no output schema). Describes the temporal effect (30-minute hold), workflow position, and authentication barrier. Minor gap: does not describe the return value or confirmation structure, though no output schema exists to require this.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (the offer_id parameter is fully documented with type and format 'off_xxx'). The description does not add additional semantic context about the parameter, but with complete schema coverage, no additional description is necessary. Baseline score appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific actions performed: 'Confirm live price and reserve a flight offer for 30 minutes.' It uses distinct verbs (confirm, reserve) and distinguishes itself from siblings search_flights (finding offers) and book_flight (completing purchase).

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent workflow guidance: explicitly names sibling tool book_flight as the next step ('After unlocking, call book_flight to complete the booking'). Also states a critical prerequisite ('Requires GitHub star verification'), clearly defining when the tool can be used.

