Glama

Server Details

Tripuck — Flight Meta-Search & Meeting Point

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

[Diagram: MCP client → Glama → MCP server]

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.4/5 across 5 of 5 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool serves a distinct purpose: cheapest_dates for monthly price calendar, find_meeting_point for group destinations, flight_details for specific flight info, popular_routes for destination inspiration, and search_flights for specific route/date searches. There is no overlap or ambiguity.

Naming Consistency: 4/5

All tool names use snake_case and are descriptive, but there is a mix of verb-starting names (find_meeting_point, search_flights) and adjective-starting names (cheapest_dates, popular_routes). This minor inconsistency prevents a perfect score.

Tool Count: 5/5

With 5 tools, the server covers the essential flight search and planning features without being overwhelming. Each tool adds clear value for different user needs, from inspiration to detailed flight data.

Completeness: 4/5

The tool set covers search, date flexibility, popular routes, group travel, and flight details. Minor gaps include lack of a multi-city search tool or direct booking capability, but the core flight information needs are well addressed.

Available Tools

5 tools
cheapest_dates (Tripuck Cheapest Dates): A
Read-only, Idempotent

Tripuck price calendar — for a given route, returns the cheapest daily price across a full month. Use this when the user shows date flexibility: "when is the cheapest day to fly IST-AYT in April?", "hangi gün daha ucuz?" (Turkish: "which day is cheaper?"), "أرخص يوم للسفر" (Arabic: "cheapest day to travel"), "günstigste Tage für..." (German: "cheapest days for..."). Use when the user asks about cheap days, flexible travel windows, or month-level price overviews. The LLM MUST infer the user language from the conversation and pass it via the locale parameter ("tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek). All widget UI text and the text response are then returned in that language. If currency is not specified, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS).

Parameters (JSON Schema)
month (optional): Target month in YYYY-MM. If omitted, the current month is used.
market (optional): Market / language code — controls pricing source and widget language. User language as BCP-47 or 2-letter code. Supported: "tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek. The LLM MUST infer this from the conversation and pass it explicitly; widget UI and response text will be rendered in this language.
oneWay (optional): One-way search. If false, round-trip prices are returned.
origin (required): Departure IATA code.
currency (optional): ISO 4217 currency code. Examples: TRY, USD, EUR, AZN, GEL, UZS. If omitted, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS).
destination (required): Arrival IATA code.
tripDuration (optional): Trip length in days for round-trip (used when oneWay=false).
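To make the parameter rules concrete, here is a minimal sketch of how an agent might assemble a cheapest_dates argument payload. The parameter names come from the schema above; the helper function and any client wiring are illustrative assumptions, not part of the documented API.

```python
# Illustrative sketch only: assembling arguments for the cheapest_dates tool.
# The locale-to-currency mapping is the fallback documented by the server;
# the default_currency helper itself is hypothetical.

def default_currency(locale: str) -> str:
    """Return the documented currency fallback for a supported locale."""
    mapping = {"tr": "TRY", "en": "USD", "de": "EUR",
               "ar": "USD", "az": "AZN", "ka": "GEL", "uz": "UZS"}
    return mapping.get(locale, "USD")

arguments = {
    "origin": "IST",         # required: departure IATA code
    "destination": "AYT",    # required: arrival IATA code
    "month": "2025-04",      # optional: YYYY-MM; current month if omitted
    "oneWay": True,          # optional: False returns round-trip prices
    "market": "tr",          # language inferred from the conversation
    "currency": default_currency("tr"),
}
```

Note that cheapest_dates takes the language via `market`, while the sibling tools use `locale`; an agent has to track that per-tool difference.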
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds behavioral details beyond annotations: language rendering for UI and text responses, currency default mapping per locale. Annotations only indicate readOnly and idempotent, so this adds value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Every sentence earns its place. The description is front-loaded with purpose, then usage examples, then parameter behavior. No fluff or repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters, 2 required, and no output schema, the description covers purpose, usage, parameter inference, and return behavior (language-specific widget/text). It is complete for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds important guidance: LLM must infer locale, currency defaults based on locale, and tripDuration meaning for round-trip. This enriches understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description starts with a specific verb+resource: 'returns the cheapest daily price across a full month' for a given route. It clearly distinguishes this tool from siblings like search_flights by focusing on price calendars and date flexibility.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use examples (e.g., 'when the user shows date flexibility' with multilingual queries) and explains the locale/currency inference. It does not explicitly state when not to use, but the context is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_meeting_point (Tripuck Meeting Point): A
Read-only, Idempotent

Tripuck Meeting Point — finds the cheapest and fairest common destination for 2-5 people traveling from different cities. Example: "I'm in Istanbul, my friend is in Berlin, one is in Dubai — where should we meet?", "İstanbul, Berlin, Dubai nerede buluşalım?" (Turkish: "Istanbul, Berlin, Dubai, where should we meet?"), "نحن في مدن مختلفة، أين نلتقي؟" (Arabic: "we are in different cities, where do we meet?"). Runs multi-city optimization and computes a fairness score across the group. Use when the user asks to coordinate a trip across multiple origins and needs a shared destination. The LLM MUST infer the user language from the conversation and pass it via the locale parameter ("tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek). All widget UI text and the text response are then returned in that language. If currency is not specified, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS).

Parameters (JSON Schema)
locale (optional): User language as BCP-47 or 2-letter code. Supported: "tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek. The LLM MUST infer this from the conversation and pass it explicitly.
period (optional, default "season"): "year", "season", "month" or explicit "YYYY-MM".
sortBy (optional, default "cheapest"): 'cheapest' = minimum total cost, 'fairest' = most balanced cost across travellers (Tripuck USP), 'least-transfers' = fewest layovers.
origins (required): List of origin IATA codes (2-5 cities). Example: ["IST","BER","DXB"] for three friends flying from Istanbul, Berlin and Dubai.
currency (optional): ISO 4217 currency code. Examples: TRY, USD, EUR, AZN, GEL, UZS. If omitted, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS).
directOnly (optional): Consider only non-stop routes.
returnDate (optional): Specific return date, YYYY-MM-DD.
departureDate (optional): Specific departure date, YYYY-MM-DD.
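The 2-5 origin constraint and the sortBy modes above can be sketched as a hypothetical payload builder. Only the parameter names are taken from the schema; the validation helper is illustrative and not part of the server API.

```python
# Hedged sketch: building a find_meeting_point argument dict and enforcing
# the documented 2-5 origin constraint client-side before calling the tool.

def validate_origins(origins):
    """find_meeting_point accepts 2-5 origin IATA codes."""
    if not 2 <= len(origins) <= 5:
        raise ValueError("expected 2-5 origin IATA codes")
    return [code.upper() for code in origins]

arguments = {
    "origins": validate_origins(["IST", "BER", "DXB"]),
    "sortBy": "fairest",   # most balanced cost across travellers
    "period": "season",    # schema default
    "locale": "en",
    "currency": "USD",
}
```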
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, non-destructive. Description adds that it runs multi-city optimization and computes a fairness score, which are key behavioral traits beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with purpose and examples; all sentences are informative. Could be slightly more concise but overall well-structured and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers core algorithm and locale/currency handling, but lacks description of output format or widget behavior beyond language. No output schema exists, so more detail on return structure would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions. Description adds value by explaining locale inference requirement, default currency mapping, and how locale affects response language, which schema does not cover.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it finds a common destination for 2-5 people from different cities, with specific verb 'finds' and resource 'common destination'. Distinct from siblings like search_flights or cheapest_dates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use when the user asks to coordinate a trip across multiple origins and needs a shared destination.' Includes examples in multiple languages. Lacks explicit when-not-to-use but context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

flight_details (Tripuck Flight Details): A
Read-only, Idempotent

Detailed information for a specific Tripuck flight ID: segments, layovers, baggage allowance, fare rules, refund/change conditions, operating carrier. Use for follow-up questions after search_flights: "what is the baggage allowance on this flight?", "bu uçuşta aktarma süresi nedir?" (Turkish: "what is the layover time on this flight?"), "كم الأمتعة المسموحة؟" (Arabic: "how much baggage is allowed?"). The LLM MUST infer the user language from the conversation and pass it via the locale parameter ("tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek). All widget UI text and the text response are then returned in that language. If currency is not specified, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS).

Parameters (JSON Schema)
locale (optional): User language as BCP-47 or 2-letter code. Supported: "tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek. The LLM MUST infer this from the conversation and pass it explicitly; widget UI and response text will be rendered in this language.
currency (optional): ISO 4217 currency code. Examples: TRY, USD, EUR, AZN, GEL, UZS. If omitted, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS).
flightId (required): Tripuck flight ID — from `flight.id` in a prior `search_flights` response.
searchKey (optional): `searchKey` from a prior `search_flights` response (required to retrieve async search state).
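The flightId/searchKey chaining can be sketched as follows. Since the server publishes no output schema, the search_flights response shape shown here is an assumption, kept only to the two fields the docs mention (`flight.id` and `searchKey`).

```python
# Hypothetical chaining sketch: flight_details consumes `flight.id` and
# `searchKey` from a prior search_flights response. The surrounding response
# structure is assumed for illustration; only the two field names are documented.

search_response = {
    "searchKey": "sk-abc123",        # assumed example value
    "flights": [{"id": "fl-001"}],   # assumed list of flight objects
}

details_args = {
    "flightId": search_response["flights"][0]["id"],
    "searchKey": search_response["searchKey"],
    "locale": "en",
    "currency": "USD",
}
```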
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnly and idempotent. Description adds behavioral details like language-driven widget/text responses and default currency logic, enhancing transparency without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise but packed with essential information: purpose, usage context, multi-language examples, and clear parameter handling instructions. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately covers what the tool returns and how to use it. It ties to prior search and provides example queries, making it complete for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%. Description adds value by explaining locale must be inferred from conversation, currency defaults, and that flightId/searchKey come from prior search responses, going beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description states the tool provides detailed flight information (segments, layovers, baggage, etc.) for a specific Tripuck flight ID, clearly distinguishing it from sibling tools like search_flights or cheapest_dates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use for follow-up questions after `search_flights`', providing clear context. However, it does not mention when not to use or explicitly compare to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_flights (Search Flights on Tripuck): A
Read-only

Tripuck flight meta-search — real-time fare comparison across 700+ airlines and 50+ online travel agencies. Use this tool when the user asks for flight prices, e.g. "Istanbul to Antalya tomorrow", "cheapest Paris ticket", "yarın Londra'ya uçuş" (Turkish: "flight to London tomorrow"), "رحلة إلى دبي غداً" (Arabic: "a flight to Dubai tomorrow"), "Flüge nach Berlin" (German: "flights to Berlin"). Inputs: IATA codes (IST, AYT, LHR...) and dates in YYYY-MM-DD. Results deep-link to Tripuck.com for detailed review and booking. The LLM MUST infer the user language from the conversation and pass it via the locale parameter ("tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek). All widget UI text and the text response are then returned in that language. If currency is not specified, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS).

Parameters (JSON Schema)
adults (optional): Adult passengers (12+ years).
locale (optional): User language as BCP-47 or 2-letter code. Supported: "tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek. The LLM MUST infer this from the conversation and pass it explicitly; widget UI and response text will be rendered in this language.
origin (required): Departure IATA code (3 letters). Examples: "IST" Istanbul, "AYT" Antalya, "LHR" London Heathrow.
infants (optional): Infant passengers (0-2 years).
children (optional): Child passengers (2-11 years).
currency (optional): ISO 4217 currency code. Examples: TRY, USD, EUR, AZN, GEL, UZS. If omitted, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS).
cabinClass (optional, default "economy"): Cabin class.
returnDate (optional): Return date in YYYY-MM-DD — only for round-trip.
destination (required): Arrival IATA code (3 letters). The LLM should translate city names to IATA codes.
departureDate (required): Departure date in YYYY-MM-DD.
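A sketch of a search_flights payload using the three required parameters plus common optional ones; values are examples rather than live data, and nothing beyond the schema's parameter names is assumed about the API.

```python
# Illustrative search_flights arguments for "Istanbul to Antalya tomorrow".
from datetime import date, timedelta

tomorrow = (date.today() + timedelta(days=1)).isoformat()  # YYYY-MM-DD

arguments = {
    "origin": "IST",            # required: 3-letter IATA code
    "destination": "AYT",       # required: 3-letter IATA code
    "departureDate": tomorrow,  # required: YYYY-MM-DD
    "adults": 1,                # passengers aged 12+
    "cabinClass": "economy",    # schema default
    "locale": "tr",             # inferred from a Turkish-language conversation
    "currency": "TRY",          # locale default: tr maps to TRY
}
```

Omitting returnDate makes this a one-way search; adding it switches to round-trip per the schema.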
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description comprehensively explains behavioral traits: real-time search, deep-linking to Tripuck, locale-based language and currency inference. These go well beyond the annotations, which only indicate read-only and non-destructive behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured, front-loading the purpose and examples. It is concise yet covers inputs, behavioral details, and instructions. Every sentence adds essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers purpose, usage, and parameters thoroughly. However, it fails to describe the output format or structure, which is a significant gap given the absence of an output schema. The agent might not know what to expect in the response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already has 100% description coverage. The description adds value by clarifying the locale inference rule, currency default logic, and the LLM's responsibility for language detection. This supplements the schema without redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a real-time fare comparison meta-search across many airlines and OTAs, with examples of user queries. However, it does not explicitly differentiate from sibling tools like 'cheapest_dates' or 'flight_details', which could lead to ambiguity in selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context by listing example queries and instructing when to use the tool. However, it lacks explicit guidance on when not to use it or which alternative tool to choose for specific sub-tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

