Tripuck — Flight Meta-Search & Meeting Point

Server Details
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 4.4/5 across all 5 tools.
Each tool serves a distinct purpose: cheapest_dates for a monthly price calendar, find_meeting_point for group destinations, flight_details for details of a specific flight, popular_routes for destination inspiration, and search_flights for specific route and date searches. There is no overlap or ambiguity.
All tool names use snake_case and are descriptive, but there is a mix of verb-starting names (find_meeting_point, search_flights) and adjective-starting names (cheapest_dates, popular_routes). This minor inconsistency prevents a perfect score.
With 5 tools, the server covers the essential flight search and planning features without being overwhelming. Each tool adds clear value for different user needs, from inspiration to detailed flight data.
The tool set covers search, date flexibility, popular routes, group travel, and flight details. Minor gaps are the lack of a multi-city search tool and of direct booking, but the core flight-information needs are well addressed.
Available Tools
5 tools

cheapest_dates — Tripuck Cheapest Dates (Read-only, Idempotent)
Tripuck price calendar — for a given route, returns the cheapest daily price across a full month. Use this when the user shows date flexibility: "when is the cheapest day to fly IST-AYT in April?", "hangi gün daha ucuz?", "أرخص يوم للسفر", "günstigste Tage für...". Use when the user asks about cheap days, flexible travel windows, or month-level price overviews. The LLM MUST infer the user language from the conversation and pass it via the locale parameter ("tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek). All widget UI text and the text response are then returned in that language. If currency is not specified, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS).
| Name | Required | Description | Default |
|---|---|---|---|
| month | No | Target month in YYYY-MM. If omitted, the current month is used. | |
| market | No | Market / language code — controls pricing source and widget language. User language as BCP-47 or 2-letter code. Supported: "tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek. The LLM MUST infer this from the conversation and pass it explicitly; widget UI and response text will be rendered in this language. | |
| oneWay | No | One-way search. If false, round-trip prices are returned. | |
| origin | Yes | Departure IATA code. | |
| currency | No | ISO 4217 currency code. Examples: TRY, USD, EUR, AZN, GEL, UZS. If omitted, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS). | |
| destination | Yes | Arrival IATA code. | |
| tripDuration | No | Trip length in days for round-trip (used when oneWay=false). | |
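Following the parameter table above, a cheapest_dates request could be assembled as below. This is a minimal sketch: the JSON-RPC framing, the helper function, and the `LOCALE_CURRENCY` table (which simply mirrors the locale→currency mapping stated in the description) are illustrative assumptions, not part of Tripuck's API.

```python
# Hypothetical sketch of building a cheapest_dates tools/call request.
# LOCALE_CURRENCY restates the documented default mapping; nothing here is
# an official Tripuck client.
LOCALE_CURRENCY = {
    "tr": "TRY", "en": "USD", "de": "EUR", "ar": "USD",
    "az": "AZN", "ka": "GEL", "uz": "UZS",
}

def build_cheapest_dates_call(origin, destination, locale, month=None,
                              currency=None, one_way=True):
    """Build a tools/call request body for cheapest_dates (illustrative)."""
    args = {
        "origin": origin,
        "destination": destination,
        "market": locale,  # the schema names this parameter "market"
        # If no currency is given, fall back per the documented mapping.
        "currency": currency or LOCALE_CURRENCY.get(locale, "USD"),
        "oneWay": one_way,
    }
    if month:
        args["month"] = month  # YYYY-MM; the server defaults to the current month
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "cheapest_dates", "arguments": args},
    }

req = build_cheapest_dates_call("IST", "AYT", locale="tr", month="2025-04")
```

Note how the Turkish locale yields TRY without the caller naming a currency, matching the documented default behavior.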
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds behavioral details beyond annotations: language rendering for UI and text responses, currency default mapping per locale. Annotations only indicate readOnly and idempotent, so this adds value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Every sentence earns its place. The description is front-loaded with purpose, then usage examples, then parameter behavior. No fluff or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters, 2 required, and no output schema, the description covers purpose, usage, parameter inference, and return behavior (language-specific widget/text). It is complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description adds important guidance: LLM must infer locale, currency defaults based on locale, and tripDuration meaning for round-trip. This enriches understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with a specific verb+resource: 'returns the cheapest daily price across a full month' for a given route. It clearly distinguishes this tool from siblings like search_flights by focusing on price calendars and date flexibility.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use examples (e.g., 'when the user shows date flexibility' with multilingual queries) and explains the locale/currency inference. It does not explicitly state when not to use, but the context is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_meeting_point — Tripuck Meeting Point (Read-only, Idempotent)
Tripuck Meeting Point — finds the cheapest and fairest common destination for 2-5 people traveling from different cities. Example: "I'm in Istanbul, my friend is in Berlin, one is in Dubai — where should we meet?", "İstanbul, Berlin, Dubai nerede buluşalım?", "نحن في مدن مختلفة، أين نلتقي؟". Runs multi-city optimization and computes a fairness score across the group. Use when the user asks to coordinate a trip across multiple origins and needs a shared destination. The LLM MUST infer the user language from the conversation and pass it via the locale parameter ("tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek). All widget UI text and the text response are then returned in that language. If currency is not specified, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS).
| Name | Required | Description | Default |
|---|---|---|---|
| locale | No | User language as BCP-47 or 2-letter code. Supported: "tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek. The LLM MUST infer this from the conversation and pass it explicitly. | |
| period | No | Period: "year", "season", "month" or explicit "YYYY-MM". | season |
| sortBy | No | 'cheapest' = minimum total cost, 'fairest' = most balanced cost across travellers (Tripuck USP), 'least-transfers' = fewest layovers. | cheapest |
| origins | Yes | List of origin IATA codes (2-5 cities). Example: ["IST","BER","DXB"] for three friends flying from Istanbul, Berlin and Dubai. | |
| currency | No | ISO 4217 currency code. Examples: TRY, USD, EUR, AZN, GEL, UZS. If omitted, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS). | |
| directOnly | No | Consider only non-stop routes. | |
| returnDate | No | Specific return date (optional), YYYY-MM-DD. | |
| departureDate | No | Specific departure date (optional), YYYY-MM-DD. | |
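The three-friends example from the description maps onto the schema above roughly as follows. The argument dict and the pre-flight checks are assumptions for illustration; only the parameter keys and allowed values come from the table.

```python
# Illustrative find_meeting_point arguments for "Istanbul, Berlin, Dubai —
# where should we meet?". Values are hypothetical.
meeting_point_args = {
    "origins": ["IST", "BER", "DXB"],   # 2-5 origin IATA codes
    "sortBy": "fairest",                # balance cost across travellers
    "period": "season",                 # documented default
    "locale": "en",
    "currency": "USD",
    "directOnly": False,
}

# Minimal validity checks an agent might run before calling the tool:
assert 2 <= len(meeting_point_args["origins"]) <= 5
assert meeting_point_args["sortBy"] in {"cheapest", "fairest", "least-transfers"}
```

Choosing "fairest" exercises the fairness score the description highlights as Tripuck's USP, rather than minimizing total cost.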
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, non-destructive. Description adds that it runs multi-city optimization and computes a fairness score, which are key behavioral traits beyond annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose and examples; all sentences are informative. Could be slightly more concise but overall well-structured and efficient.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers core algorithm and locale/currency handling, but lacks description of output format or widget behavior beyond language. No output schema exists, so more detail on return structure would improve completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions. Description adds value by explaining locale inference requirement, default currency mapping, and how locale affects response language, which schema does not cover.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it finds a common destination for 2-5 people from different cities, with specific verb 'finds' and resource 'common destination'. Distinct from siblings like search_flights or cheapest_dates.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when the user asks to coordinate a trip across multiple origins and needs a shared destination.' Includes examples in multiple languages. Lacks explicit when-not-to-use but context is clear.
flight_details — Tripuck Flight Details (Read-only, Idempotent)
Detailed information for a specific Tripuck flight ID: segments, layovers, baggage allowance, fare rules, refund/change conditions, operating carrier. Use for follow-up questions after search_flights: "what is the baggage allowance on this flight?", "bu uçuşta aktarma süresi nedir?", "كم الأمتعة المسموحة؟". The LLM MUST infer the user language from the conversation and pass it via the locale parameter ("tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek). All widget UI text and the text response are then returned in that language. If currency is not specified, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS).
| Name | Required | Description | Default |
|---|---|---|---|
| locale | No | User language as BCP-47 or 2-letter code. Supported: "tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek. The LLM MUST infer this from the conversation and pass it explicitly; widget UI and response text will be rendered in this language. | |
| currency | No | ISO 4217 currency code. Examples: TRY, USD, EUR, AZN, GEL, UZS. If omitted, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS). | |
| flightId | Yes | Tripuck flight ID — from `flight.id` in a prior `search_flights` response. | |
| searchKey | No | `searchKey` from a prior `search_flights` response (required to retrieve async search state). | |
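Since `flightId` and `searchKey` both come from a prior search_flights response, the two-step flow can be sketched as below. The response shape (`flights`, `searchKey`, `id` fields) is a hypothetical illustration consistent with the table, not a documented output schema.

```python
# Hypothetical second step: turn a search_flights result into flight_details
# arguments. The search-response structure here is an assumption.
def extract_details_args(search_response, flight_index=0, locale="en"):
    """Pull flightId and searchKey out of a (hypothetical) search result."""
    flight = search_response["flights"][flight_index]
    return {
        "flightId": flight["id"],                   # documented: flight.id
        "searchKey": search_response["searchKey"],  # documented: async search state
        "locale": locale,
    }

fake_search = {"searchKey": "sk-123", "flights": [{"id": "fl-42"}]}
args = extract_details_args(fake_search, locale="tr")
```

The point is the dependency, not the field names: an agent should never fabricate a flightId, only reuse one a prior search returned.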
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly and idempotent. Description adds behavioral details like language-driven widget/text responses and default currency logic, enhancing transparency without contradiction.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise but packed with essential information: purpose, usage context, multi-language examples, and clear parameter handling instructions. Every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately covers what the tool returns and how to use it. It ties to prior search and provides example queries, making it complete for an agent.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%. Description adds value by explaining locale must be inferred from conversation, currency defaults, and that flightId/searchKey come from prior search responses, going beyond schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states the tool provides detailed flight information (segments, layovers, baggage, etc.) for a specific Tripuck flight ID, clearly distinguishing it from sibling tools like search_flights or cheapest_dates.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use for follow-up questions after `search_flights`', providing clear context. However, it does not mention when not to use or explicitly compare to siblings.
popular_routes — Tripuck Popular Routes (Read-only, Idempotent)
Tripuck's Explore service — most popular destinations with current prices from a given origin city, aggregated from live flight inventory data. Use for inspiration-style queries where the destination is unknown: "where can I fly from Istanbul?", "İstanbul'dan nereye?", "وجهات شعبية من دبي", "populäre Reiseziele ab München". The LLM MUST infer the user language from the conversation and pass it via the locale parameter ("tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek). All widget UI text and the text response are then returned in that language. If currency is not specified, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS).
| Name | Required | Description | Default |
|---|---|---|---|
| locale | No | User language as BCP-47 or 2-letter code. Supported: "tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek. The LLM MUST infer this from the conversation and pass it explicitly; widget UI and response text will be rendered in this language. | |
| oneWay | No | One-way search. | |
| origin | Yes | Departure IATA code. | |
| period | No | Period: "year" (12 months), "season" (3 months), "month", or explicit "YYYY-MM". | season |
| currency | No | ISO 4217 currency code. Examples: TRY, USD, EUR, AZN, GEL, UZS. If omitted, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS). | |
| directOnly | No | Return only non-stop routes. | |
| maxTripDays | No | | |
| minTripDays | No | | |
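The period parameter accepts either a keyword or an explicit month, so client-side validation is straightforward. This helper is an illustrative assumption that simply encodes the documented options ("year", "season", "month", or "YYYY-MM").

```python
# Sketch of validating the popular_routes period parameter; the helper is
# hypothetical, but the accepted values mirror the table above.
import re

def normalize_period(period="season"):
    """Return a valid period value or raise ValueError."""
    if period in {"year", "season", "month"}:
        return period
    if re.fullmatch(r"\d{4}-(0[1-9]|1[0-2])", period):
        return period  # explicit month such as "2025-07"
    raise ValueError(f"unsupported period: {period!r}")
```

The default "season" matches the table's default column; an out-of-range month like "2025-13" is rejected before it ever reaches the server.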
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description adds context beyond annotations: data source (live flight inventory), locale-sensitive response text and UI, and currency defaults. No contradiction with annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Six sentences, front-loaded with purpose, then usage guidelines and locale instructions. Every sentence is informative and necessary.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage context, locale/currency behavior well. However, no output schema exists and description does not describe return format (e.g., list structure, pricing info).
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Description adds value for locale and currency parameters (inference rules, defaults) beyond schema. However, maxTripDays and minTripDays lack description in schema and are not clarified in description.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool lists popular destinations with current prices from a given origin, for inspiration-style queries. Distinguishes from siblings like search_flights by specifying use case ('destination unknown').
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use (inspiration queries) and gives example queries in multiple languages. Provides detailed instructions on locale inference and parameter passing.
search_flights — Search Flights on Tripuck (Read-only)
Tripuck flight meta-search — real-time fare comparison across 700+ airlines and 50+ online travel agencies. Use this tool when the user asks for flight prices, e.g. "Istanbul to Antalya tomorrow", "cheapest Paris ticket", "yarın Londra'ya uçuş", "رحلة إلى دبي غداً", "Flüge nach Berlin". Inputs: IATA codes (IST, AYT, LHR...) and dates in YYYY-MM-DD. Results deep-link to Tripuck.com for detailed review and booking. The LLM MUST infer the user language from the conversation and pass it via the locale parameter ("tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek). All widget UI text and the text response are then returned in that language. If currency is not specified, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS).
| Name | Required | Description | Default |
|---|---|---|---|
| adults | No | Adult passengers (12+ years). | |
| locale | No | User language as BCP-47 or 2-letter code. Supported: "tr" Turkish, "en" English, "ar" Arabic, "az" Azerbaijani, "de" German, "ka" Georgian, "uz" Uzbek. The LLM MUST infer this from the conversation and pass it explicitly; widget UI and response text will be rendered in this language. | |
| origin | Yes | Departure IATA code (3 letters). Examples: "IST" Istanbul, "AYT" Antalya, "LHR" London Heathrow. | |
| infants | No | Infant passengers (0-2 years). | |
| children | No | Child passengers (2-11 years). | |
| currency | No | ISO 4217 currency code. Examples: TRY, USD, EUR, AZN, GEL, UZS. If omitted, a sensible default is picked from the locale (tr→TRY, en→USD, de→EUR, ar→USD, az→AZN, ka→GEL, uz→UZS). | |
| cabinClass | No | Cabin class. | economy |
| returnDate | No | Return date in YYYY-MM-DD — only for round-trip. | |
| destination | Yes | Arrival IATA code (3 letters). The LLM should translate city names to IATA codes. | |
| departureDate | Yes | Departure date in YYYY-MM-DD. | |
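Putting the schema together, a full search_flights argument set might be built like this. The helper and its checks are assumptions for illustration; the field names, date format, and "economy" default come from the table above.

```python
# Illustrative search_flights argument builder. Dates must be YYYY-MM-DD and
# origin/destination must be 3-letter IATA codes, per the schema.
from datetime import date

def build_search_args(origin, destination, departure, locale,
                      return_date=None, adults=1, children=0, infants=0):
    assert len(origin) == 3 and len(destination) == 3, "IATA codes are 3 letters"
    date.fromisoformat(departure)          # raises if not YYYY-MM-DD
    args = {
        "origin": origin.upper(),
        "destination": destination.upper(),
        "departureDate": departure,
        "locale": locale,
        "adults": adults,
        "children": children,
        "infants": infants,
        "cabinClass": "economy",           # documented default
    }
    if return_date:
        date.fromisoformat(return_date)
        args["returnDate"] = return_date   # round-trip only
    return args

args = build_search_args("IST", "AYT", "2025-06-01", locale="tr")
```

Omitting return_date yields a one-way request; passing it switches the same call to round-trip, as the returnDate row specifies.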
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description comprehensively explains behavioral traits: real-time search, deep-linking to Tripuck, locale-based language and currency inference. These go well beyond the annotations, which only indicate read-only and non-destructive behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured, front-loading the purpose and examples. It is concise yet covers inputs, behavioral details, and instructions. Every sentence adds essential information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose, usage, and parameters thoroughly. However, it fails to describe the output format or structure, which is a significant gap given the absence of an output schema. The agent might not know what to expect in the response.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already has 100% description coverage. The description adds value by clarifying the locale inference rule, currency default logic, and the LLM's responsibility for language detection. This supplements the schema without redundancy.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a real-time fare comparison meta-search across many airlines and OTAs, with examples of user queries. However, it does not explicitly differentiate from sibling tools like 'cheapest_dates' or 'flight_details', which could lead to ambiguity in selection.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context by listing example queries and instructing when to use the tool. However, it lacks explicit guidance on when not to use it or which alternative tool to choose for specific sub-tasks.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
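Before publishing, it may help to sanity-check the manifest locally. The check below only parses the example payload and verifies the two fields shown; the assumption that these are the minimum fields Glama looks for is ours, not documented.

```python
# Local sanity check of a glama.json payload before publishing it.
# The schema URL matches the example above; the field checks are assumptions.
import json

manifest = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

assert manifest["$schema"].endswith("/connector.json")
assert all("email" in m for m in manifest["maintainers"])
```

Remember to replace the placeholder email with the address tied to your Glama account before serving the file.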
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.