Last Minute Deals HQ
Server Details
Real-time last-minute tour and activity booking across 17 suppliers in 15 countries.
- Status: Healthy
- Transport: Streamable HTTP
- Repository: johnanleitner1-Coder/lastminutedeals-api
- GitHub Stars: 0
- Server Listing: lastminutedeals-api
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 6 of 6 tools scored.
Each tool has a clearly distinct purpose with no overlap: book_from_itinerary converts itineraries to bookings, book_slot books slots directly, get_booking_status checks status, get_supplier_info provides supplier details, preview_slot generates shareable booking links, and search_slots finds available slots. The descriptions clearly differentiate their functions, eliminating any ambiguity.
All tool names follow a consistent verb_noun pattern (e.g., book_slot, get_booking_status, search_slots), using snake_case throughout. The naming is predictable and readable, with no deviations in style or convention.
With 6 tools, the count is well-scoped for a last-minute travel booking server, covering core workflows like searching, booking, status checking, and supplier info without being overwhelming. Each tool earns its place in the set.
The tool set provides comprehensive coverage for last-minute travel bookings, including search, booking (with multiple modes), status checking, and supplier info. A minor gap exists in lacking explicit update or cancellation tools, but agents can work around this using existing tools like get_booking_status for status management.
Available Tools
6 tools
book_from_itinerary (A, Read-only, Idempotent)
Convert a travel itinerary into real bookings. Accepts raw itinerary text (natural language, bullet points, or structured), extracts destinations and activity mentions, and matches them against live inventory. Returns booking page URLs for each matched activity. Use this when a user has an itinerary and wants to book the activities they can. Not all items will match — the response shows which matched and which didn't.
| Name | Required | Description | Default |
|---|---|---|---|
| itinerary | Yes | Raw itinerary text. Can be natural language, bullet points, or structured. Must mention at least one destination (city or country name). Example: '3 days in Iceland: glacier hike, northern lights tour, horse riding' | |
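Assuming the server follows the standard MCP JSON-RPC `tools/call` envelope, a request to this tool might be shaped as follows; the itinerary text is only illustrative, not a real booking:

```python
import json

# Hypothetical MCP tools/call request for book_from_itinerary.
# MCP wraps tool invocations in a JSON-RPC 2.0 envelope; the
# itinerary text here is illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "book_from_itinerary",
        "arguments": {
            "itinerary": (
                "3 days in Iceland: glacier hike, "
                "northern lights tour, horse riding"
            )
        },
    },
}

print(json.dumps(request, indent=2))
```

Note that not every line item is guaranteed to match inventory; the response indicates which activities matched and which did not.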
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains the matching process, partial success nature, and returns booking page URLs. Annotations already indicate read-only, non-destructive, idempotent operations, but the description provides specific workflow details about how matches are handled.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with three focused sentences: purpose, process, and usage guidelines. Every sentence adds value without redundancy, and key information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with comprehensive annotations and no output schema, the description provides good context about the matching process and return format. It could slightly improve by mentioning authentication needs or rate limits, but covers the core functionality well given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents the single 'itinerary' parameter thoroughly. The description adds minimal additional parameter semantics beyond what's in the schema, maintaining the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: converting travel itineraries into real bookings by extracting destinations/activities and matching against live inventory. It specifies the verb ('convert'), resource ('travel itinerary'), and distinguishes from siblings by focusing on itinerary processing rather than slot booking or status checking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided: 'Use this when a user has an itinerary and wants to book the activities they can.' It also sets expectations about partial matching ('Not all items will match'), helping the agent understand when this tool is appropriate versus alternatives like search_slots or book_slot.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
book_slot (A)
Book a last-minute slot for a customer. Two modes: (1) APPROVAL MODE (default): creates a Stripe Checkout Session and returns a checkout_url — you MUST share this URL with the customer immediately so they can complete payment. Booking is confirmed with the supplier after payment. (2) AUTONOMOUS MODE: if you supply a wallet_id (pre-funded agent wallet) and execution_mode='autonomous', the booking completes immediately and returns a confirmation_number directly — no checkout step, no human action required. Use autonomous mode when your application manages payment on behalf of the customer. Bookings are real and go directly to the supplier.
| Name | Required | Description | Default |
|---|---|---|---|
| slot_id | Yes | Slot ID from search_slots results. | |
| quantity | No | Number of people to book. Default: 1. Price is per person × quantity. | |
| wallet_id | No | Pre-funded agent wallet ID (format: wlt_...). Provide this to enable autonomous mode. | |
| customer_name | Yes | Full name of the person attending the experience. | |
| customer_email | Yes | Email address where booking confirmation will be sent. | |
| customer_phone | Yes | Phone number including country code (e.g. +15550001234). | |
| execution_mode | No | Set to 'autonomous' when providing a wallet_id. Omit for standard approval (checkout URL) flow. | |
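The two modes differ only in their arguments. A minimal sketch, where every identifier and customer detail below is a placeholder:

```python
# Approval mode (the default): book_slot returns a checkout_url
# that must be shared with the customer so they can pay via Stripe.
approval_args = {
    "slot_id": "slot_example_123",     # from search_slots results (placeholder)
    "quantity": 2,                     # total price = per-person price x 2
    "customer_name": "Jane Doe",
    "customer_email": "jane@example.com",
    "customer_phone": "+15550001234",  # include the country code
}

# Autonomous mode: supplying a pre-funded wallet_id plus
# execution_mode='autonomous' skips checkout entirely and the
# call returns a confirmation_number directly.
autonomous_args = {
    **approval_args,
    "wallet_id": "wlt_example",        # placeholder wallet ID (format: wlt_...)
    "execution_mode": "autonomous",
}
```

Because bookings go straight to the supplier, autonomous mode should only be used when the application genuinely manages payment on the customer's behalf.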
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-read-only, non-destructive tool, but the description adds valuable behavioral context beyond that. It explains that APPROVAL MODE creates a Stripe Checkout Session requiring immediate URL sharing, while AUTONOMOUS MODE completes bookings immediately with a confirmation number. It also notes bookings are 'real and go directly to the supplier,' clarifying the operational impact.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose. Each sentence adds essential information: the two modes, their workflows, and when to use autonomous mode. There is no wasted text, and the information is presented in a logical, easy-to-follow manner.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (two distinct modes, real-world booking impact) and lack of output schema, the description does a strong job of explaining the different outcomes (checkout_url vs. confirmation_number) and workflows. It could slightly improve by explicitly mentioning error cases or prerequisites, but it covers the key operational context well.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some semantic context by explaining how wallet_id and execution_mode interact to enable autonomous mode, but it doesn't provide significant additional parameter details beyond what the schema offers.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Book a last-minute slot for a customer.' It specifies two distinct modes (APPROVAL and AUTONOMOUS) and differentiates from sibling tools like search_slots (which finds slots) and get_booking_status (which checks status). The verb 'book' is specific and actionable.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use each mode: APPROVAL MODE (default) for standard bookings requiring customer payment, and AUTONOMOUS MODE when 'your application manages payment on behalf of the customer.' It also distinguishes from siblings by focusing on booking rather than searching or checking status.
get_booking_status (A, Read-only, Idempotent)
Check the status of a booking by booking_id. Returns status (pending, confirmed, failed, or cancelled), confirmation number, service details, and price charged.
| Name | Required | Description | Default |
|---|---|---|---|
| booking_id | Yes | The booking_id string returned by book_slot (format: bk_...). | |
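Since the description enumerates the possible status values, a caller can validate and branch on them. A sketch, where the booking_id is a placeholder and `is_terminal` is a hypothetical helper, not part of the API:

```python
# booking_id comes from a prior book_slot response (format: bk_...).
status_args = {"booking_id": "bk_example_123"}  # placeholder ID

# The four statuses the description says this tool can return.
VALID_STATUSES = {"pending", "confirmed", "failed", "cancelled"}

def is_terminal(status: str) -> bool:
    """Hypothetical helper: a booking stops changing once it is
    confirmed, failed, or cancelled; only 'pending' warrants polling."""
    return status in VALID_STATUSES - {"pending"}
```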
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds value by specifying the return data (status, confirmation number, service details, price charged), which isn't in annotations, but doesn't disclose additional behavioral traits like error conditions, rate limits, or authentication needs.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by a concise list of return values. Every sentence adds necessary information without redundancy, making it efficient and well-structured for quick understanding.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema) and rich annotations covering key behavioral aspects, the description is mostly complete. It specifies the return data, which compensates for the lack of output schema, but could improve by mentioning error cases or when the tool might fail.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the booking_id parameter fully documented in the schema. The description adds minimal semantics by reinforcing the parameter's purpose ('by booking_id') and implying it's from book_slot, but doesn't provide extra details beyond what the schema already states, meeting the baseline for high coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Check the status'), target resource ('a booking'), and key identifier ('by booking_id'). It distinguishes from siblings like book_slot (creation), get_supplier_info (supplier data), and search_slots (availability search) by focusing on status retrieval for existing bookings.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates usage when needing booking status details, with the required booking_id parameter suggesting it's for existing bookings. However, it lacks explicit guidance on when to use this versus alternatives (e.g., if status is already known from other operations) or any prerequisites beyond having a booking_id.
get_supplier_info (A, Read-only, Idempotent)
Returns information about the supplier network: available destinations, experience categories, booking platforms, and protocol details. Call this before search_slots to understand what regions and activity types are available.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already provide comprehensive behavioral hints (readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: false). The description adds valuable context about what information is returned (destinations, experience categories, booking platforms, protocol details) and the tool's strategic purpose in the workflow, which goes beyond what annotations can convey. No contradiction with annotations exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each serve a distinct purpose: the first explains what the tool returns, and the second provides crucial usage guidance. There is zero wasted language, and the information is front-loaded effectively.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (no parameters, comprehensive annotations, no output schema), the description provides excellent context about what information is returned and when to use it. The only minor gap is the lack of output format details, but the description compensates well by explaining the content categories that will be returned.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline would be 4. The description appropriately doesn't discuss parameters since none exist, but it does provide context about what information the tool returns, which helps the agent understand the output semantics despite the lack of an output schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Returns information about the supplier network') and resources ('available destinations, experience categories, booking platforms, and protocol details'). It distinguishes from sibling tools by explaining this is for understanding available regions and activity types before using search_slots.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Call this before search_slots to understand what regions and activity types are available'), creating a clear workflow relationship with a named alternative tool. This gives the agent clear context about the tool's role in the sequence of operations.
preview_slot (A, Read-only, Idempotent)
Get a shareable booking page URL for a slot. Returns a link the user can open in their browser to see full details and complete the booking themselves. Use this instead of book_slot when the user is a human who will pay directly — they enter their own name, email, and phone on the page and pay via Stripe. No need to collect customer details yourself.
| Name | Required | Description | Default |
|---|---|---|---|
| slot_id | Yes | Slot ID from search_slots results. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover safety aspects (readOnly, non-destructive, idempotent), but the description adds valuable context about the user flow (human pays via Stripe, enters own details) and output format (shareable URL). No contradictions with annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: purpose, usage context, and exclusion guidance. Every sentence adds value without redundancy, and key information is front-loaded.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with good annotations and full schema coverage, the description provides excellent context about the user workflow and output. The only minor gap is lack of explicit mention of what happens after link generation (e.g., expiration, tracking).
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with one well-documented parameter, so the description doesn't need to add parameter details. It appropriately focuses on tool behavior rather than repeating schema information.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get a shareable booking page URL') and resource ('for a slot'), and explicitly distinguishes it from the sibling 'book_slot' by explaining it returns a link for user self-booking rather than direct booking.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('when the user is a human who will pay directly') and when not to use it ('No need to collect customer details yourself'), and names the alternative ('Use this instead of book_slot').
search_slots (A, Read-only, Idempotent)
Search available last-minute tours, activities, and experiences worldwide. Queries live production inventory from 42 suppliers across Iceland, Italy, Egypt, Japan, Morocco, Portugal, Tanzania, Finland, Montenegro, Romania, Turkey, USA, UK, China, Mexico, Costa Rica, and Brazil via the OCTO booking standard. Results sorted by urgency (soonest first). Call this first when a user asks about tours. Follow up with preview_slot for a booking link or book_slot to book directly.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | City or country filter, partial match (e.g. 'Rome', 'Iceland'). Leave empty for all locations. | |
| category | No | Category filter (e.g. 'experiences'). Leave empty for all. | |
| max_price | No | Maximum price in USD. Omit or set to 0 for all prices. | |
| hours_ahead | No | Return slots starting within this many hours. Default: 168 (1 week). | |
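All four filters are optional and combine, so a typical call narrows by location, price, and time window. A sketch with illustrative values only:

```python
# Illustrative search_slots arguments; every value here is an example.
search_args = {
    "city": "Rome",             # partial match on city or country name
    "category": "experiences",  # omit to search all categories
    "max_price": 150,           # USD; omit or 0 to include all prices
    "hours_ahead": 48,          # only slots starting within 2 days
                                # (the default is 168, i.e. one week)
}

# Omitting every filter is also valid: it returns all locations,
# categories, and prices within the default 168-hour window.
unfiltered_args = {}
```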
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key behavioral traits (read-only, open-world, idempotent, non-destructive), but the description adds valuable context beyond this: it discloses the specific supplier sources (17 named providers), the sorting method ('sorted by urgency (soonest first)'), and the real-time nature ('real inventory'). No contradiction with annotations exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose. However, the lengthy list of 17 supplier names could be trimmed for brevity without losing essential context (e.g., summarizing as '17 Bokun suppliers'). The sentences are otherwise efficient and purposeful.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search with filtering), rich annotations, and full schema coverage, the description is largely complete. It adds supplier details and sorting behavior not in structured fields. The absence of an output schema is a minor gap, but the description hints at return content ('Returns real inventory'), making it adequate for the context.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents all four parameters (city, category, max_price, hours_ahead). The description adds minimal value by listing the parameters in a different phrasing ('Use city/category/hours_ahead/max_price to filter'), but doesn't provide additional semantic context beyond what's in the schema, meeting the baseline for high coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search for last-minute available tours and activities') and resources ('real inventory from 17 Bokun suppliers'), distinguishing it from siblings like 'book_slot' (booking) and 'get_supplier_info' (supplier metadata). It explicitly mentions the OCTO open booking protocol, adding technical specificity.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool vs. alternatives: it instructs to 'Call get_supplier_info first to see all available destinations' for broader context, clearly differentiating from sibling tools. It also implies usage for last-minute availability searches, though it doesn't explicitly state when not to use it.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.