LastMinuteDeals Booking API
Server Details
Last-minute booking slots across 11 suppliers. Search, price, and execute bookings via AI agents.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: johnanleitner1-Coder/lastminutedeals-api
- GitHub Stars: 0
- Server Listing: lastminutedeals-api
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose with no overlap: book_slot handles booking creation, get_booking_status checks status, get_supplier_info provides supplier network details, and search_slots finds available inventory. The descriptions clearly differentiate their functions, eliminating any ambiguity for an agent.
All tool names follow a consistent verb_noun pattern (e.g., book_slot, get_booking_status, get_supplier_info, search_slots) using snake_case throughout. This predictability makes the tool set easy to navigate and understand at a glance.
With 4 tools, the count is reasonable for a booking API focused on last-minute deals, covering core operations like search, booking, status checks, and supplier info. It feels slightly lean but well-scoped, as each tool serves a distinct and essential function without bloat.
The tool set covers key workflows: discovering suppliers, searching slots, booking, and checking status, which supports a complete booking lifecycle. A minor gap is the lack of tools for updating or canceling bookings, but agents can likely work around this given the focus on last-minute deals where such actions might be limited.
Available Tools
6 tools

book_from_itinerary · A · Read-only · Idempotent
Convert a travel itinerary into real bookings. Accepts raw itinerary text (natural language, bullet points, or structured), extracts destinations and activity mentions, and matches them against live inventory. Returns booking page URLs for each matched activity. Use this when a user has an itinerary and wants to book the activities they can. Not all items will match — the response shows which matched and which didn't.
| Name | Required | Description | Default |
|---|---|---|---|
| itinerary | Yes | Raw itinerary text. Can be natural language, bullet points, or structured. Must mention at least one destination (city or country name). Example: '3 days in Iceland: glacier hike, northern lights tour, horse riding' | |
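To make the call shape concrete, here is a minimal sketch of an MCP tools/call request for this tool; the request id is arbitrary and the itinerary text simply reuses the example from the parameter description.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "book_from_itinerary",
    "arguments": {
      "itinerary": "3 days in Iceland: glacier hike, northern lights tour, horse riding"
    }
  }
}
```

Per the description, the response would list which itinerary items matched live inventory (with booking page URLs) and which did not.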
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide: it explains the matching process ('extracts destinations and activity mentions, matches them against live inventory'), discloses partial matching behavior ('Not all items will match'), and describes the response format ('shows which matched and which didn't'). While annotations cover safety (readOnly, non-destructive), the description provides operational transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well structured and concise: the opening sentence states the core function, the middle sentences explain the matching process, and the closing sentences provide usage guidance. Every sentence earns its place with no wasted words, and key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (itinerary parsing and matching), the description provides good context about the conversion process and partial matching behavior. While there's no output schema, the description explains what the response contains. The annotations cover safety aspects well, making this description appropriately complete for its purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents the single required parameter. The description adds minimal semantic context by mentioning format flexibility ('natural language, bullet points, or structured') but doesn't provide additional parameter guidance beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('convert', 'extracts', 'matches') and resources ('travel itinerary', 'real bookings', 'booking page URLs'). It distinguishes itself from siblings by focusing on itinerary conversion rather than slot booking, status checking, or supplier information retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('when a user has an itinerary and wants to book the activities they can') and includes important exclusions ('Not all items will match'). This clearly differentiates it from sibling tools like book_slot (for specific bookings) or search_slots (for general availability searches).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
book_slot · A
Book a last-minute slot for a customer. Two modes: (1) APPROVAL MODE (default): creates a Stripe Checkout Session and returns a checkout_url — you MUST share this URL with the customer immediately so they can complete payment. Booking is confirmed with the supplier after payment. (2) AUTONOMOUS MODE: if you supply a wallet_id (pre-funded agent wallet) and execution_mode='autonomous', the booking completes immediately and returns a confirmation_number directly — no checkout step, no human action required. Use autonomous mode when your application manages payment on behalf of the customer. Bookings are real and go directly to the supplier.
| Name | Required | Description | Default |
|---|---|---|---|
| slot_id | Yes | Slot ID from search_slots results. Required. | |
| quantity | No | Number of people to book. Price is per person × quantity. | 1 |
| wallet_id | No | Pre-funded agent wallet ID (format: wlt_...). Provide this to enable autonomous mode. | |
| customer_name | Yes | Full name of the person attending the experience. | |
| customer_email | Yes | Email address where booking confirmation will be sent. | |
| customer_phone | Yes | Phone number including country code (e.g. +15550001234). | |
| execution_mode | No | Set to 'autonomous' when providing a wallet_id. Omit for standard approval (checkout URL) flow. | |
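As a sketch of the default approval flow, a tools/call request might carry the following arguments; the slot id and customer details are placeholders, not real values.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "book_slot",
    "arguments": {
      "slot_id": "slot_example_123",
      "quantity": 2,
      "customer_name": "Jane Doe",
      "customer_email": "jane@example.com",
      "customer_phone": "+15550001234"
    }
  }
}
```

For the autonomous flow, the same call would additionally pass a wallet_id (a wlt_... identifier) and execution_mode set to 'autonomous', and per the description the result would carry a confirmation_number instead of a checkout_url.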
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-readOnly, non-destructive, non-idempotent operation, which aligns with the description's 'Book' action. The description adds valuable behavioral context beyond annotations: it discloses that it creates a Stripe Checkout Session, confirms with the supplier after payment, offers an autonomous wallet-funded mode, and specifies that bookings are real, which helps the agent understand side effects and workflow.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and efficiently details the workflow in a handful of concise sentences, each adding critical information (e.g., Stripe integration, the approval vs. autonomous payment flows, and the fact that bookings are real) without redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (payment integration, email notifications, supplier coordination) and lack of output schema, the description does a good job explaining the return values (checkout_url in approval mode, confirmation_number in autonomous mode) and the post-payment steps. However, it could be more complete by mentioning error handling or what happens if payment fails, which would help the agent anticipate edge cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters well-documented in the schema. The description does not add any parameter-specific details beyond what the schema provides, such as format examples or constraints for customer data. Baseline score of 3 is appropriate as the schema carries the full burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Book a last-minute slot'), identifies the resource (slot for a customer), and distinguishes from sibling tools by specifying it creates a Stripe Checkout Session and handles payment flow, unlike search_slots (which finds slots) or get_booking_status (which checks status).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('last-minute slot') and mentions directing customers to a checkout URL, but does not explicitly state when to use this tool versus alternatives like search_slots or what prerequisites are needed (e.g., slot_id from search_slots). It provides clear operational guidance but lacks explicit comparison with siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_booking_status · A · Read-only · Idempotent
Check the status of a booking by booking_id. Returns status (pending, confirmed, failed, or cancelled), confirmation number, service details, and price charged.
| Name | Required | Description | Default |
|---|---|---|---|
| booking_id | Yes | The booking_id string returned by book_slot (format: bk_...). | |
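A minimal status-check request, assuming a placeholder booking_id in the documented bk_... format, might look like this:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_booking_status",
    "arguments": {
      "booking_id": "bk_example_456"
    }
  }
}
```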
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key traits (read-only, non-destructive, idempotent, closed-world), so the bar is lower. The description adds valuable context by detailing the return values (status types, confirmation number, etc.), which helps the agent understand what to expect beyond the safety profile. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two short, efficient sentences that front-load the purpose and list the return details without waste. Every part (action, parameter, output) earns its place, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema) and rich annotations, the description is mostly complete. It covers purpose, usage hint, and return values. However, it lacks explicit guidance on error cases or prerequisites (e.g., valid booking_id format), leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'booking_id' fully documented in the schema (including format 'bk_...'). The description adds no additional parameter details beyond what the schema provides, so it meets the baseline for high schema coverage without compensating further.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Check the status of a booking') and resource ('by booking_id'), distinguishing it from siblings like 'book_slot' (creation) and 'search_slots' (searching). It explicitly lists the returned data fields (status, confirmation number, etc.), making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'booking_id string returned by book_slot', suggesting it should be used after a booking is made. However, it does not explicitly state when not to use this tool or name alternatives (e.g., vs. 'search_slots' for broader queries), which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_supplier_info · A · Read-only · Idempotent
Returns information about the supplier network: available destinations, experience categories, booking platforms, and protocol details. Call this before search_slots to understand what regions and activity types are available.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
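Because the tool takes no parameters, a call reduces to the tool name with an empty arguments object, roughly:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_supplier_info",
    "arguments": {}
  }
}
```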
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide comprehensive behavioral hints (read-only, closed-world, idempotent, non-destructive). The description adds valuable context about the tool's role in the workflow (prerequisite for search_slots) and the type of metadata returned, which goes beyond what annotations convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with two focused sentences: the first explains what the tool returns, the second provides crucial usage guidance. Every word serves a purpose with zero redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's zero-parameter nature, comprehensive annotations, and clear sibling relationships, the description provides excellent context. The only minor gap is lack of output format details (no output schema exists), but the description adequately explains the content categories returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline would be 4. The description appropriately explains that this tool takes no parameters and instead returns system-wide configuration information, which adds meaningful context beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('returns information') and resources ('supplier network'), listing concrete data types (destinations, categories, platforms, protocols). It explicitly distinguishes from sibling 'search_slots' by explaining this provides foundational context for that tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Call this before search_slots') and why ('to understand what regions and activity types are available'). It clearly positions this as a prerequisite information-gathering step before using the sibling search tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
preview_slot · A · Read-only · Idempotent
Get a shareable booking page URL for a slot. Returns a link the user can open in their browser to see full details and complete the booking themselves. Use this instead of book_slot when the user is a human who will pay directly — they enter their own name, email, and phone on the page and pay via Stripe. No need to collect customer details yourself.
| Name | Required | Description | Default |
|---|---|---|---|
| slot_id | Yes | Slot ID from search_slots results. | |
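A sketch of a preview request, with a placeholder slot id taken from a prior search_slots result:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "preview_slot",
    "arguments": {
      "slot_id": "slot_example_123"
    }
  }
}
```

The returned URL is meant to be handed to the customer, who completes the booking and payment in their own browser.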
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains that the link allows users to enter their own details and pay via Stripe, and that customer details aren't collected by the agent. Annotations already cover read-only, non-destructive, and idempotent aspects, so the description appropriately supplements with user workflow details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly front-loaded with the core purpose in the first sentence, followed by usage guidance and behavioral context in subsequent sentences. Every sentence earns its place with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no output schema) and rich annotations, the description provides complete context: it explains what the tool does, when to use it, how it differs from alternatives, and key behavioral aspects like payment method and customer data handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the slot_id parameter fully. The description doesn't add any parameter-specific details beyond what the schema provides, which is adequate given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get a shareable booking page URL') and resource ('for a slot'), distinguishing it from sibling tools like book_slot by explaining it returns a link for user self-booking rather than directly booking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool ('Use this instead of book_slot when the user is a human who will pay directly') and provides clear exclusions ('No need to collect customer details yourself'), offering direct comparison with an alternative sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_slots · A · Read-only · Idempotent
Search available last-minute tours, activities, and experiences worldwide. Queries live production inventory from 42 suppliers across Iceland, Italy, Egypt, Japan, Morocco, Portugal, Tanzania, Finland, Montenegro, Romania, Turkey, USA, UK, China, Mexico, Costa Rica, and Brazil via the OCTO booking standard. Results sorted by urgency (soonest first). Call this first when a user asks about tours. Follow up with preview_slot for a booking link or book_slot to book directly.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | City or country filter, partial match (e.g. 'Rome', 'Iceland'). Leave empty for all locations. | |
| category | No | Category filter (e.g. 'experiences'). Leave empty for all. | |
| max_price | No | Maximum price in USD. Omit or set to 0 for all prices. | |
| hours_ahead | No | Return slots starting within this many hours. | 168 (1 week) |
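For illustration, a filtered search combining the optional parameters could be issued as follows; the filter values are arbitrary examples.

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "search_slots",
    "arguments": {
      "city": "Iceland",
      "category": "experiences",
      "max_price": 200,
      "hours_ahead": 48
    }
  }
}
```

Omitting city, category, and max_price (or setting max_price to 0) broadens the search to all locations, categories, and prices, per the parameter descriptions.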
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies the urgency-based sorting ('soonest first'), notes the supplier count and the countries covered, and names the OCTO booking standard as the data source. Annotations already cover read-only, open-world, idempotent, and non-destructive traits, but the description provides operational details that enhance understanding without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence clearly states the core purpose, followed by coverage details and sorting behavior. The list of countries is somewhat lengthy but relevant for transparency. Every sentence contributes useful information, though it could be slightly more streamlined by grouping the countries more concisely.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search with multiple filters), rich annotations (covering safety and idempotency), and full schema coverage, the description provides good contextual completeness. It explains the data sources, sorting behavior, and scope (last-minute availability). The main gap is the lack of output schema, but the description doesn't need to detail return values extensively since annotations imply a read-only list operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents all 5 parameters. The description doesn't add any parameter-specific details beyond what's in the schema (e.g., it doesn't explain 'city' filtering further or provide examples for 'category'). The baseline score of 3 reflects adequate parameter documentation through the schema alone, with no extra value from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Search') and resources (last-minute tours, activities, and experiences, sourced from named suppliers via the OCTO standard). It distinguishes itself from sibling tools like 'book_slot' (which books rather than searches) and 'get_booking_status' (which checks existing bookings rather than finding available slots).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: searching for last-minute available tours/activities with urgency-based sorting. However, it doesn't explicitly state when NOT to use it or mention alternatives like 'get_supplier_info' for supplier details instead of slot availability. The context is clear but lacks explicit exclusions or sibling tool comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.