Last Minute Deals HQ
Server Details
Real-time last-minute tour and activity inventory. Search and book available slots across Iceland, Italy, Morocco, Portugal, Edinburgh and more. Powered by OCTO (Ventrata, Bokun, Zaui, Peek Pro).
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP (see the connection sketch below)
- URL:
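Since the transport is Streamable HTTP, any standard MCP client can connect. The sketch below uses the official MCP Python SDK; the endpoint URL is a placeholder because the listing does not display it, and should be replaced with the server's actual address.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint -- substitute the server's real Streamable HTTP URL.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```

The per-tool sketches further down assume a `session` initialized this way.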
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across the 4 tools scored.
Each tool has a clearly distinct purpose with no overlap: book_slot handles booking creation, get_booking_status checks status, get_supplier_info provides supplier details, and search_slots finds available inventory. The descriptions clearly differentiate their functions, eliminating any ambiguity.
All tool names follow a consistent verb_noun pattern (e.g., book_slot, get_booking_status, get_supplier_info, search_slots), using snake_case throughout. This predictability makes the set easy to navigate and understand.
With 4 tools, this server is well-scoped for its purpose of last-minute bookings. Each tool serves a distinct role in the workflow (search, book, check status, get info), and there are no redundant or missing tools, making the count appropriate.
The tool set provides complete coverage for the last-minute booking domain: search_slots finds options, book_slot creates bookings, get_booking_status tracks them, and get_supplier_info offers context. This covers the core lifecycle from discovery to confirmation without gaps.
Available Tools
6 tools

book_from_itinerary (A, Read-only, Idempotent)
Convert a travel itinerary into real bookings. Accepts raw itinerary text (natural language, bullet points, or structured), extracts destinations and activity mentions, and matches them against live inventory. Returns booking page URLs for each matched activity. Use this when a user has an itinerary and wants to book the activities they can. Not all items will match — the response shows which matched and which didn't.
| Name | Required | Description | Default |
|---|---|---|---|
| itinerary | Yes | Raw itinerary text. Can be natural language, bullet points, or structured. Must mention at least one destination (city or country name). Example: '3 days in Iceland: glacier hike, northern lights tour, horse riding' | |
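A minimal call sketch, assuming an initialized `ClientSession` from the connection example above; the itinerary text is illustrative, not a known fixture.

```python
from mcp import ClientSession

async def convert_itinerary(session: ClientSession) -> None:
    # Raw itinerary text; natural language is accepted per the tool description.
    result = await session.call_tool(
        "book_from_itinerary",
        arguments={
            "itinerary": (
                "3 days in Iceland: glacier hike, "
                "northern lights tour, horse riding"
            )
        },
    )
    # The response indicates which activities matched live inventory
    # (with booking page URLs) and which did not.
    print(result.content)
```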
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains the matching process ('extracts destinations and activity mentions', 'matches them against live inventory'), the partial success nature ('Not all items will match'), and the response format ('Returns booking page URLs for each matched activity', 'response shows which matched and which didn't'). Annotations cover safety (readOnlyHint=true, destructiveHint=false) and idempotency, but the description enriches this with operational details. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by operational details and usage guidelines. Each sentence adds distinct value: the first explains the conversion process, the second details input handling, the third specifies output, and the fourth provides usage context. There is no redundant or wasted text, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (itinerary parsing and matching), the description is mostly complete: it covers purpose, input handling, output format, and usage guidelines. Annotations provide safety and idempotency context, but the lack of an output schema means the description must explain return values, which it does adequately. A minor gap is the absence of details on error handling or inventory sources, but overall it's sufficient for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions the parameter ('Accepts raw itinerary text') and adds context about acceptable formats ('natural language, bullet points, or structured') and requirements ('must mention at least one destination'). However, the input schema already provides 100% coverage with a detailed description including examples, so the description adds only marginal semantic value. This meets the baseline of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Convert a travel itinerary into real bookings' with specific verbs ('extracts destinations and activity mentions', 'matches them against live inventory', 'Returns booking page URLs'). It distinguishes from siblings like 'book_slot' (which likely books specific slots) or 'search_slots' (which searches without booking) by focusing on itinerary-based conversion rather than direct booking or searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Use this when a user has an itinerary and wants to book the activities they can.' It also provides exclusion guidance: 'Not all items will match — the response shows which matched and which didn't,' indicating it's for partial matching rather than guaranteed bookings. This clearly differentiates it from alternatives like 'book_slot' (for confirmed bookings) or 'search_slots' (for inventory searching).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
book_slot (A)
Book a last-minute slot for a customer. Two modes: (1) APPROVAL MODE (default): creates a Stripe Checkout Session and returns a checkout_url — you MUST share this URL with the customer immediately so they can complete payment. Booking is confirmed with the supplier after payment. (2) AUTONOMOUS MODE: if you supply a wallet_id (pre-funded agent wallet) and execution_mode='autonomous', the booking completes immediately and returns a confirmation_number directly — no checkout step, no human action required. Use autonomous mode when your application manages payment on behalf of the customer. Bookings are real and go directly to the supplier.
| Name | Required | Description | Default |
|---|---|---|---|
| slot_id | Yes | Slot ID from search_slots results. | |
| quantity | No | Number of people to book. Price is per person × quantity. | 1 |
| wallet_id | No | Pre-funded agent wallet ID (format: wlt_...). Provide this to enable autonomous mode. | |
| customer_name | Yes | Full name of the person attending the experience. | |
| customer_email | Yes | Email address where booking confirmation will be sent. | |
| customer_phone | Yes | Phone number including country code (e.g. +15550001234). | |
| execution_mode | No | Set to 'autonomous' when providing a wallet_id. Omit for standard approval (checkout URL) flow. | |
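The two execution modes translate into two argument shapes. A hedged sketch, assuming an initialized `ClientSession`; the slot and wallet IDs are hypothetical placeholders.

```python
from mcp import ClientSession

CUSTOMER = {
    "customer_name": "Jane Doe",
    "customer_email": "jane@example.com",
    "customer_phone": "+15550001234",
}

async def book_with_approval(session: ClientSession, slot_id: str) -> None:
    # Approval mode (default): the response carries a checkout_url that must
    # be shared with the customer so they can pay via Stripe.
    result = await session.call_tool(
        "book_slot",
        arguments={"slot_id": slot_id, "quantity": 2, **CUSTOMER},
    )
    print(result.content)

async def book_autonomously(session: ClientSession, slot_id: str) -> None:
    # Autonomous mode: a pre-funded wallet plus execution_mode='autonomous'
    # skips checkout and returns a confirmation_number directly.
    result = await session.call_tool(
        "book_slot",
        arguments={
            "slot_id": slot_id,
            "quantity": 2,
            **CUSTOMER,
            "wallet_id": "wlt_example",   # hypothetical wallet ID
            "execution_mode": "autonomous",
        },
    )
    print(result.content)
```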
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover basic hints (non-readOnly, non-destructive, etc.), but the description adds valuable behavioral context: creates Stripe Checkout Session, booking confirmation occurs after payment success, supplier confirmation happens post-payment, and email confirmation is sent. This goes beyond what annotations provide about the tool's workflow.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with five concise sentences that each add value: states purpose, explains Stripe integration, provides user guidance, describes confirmation workflow, and clarifies booking reality. No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's transactional nature (booking with payment), no output schema, and rich annotations, the description provides strong context about the Stripe integration, confirmation flow, and email outcome. It could slightly improve by mentioning error cases or response format, but covers most essential aspects well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing full parameter documentation. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3 without compensating or enhancing parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Book a last-minute slot'), identifies the resource ('customer'), and distinguishes from siblings by mentioning Stripe Checkout Session creation and email confirmation, which none of the sibling tools (get_booking_status, get_supplier_info, search_slots) do.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context ('last-minute slot') and implies usage after search_slots (via slot_id reference), but doesn't explicitly state when NOT to use this tool or name alternatives. It mentions directing customers to checkout_url, which offers practical guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_booking_status (A, Read-only, Idempotent)
Check the status of a booking by booking_id. Returns status (pending, confirmed, failed, or cancelled), confirmation number, service details, and price charged.
| Name | Required | Description | Default |
|---|---|---|---|
| booking_id | Yes | The booking_id string returned by book_slot (format: bk_...). | |
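A short sketch of the status lookup, assuming an initialized `ClientSession`; the booking ID is a placeholder in the documented bk_... format.

```python
from mcp import ClientSession

async def check_booking(session: ClientSession) -> None:
    # booking_id comes from an earlier book_slot response.
    result = await session.call_tool(
        "get_booking_status",
        arguments={"booking_id": "bk_example"},  # placeholder ID
    )
    # Expected fields per the description: status, confirmation number,
    # service details, and price charged.
    print(result.content)
```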
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only, non-destructive, and idempotent behavior, but the description adds valuable context: it discloses the possible status values (pending, confirmed, failed, cancelled) and the specific data returned (confirmation number, service details, price charged), which are not captured in annotations. This enhances transparency beyond the structured hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the purpose and efficiently lists return details. Every part earns its place with no wasted words, making it highly concise and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter), rich annotations, and no output schema, the description is mostly complete: it explains the purpose, usage, and return values. However, it could slightly improve by mentioning error cases or prerequisites, but it adequately covers the essentials for a read-only lookup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the parameter 'booking_id' fully documented in the schema. The description adds no additional meaning beyond what the schema provides (e.g., it doesn't explain format or constraints further), so it meets the baseline for high schema coverage without compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Check the status of a booking') and resource ('booking by booking_id'), distinguishing it from siblings like 'book_slot' (creation) and 'search_slots' (searching). It explicitly mentions the return data, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'booking_id' as input and referencing 'book_slot' as the source of this ID, but it does not explicitly state when to use this tool versus alternatives like 'search_slots' or provide exclusions. The guidance is clear but lacks explicit alternatives or when-not scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_supplier_info (A, Read-only, Idempotent)
Returns information about the supplier network: available destinations, experience categories, booking platforms, and protocol details. Call this before search_slots to understand what regions and activity types are available.
No parameters.
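Since the tool takes no arguments, the call is a bare discovery request; a sketch assuming an initialized `ClientSession`.

```python
from mcp import ClientSession

async def discover_network(session: ClientSession) -> None:
    # Parameterless call; useful before search_slots to learn which
    # destinations and categories are covered.
    result = await session.call_tool("get_supplier_info", arguments={})
    print(result.content)
```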
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as read-only, non-destructive, and idempotent. The description adds valuable context about the tool's role in the workflow (prerequisite for search_slots) and the scope of information returned (network metadata). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences: first states what the tool returns with specific examples, second provides clear usage guidance. Every word earns its place with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless read-only tool with comprehensive annotations, the description provides excellent workflow context and purpose explanation. The only minor gap is lack of output format details (no output schema exists), but the description adequately conveys the information scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema coverage, the baseline would be 4. The description appropriately explains this is a parameterless discovery call that returns supplier network metadata, which adds semantic context beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Returns information') and resources ('supplier network'), listing concrete data types (destinations, categories, platforms, protocol details). It explicitly distinguishes from sibling 'search_slots' by stating this should be called before it.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Call this before search_slots') and why ('to understand what regions and activity types are available'). It clearly positions this as a prerequisite discovery tool versus the search functionality.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
preview_slot (A, Read-only, Idempotent)
Get a shareable booking page URL for a slot. Returns a link the user can open in their browser to see full details and complete the booking themselves. Use this instead of book_slot when the user is a human who will pay directly — they enter their own name, email, and phone on the page and pay via Stripe. No need to collect customer details yourself.
| Name | Required | Description | Default |
|---|---|---|---|
| slot_id | Yes | Slot ID from search_slots results. | |
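A sketch of handing a human user a self-service link, assuming an initialized `ClientSession`; the slot ID is a placeholder taken from a prior search_slots result.

```python
from mcp import ClientSession

async def share_booking_link(session: ClientSession, slot_id: str) -> None:
    # Returns a shareable booking page URL; the customer enters their own
    # details and pays on that page, so no customer fields are sent here.
    result = await session.call_tool(
        "preview_slot",
        arguments={"slot_id": slot_id},
    )
    print(result.content)
```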
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains that the tool returns a shareable URL for user self-service booking with payment via Stripe, and that customer details are collected on that page. While annotations cover safety (readOnly, non-destructive, idempotent), the description provides practical implementation details that help the agent understand the user flow.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences with zero waste: first states the core purpose, second provides usage guidelines with alternatives, third clarifies what not to do. Every sentence earns its place by adding distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read-only tool with comprehensive annotations, the description provides excellent contextual completeness regarding purpose, usage guidelines, and behavioral context. The only minor gap is the lack of output schema, but the description adequately explains what's returned (a shareable URL).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents the single required 'slot_id' parameter. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation without adding extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get a shareable booking page URL for a slot') and distinguishes it from sibling tool 'book_slot' by explaining it returns a link for user self-service rather than directly booking. This provides explicit verb+resource differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('when the user is a human who will pay directly') and when not to use it ('No need to collect customer details yourself'), while naming the alternative tool ('Use this instead of book_slot'). This gives clear contextual boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_slots (A, Read-only, Idempotent)
Search available last-minute tours, activities, and experiences worldwide. Queries live production inventory from 42 suppliers across Iceland, Italy, Egypt, Japan, Morocco, Portugal, Tanzania, Finland, Montenegro, Romania, Turkey, USA, UK, China, Mexico, Costa Rica, and Brazil via the OCTO booking standard. Results sorted by urgency (soonest first). Call this first when a user asks about tours. Follow up with preview_slot for a booking link or book_slot to book directly.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | City or country filter, partial match (e.g. 'Rome', 'Iceland'). Leave empty for all locations. | |
| category | No | Category filter (e.g. 'experiences'). Leave empty for all. | |
| max_price | No | Maximum price in USD. Omit or set to 0 for all prices. | |
| hours_ahead | No | Return slots starting within this many hours. | 168 (1 week) |
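A filtered search sketch, assuming an initialized `ClientSession`; the filter values are illustrative.

```python
from mcp import ClientSession

async def find_slots(session: ClientSession) -> None:
    # All filters are optional; results come back sorted by urgency
    # (soonest first) per the tool description.
    result = await session.call_tool(
        "search_slots",
        arguments={
            "city": "Iceland",          # partial match on city or country
            "category": "experiences",
            "max_price": 150,           # USD; omit or 0 for all prices
            "hours_ahead": 48,          # default is 168 (one week)
        },
    )
    print(result.content)
```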
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond this: it specifies the data sources (Bokun, Ventrata, etc.), the protocol (OCTO open booking), and the sorting logic ('urgency, soonest first'), which are not covered by annotations. No contradictions with annotations are present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by additional context in a second sentence. Each sentence earns its place by specifying data sources, protocol, and sorting, with no wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search with multiple filters), rich annotations, and 100% schema coverage, the description is largely complete. It covers purpose, data sources, and behavior. However, without an output schema, it does not describe return values (e.g., format of results), leaving a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, providing clear details for every parameter (e.g., 'city' for partial match, 'hours_ahead' with a default). The description does not add any parameter-specific semantics beyond what the schema already explains, so it meets the baseline of 3 for high schema coverage without extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search for last-minute available tours and activities') and resource ('real inventory from Bokun, Ventrata, Zaui, and Peek Pro via the OCTO open booking protocol'), distinguishing it from sibling tools like 'book_slot' (which books) and 'get_booking_status' (which checks status). It explicitly mentions the urgency-based sorting, adding further specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage ('last-minute available tours and activities', 'sorted by urgency'), implying it's for finding immediate options. However, it does not explicitly state when to use this tool versus alternatives like 'get_supplier_info' or provide exclusions (e.g., not for booking).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is marked unhealthy when Glama cannot connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!