Glama
Ownership verified

Server Details

Search and book last-minute tours and activities in 28 cities from 21 suppliers.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.4/5 across 6 of 6 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: book_from_itinerary converts itineraries to bookings, book_slot handles booking creation, get_booking_status checks status, get_supplier_info provides supplier details, preview_slot generates shareable booking pages, and search_slots finds available activities. The descriptions explicitly differentiate use cases, such as preview_slot for human users versus book_slot for autonomous payments, eliminating any ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (e.g., book_from_itinerary, get_booking_status, search_slots) with no deviations in style or casing. This predictability makes it easy for agents to understand and select tools based on their actions and targets, enhancing usability across the set.

Tool Count: 5/5

With 6 tools, the server is well-scoped for its purpose of last-minute tours and activities booking. Each tool serves a specific role in the workflow (e.g., search, preview, book, check status), and none feel redundant or missing, providing a balanced and efficient toolset for the domain.

Completeness: 5/5

The toolset offers complete coverage for the booking lifecycle: search_slots for discovery, preview_slot and book_slot for booking creation, get_booking_status for tracking, book_from_itinerary for itinerary-based bookings, and get_supplier_info for context. There are no obvious gaps, as all essential operations from search to confirmation are included, ensuring agents can handle end-to-end workflows without dead ends.

Available Tools

6 tools
book_from_itinerary (Grade: A)
Read-only · Idempotent

Convert a travel itinerary into real bookings. Accepts raw itinerary text (natural language, bullet points, or structured), extracts destinations and activity mentions, and matches them against live inventory. Returns booking page URLs for each matched activity. Use this when a user has an itinerary and wants to book the activities they can. Not all items will match — the response shows which matched and which didn't.

Parameters (JSON Schema)
- itinerary (required): Raw itinerary text. Can be natural language, bullet points, or structured. Must mention at least one destination (city or country name). Example: '3 days in Iceland: glacier hike, northern lights tour, horse riding'
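As a sketch of how an agent might invoke this tool: MCP clients call tools with a JSON-RPC 2.0 `tools/call` request, per the MCP specification. The helper name and request ID below are illustrative; the itinerary reuses the example from the parameter description.

```python
def tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build an MCP tools/call request envelope (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# The single required argument is the raw itinerary text.
request = tool_call("book_from_itinerary", {
    "itinerary": "3 days in Iceland: glacier hike, northern lights tour, horse riding",
})
```

The response (per the description above) would list which itinerary items matched live inventory, with a booking page URL for each match.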
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains the matching process ('extracts destinations and activity mentions, matches them against live inventory'), discloses partial matching behavior ('Not all items will match'), and describes the response format ('shows which matched and which didn't'). While annotations cover safety (readOnly, non-destructive, idempotent), the description provides operational details that help the agent understand how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three sentences that each serve distinct purposes: stating the core function, explaining the process, and providing usage guidelines. There's no wasted text, and key information is front-loaded in the first sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with comprehensive annotations (readOnly, openWorld, idempotent, non-destructive) but no output schema, the description provides good context about the matching process, partial results, and response format. It could be slightly more complete by explicitly mentioning authentication needs or rate limits, but it covers the essential operational behavior well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents the single parameter. The description mentions 'raw itinerary text' and the extraction process but doesn't add significant semantic detail beyond what the schema provides. The baseline score of 3 reflects adequate but not enhanced parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('convert', 'extracts', 'matches') and resources ('travel itinerary', 'real bookings', 'booking page URLs'). It distinguishes from siblings by focusing on itinerary conversion rather than direct booking (book_slot), status checking (get_booking_status), or slot searching (search_slots).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this when a user has an itinerary and wants to book the activities they can.' It also sets expectations about partial matching ('Not all items will match') and distinguishes from alternatives by implying this is for itinerary-based booking rather than direct slot booking or status queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

book_slot (Grade: A)

Book a last-minute slot for a customer. Two modes: (1) APPROVAL MODE (default): creates a Stripe Checkout Session and returns a checkout_url — you MUST share this URL with the customer immediately so they can complete payment. Booking is confirmed with the supplier after payment. (2) AUTONOMOUS MODE: if you supply a wallet_id (pre-funded agent wallet) and execution_mode='autonomous', the booking completes immediately and returns a confirmation_number directly — no checkout step, no human action required. Use autonomous mode when your application manages payment on behalf of the customer. Bookings are real and go directly to the supplier.

Parameters (JSON Schema)
- slot_id (required): Slot ID from search_slots results.
- quantity (optional): Number of people to book. Default: 1. Price is per person × quantity.
- wallet_id (optional): Pre-funded agent wallet ID (format: wlt_...). Provide this to enable autonomous mode.
- customer_name (required): Full name of the person attending the experience.
- customer_email (required): Email address where booking confirmation will be sent.
- customer_phone (required): Phone number including country code (e.g. +15550001234).
- execution_mode (optional): Set to 'autonomous' when providing a wallet_id. Omit for standard approval (checkout URL) flow.
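The two modes differ only in two extra arguments. A hedged sketch of both payloads, using the standard MCP `tools/call` envelope — the slot ID, wallet ID, and customer details are hypothetical placeholders:

```python
def tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build an MCP tools/call request envelope (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

customer = {
    "customer_name": "Ada Lovelace",      # hypothetical customer details
    "customer_email": "ada@example.com",
    "customer_phone": "+15550001234",
}

# Approval mode (default): the result should carry a checkout_url that
# must be shared with the customer immediately so they can pay.
approval = tool_call("book_slot", {
    "slot_id": "slot_abc123",             # hypothetical ID from search_slots
    "quantity": 2,
    **customer,
})

# Autonomous mode: add a pre-funded wallet and execution_mode='autonomous';
# the result should carry a confirmation_number directly, no checkout step.
autonomous = tool_call("book_slot", {
    "slot_id": "slot_abc123",
    "quantity": 2,
    "wallet_id": "wlt_demo0001",          # hypothetical wallet (format: wlt_...)
    "execution_mode": "autonomous",
    **customer,
}, request_id=2)
```

The design point worth noting: mode selection is driven by argument presence rather than a separate tool, so an agent that omits `wallet_id` can never accidentally spend from a wallet.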
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a non-read-only, non-destructive operation, but the description adds valuable behavioral context beyond that. It explains that bookings are real and go directly to the supplier, details the two distinct modes with their outcomes (checkout_url vs. confirmation_number), and specifies human action requirements. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded, starting with the core purpose and immediately detailing the two modes. Each sentence adds essential information without redundancy, efficiently covering operational flows, requirements, and outcomes in a compact format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (two modes, real bookings) and lack of output schema, the description does a strong job of explaining what happens in each mode and the resulting outputs. It could slightly improve by explicitly mentioning error cases or confirmation details, but it provides sufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description adds some semantic context by linking 'wallet_id' and 'execution_mode' to autonomous mode functionality, but it doesn't provide significant additional meaning beyond what the schema states. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Book a last-minute slot') and identifies the resource ('for a customer'). It distinguishes this tool from siblings like 'search_slots' (which finds slots) and 'get_booking_status' (which checks status) by emphasizing the booking action and its two operational modes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use each mode: APPROVAL MODE (default) for scenarios requiring customer payment via Stripe Checkout, and AUTONOMOUS MODE for applications managing payment on behalf of the customer. It clearly differentiates use cases ('when your application manages payment') and mentions prerequisites like supplying 'wallet_id'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_booking_status (Grade: A)
Read-only · Idempotent

Check the status of a booking by booking_id. Returns status (pending, confirmed, failed, or cancelled), confirmation number, service details, and price charged.

Parameters (JSON Schema)
- booking_id (required): The booking_id string returned by book_slot (format: bk_...).
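Since a booking can sit in 'pending' until payment completes, an agent would typically poll this tool until a terminal status appears. A minimal sketch, assuming `call_tool` stands in for whatever MCP client session executes the tool and returns its parsed result:

```python
import time

# Terminal states taken from the status values listed in the description.
TERMINAL_STATES = {"confirmed", "failed", "cancelled"}

def wait_for_booking(call_tool, booking_id: str,
                     attempts: int = 5, delay: float = 2.0) -> dict:
    """Poll get_booking_status until the booking leaves 'pending',
    or return the last result after `attempts` tries."""
    result = {}
    for _ in range(attempts):
        result = call_tool("get_booking_status", {"booking_id": booking_id})
        if result.get("status") in TERMINAL_STATES:
            break
        time.sleep(delay)
    return result
```

Polling is safe here precisely because the tool is annotated read-only and idempotent: repeated calls cannot change the booking.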
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior, which the description doesn't repeat. The description adds value by specifying the return data (status, confirmation number, service details, price charged) and listing possible status values, which provides useful context beyond annotations. No contradictions exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and well-structured, efficiently covering purpose, input, and output without unnecessary detail. It is front-loaded with the core action and wastes no words, making it highly concise and effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no output schema), rich annotations, and 100% schema coverage, the description is mostly complete, and it adds output details not found in the structured fields. It could improve slightly by mentioning idempotency or error cases, but it is sufficient for this context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter fully documented in the schema. The description adds minimal semantics by linking booking_id to 'returned by book_slot' and noting the format 'bk_...', but this is largely redundant with the schema. Baseline 3 is appropriate as the schema carries the burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Check') and resource ('status of a booking'), specifies the key input ('by booking_id'), and distinguishes from siblings like book_slot (which creates bookings) and search_slots (which finds available slots). It provides a specific, unambiguous purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning 'booking_id returned by book_slot', suggesting it's used after a booking is created. However, it doesn't explicitly state when to use this tool versus alternatives like preview_slot or get_supplier_info, nor does it provide exclusion criteria. The guidance is clear but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_supplier_info (Grade: A)
Read-only · Idempotent

Returns information about the supplier network: available destinations, experience categories, booking platforms, and protocol details. Call this before search_slots to understand what regions and activity types are available.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds useful context about the tool's role in the workflow (preparing for search_slots) but doesn't provide additional behavioral details like rate limits, authentication needs, or response format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states purpose with specific details, the second provides crucial usage guidance. Every word earns its place, and information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's zero parameters, rich annotations (covering safety and idempotency), and no output schema, the description provides excellent purpose clarity and usage guidelines. It could slightly improve by hinting at response structure, but it's largely complete for this context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, so schema description coverage is trivially 100%. The description appropriately doesn't discuss parameters, focusing instead on the tool's purpose and usage context. The baseline for zero-parameter tools is 4, as there's nothing to compensate for.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Returns information') and resources ('supplier network'), listing concrete data types (destinations, experience categories, booking platforms, protocol details). It distinguishes from sibling tools by explicitly contrasting with 'search_slots'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Call this before search_slots to understand what regions and activity types are available.' This gives clear temporal sequencing (before) and functional relationship (to understand) with a named alternative tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

preview_slot (Grade: A)
Read-only · Idempotent

Get a shareable booking page URL for a slot. Returns a link the user can open in their browser to see full details and complete the booking themselves. Use this instead of book_slot when the user is a human who will pay directly — they enter their own name, email, and phone on the page and pay via Stripe. No need to collect customer details yourself.

Parameters (JSON Schema)
- slot_id (required): Slot ID from search_slots results.
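The preview_slot vs. book_slot choice described above reduces to a single question: does a human pay directly? A small sketch of that routing, with a hypothetical slot ID and the standard MCP `tools/call` envelope:

```python
def pick_booking_tool(human_pays_directly: bool) -> str:
    """Per the guidance above: preview_slot hands a human a page where they
    enter their own details and pay via Stripe; book_slot is for bookings
    the agent drives itself."""
    return "preview_slot" if human_pays_directly else "book_slot"

# preview_slot needs only the slot ID; no customer details are collected.
preview_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": pick_booking_tool(True),
        "arguments": {"slot_id": "slot_abc123"},  # hypothetical ID from search_slots
    },
}
```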
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations already indicate this is a read-only, non-destructive, idempotent operation, the description explains that the tool returns a shareable URL for user self-booking, that payment happens via Stripe, and that customer details are collected on the booking page rather than through the tool. This provides important implementation context without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and well-structured. It uses three sentences that each earn their place: the first states the core purpose, the second provides usage guidelines and differentiation from siblings, and the third clarifies implementation details. There's zero wasted text, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, read-only operation) and comprehensive annotations, the description provides excellent contextual completeness. It explains the tool's purpose, when to use it, how it differs from alternatives, and what happens after invocation. The only minor gap is the lack of output schema, but the description adequately explains what the tool returns ('a link the user can open').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents the single required 'slot_id' parameter. The description doesn't add any additional parameter semantics beyond what's in the schema, so it meets the baseline expectation for high schema coverage without providing extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get a shareable booking page URL') and identifies the resource ('for a slot'). It explicitly distinguishes from the sibling 'book_slot' by explaining this returns a link for user self-booking rather than direct booking, providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives: 'Use this instead of book_slot when the user is a human who will pay directly.' It also specifies when not to use it ('No need to collect customer details yourself'), creating clear boundaries for appropriate usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_slots (Grade: A)
Read-only · Idempotent

Search available last-minute tours, activities, and experiences worldwide. Queries live production inventory from 42 suppliers across Iceland, Italy, Egypt, Japan, Morocco, Portugal, Tanzania, Finland, Montenegro, Romania, Turkey, USA, UK, China, Mexico, Costa Rica, and Brazil via the OCTO booking standard. Results sorted by urgency (soonest first). Call this first when a user asks about tours. Follow up with preview_slot for a booking link or book_slot to book directly.

Parameters (JSON Schema)
- city (optional): City or country filter, partial match (e.g. 'Rome', 'Iceland'). Leave empty for all locations.
- category (optional): Category filter (e.g. 'experiences'). Leave empty for all.
- max_price (optional): Maximum price in USD. Omit or set to 0 for all prices.
- hours_ahead (optional): Return slots starting within this many hours. Default: 168 (1 week).
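Since every filter is optional, an agent can simply omit whatever the user didn't constrain and let the defaults apply. A sketch of a filtered search, again as an MCP `tools/call` payload with illustrative values:

```python
def tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build an MCP tools/call request envelope (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Hypothetical user constraints; `category` is left out, so the search
# spans all categories, and hours_ahead narrows the default 168-hour window.
filters = {"city": "Rome", "max_price": 120, "hours_ahead": 48}

# Drop empty/zero filters rather than sending them explicitly.
request = tool_call("search_slots", {k: v for k, v in filters.items() if v})
```

Results come back sorted soonest-first, so the first slots returned are the most urgent ones to surface to the user.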
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies the data source ('20 Bokun suppliers'), protocol ('OCTO open booking protocol'), sorting behavior ('sorted by urgency (soonest first)'), and scope ('last-minute available tours and activities'). Annotations cover safety (readOnlyHint, destructiveHint) and idempotency, so the description complements them without contradiction, earning a high score for added context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose in the first sentence. However, the list of 20 supplier names is verbose and could be condensed or omitted without losing essential information, slightly reducing efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (search with multiple filters), rich annotations (covering safety and idempotency), and no output schema, the description is largely complete: it explains the purpose, usage, behavioral traits, and parameters. The main gap is the lack of output details (e.g., what data is returned), but annotations provide some safety context, so it's not severely incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents all 4 parameters. The description mentions the parameters ('Use city/category/hours_ahead/max_price to filter') but adds no additional semantic details beyond what's in the schema, such as format examples or constraints, so it meets the baseline of 3 without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search for last-minute available tours and activities'), the resource ('real inventory from 20 Bokun suppliers'), and distinguishes it from siblings by mentioning a prerequisite call ('Call get_supplier_info first') and implying it's for searching rather than booking (vs. book_slot) or previewing (vs. preview_slot).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit guidance on when to use this tool ('Search for last-minute available tours and activities'), when to use an alternative ('Call get_supplier_info first to see all available destinations'), and context for filtering ('Use city/category/hours_ahead/max_price to filter'), clearly differentiating it from sibling tools like get_supplier_info for destinations and book_slot for booking.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

