
booking

Server Details

Book premium private chauffeur transfers in Paris and across Europe via AI agents.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

(Diagram: MCP client → Glama → MCP server)

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 6 of 6 tools scored. Lowest: 3.4/5.

Server Coherence: A

Disambiguation: 5/5

Each tool has a distinct and well-defined purpose with no overlap: get_quote calculates prices, book_ride creates bookings, get_service_info provides service details, get_vehicles lists vehicle options, resolve_flight resolves flight details, and resolve_location geocodes addresses. The descriptions clearly differentiate their roles in the booking workflow.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with snake_case, such as get_quote, book_ride, resolve_flight, and resolve_location. This uniformity makes the toolset predictable and easy to understand for agents.

Tool Count: 5/5

With 6 tools, the set is well-scoped for a booking server, covering essential steps like quoting, booking, vehicle selection, service info, and data resolution (flight and location). Each tool serves a necessary function without redundancy or bloat.

Completeness: 4/5

The toolset covers the core booking workflow comprehensively, including quote generation, booking creation, vehicle listing, service information, and data resolution. A minor gap is the lack of tools for managing or canceling existing bookings, but agents can still complete the primary booking process effectively.
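The workflow this coherence review describes (resolve a fuzzy location, quote, then book) can be sketched against a stub MCP client. Here `call_tool` and its canned responses are hypothetical stand-ins for an agent's real tool-calling interface, not the server's actual API; only the parameter names come from the tool schemas on this page.

```python
import uuid

def call_tool(name, args):
    """Hypothetical stand-in for an MCP client's tool-call method (canned responses)."""
    if name == "resolve_location":
        return [{"place_id": "cdg-t2e", "address": "CDG Terminal 2E, 95700 Roissy-en-France"}]
    if name == "get_quote":
        return {"quote_id": "q-123", "vehicles": [{"vehicle_id": "sedan", "price_eur": 95}]}
    if name == "book_ride":
        return {"payment_link": "https://pay.example/" + args["quote_id"]}
    raise ValueError(name)

# 1. Resolve the fuzzy pickup before quoting, as the tool descriptions recommend.
pickup = call_tool("resolve_location", {"query": "CDG"})[0]["address"]

# 2. Quote first: the returned quote_id locks the price for 15 minutes.
quote = call_tool("get_quote", {
    "pickup_address": pickup,
    "dropoff_address": "Hotel Le Bristol Paris",
    "pickup_date": "27/03/2026",  # dd/mm/yyyy, per get_quote's description
    "pickup_time": "14:00",
})

# 3. Book against the stored quote; idempotency_key makes retries safe.
booking = call_tool("book_ride", {
    "quote_id": quote["quote_id"],
    "vehicle_id": quote["vehicles"][0]["vehicle_id"],
    "passenger_name": "Jane Doe",
    "passenger_email": "jane@example.com",
    "passenger_phone": "+33612345678",
    "idempotency_key": str(uuid.uuid4()),
})
print(booking["payment_link"])  # → https://pay.example/q-123
```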

Available Tools

6 tools
book_ride: A

Create a secure payment link to book a private chauffeur transfer. The reservation is created automatically ONLY after the customer completes payment. REQUIRED: call get_quote first and pass the resulting quote_id + the chosen vehicle_id. The price is taken from the stored quote (not from any price field you pass). Pass idempotency_key to make retries safe: the same key returns the same payment link instead of creating a duplicate.

Parameters (JSON Schema)

- bags (optional): Number of bags
- channel (optional): Caller channel identifier for observability. If omitted, the channel stored with the quote (from get_quote) is used.
- quote_id (required): quote_id returned by get_quote. Must be passed. An expired quote (>15 min) requires a fresh get_quote.
- passengers (optional): Number of passengers
- vehicle_id (required): Vehicle ID chosen from the get_quote results
- flight_number (optional): Flight number for airport pickups (e.g. AF123)
- passenger_name (required): Full name of the passenger
- idempotency_key (optional): Agent-generated key (e.g. UUID) to deduplicate retries. Same key + same params returns the same payment link for 24 h; same key + different params returns an IDEMPOTENCY_CONFLICT error.
- passenger_email (required): Passenger email for booking confirmation
- passenger_phone (required): Passenger phone with country code (e.g. +33612345678)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the reservation is created automatically only after payment completion, price is taken from the stored quote, and idempotency_key usage for safe retries. However, it doesn't cover all potential behavioral aspects like error handling or response format, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by critical usage notes and behavioral details. Every sentence adds essential information—no wasted words. It efficiently covers prerequisites, payment flow, and idempotency in a compact format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a booking tool with 10 parameters and no output schema, the description does well by explaining the payment flow, prerequisites, and idempotency. However, it lacks details on the response format (e.g., what the payment link looks like) and error scenarios, which could be important for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal semantic value beyond the schema, such as emphasizing that quote_id and vehicle_id come from get_quote and that price is from the stored quote. This meets the baseline of 3 when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Create a secure payment link to book a private chauffeur transfer.' It specifies the action (create payment link) and resource (private chauffeur transfer booking), distinguishing it from sibling tools like get_quote (which provides quotes) or get_vehicles (which lists vehicles).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'REQUIRED: call get_quote first and pass the resulting quote_id + the chosen vehicle_id.' It also specifies when not to use it (e.g., without a valid quote) and mentions alternatives implicitly by referencing get_quote as a prerequisite, though it doesn't explicitly list all sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_quote: A

Calculate the exact price for a private chauffeur transfer. Returns prices for all available vehicles AND a quote_id valid for 15 minutes. Always call this before book_ride and pass the quote_id to book_ride — it locks the price and prevents drift. IMPORTANT: pickup_date must be in dd/mm/yyyy format.

Parameters (JSON Schema)

- channel (optional, default 'mcp-direct'): Caller channel identifier for observability: 'chatbot', 'whatsapp', 'claude-desktop', 'email-agent', or your agent name.
- form_type (optional, default one_way): Booking type
- num_hours (optional): Number of hours (required for hourly bookings, minimum 3)
- pickup_date (required): Pickup date in dd/mm/yyyy format (e.g. '27/03/2026')
- pickup_time (required): Pickup time in HH:MM format (e.g. '14:00')
- step_address (optional): Intermediate stop address
- pickup_address (required): Full pickup address (e.g. 'CDG Terminal 2E' or a street address)
- dropoff_address (required): Full drop-off address
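Note the format mismatch across this server's tools: get_quote expects dd/mm/yyyy while resolve_flight expects YYYY-MM-DD. A small conversion helper keeps an agent from mixing them up; the name to_quote_date is ours, not part of the server.

```python
from datetime import datetime

def to_quote_date(iso_date: str) -> str:
    """Convert YYYY-MM-DD (as used by resolve_flight) to get_quote's dd/mm/yyyy."""
    return datetime.strptime(iso_date, "%Y-%m-%d").strftime("%d/%m/%Y")

print(to_quote_date("2026-03-27"))  # → 27/03/2026
```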
Behavior: 4/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the quote_id validity period ('valid for 15 minutes'), the price-locking mechanism ('locks the price and prevents drift'), and the mandatory call sequence. However, it doesn't mention error conditions, rate limits, or authentication requirements, leaving some gaps.

Conciseness: 5/5

The description is efficiently structured with zero wasted sentences. It front-loads the core purpose, immediately follows with critical behavioral information (quote_id validity and the book_ride relationship), and ends with the most important parameter constraint. Every sentence earns its place by providing essential guidance.

Completeness: 4/5

For a tool with 8 parameters, no annotations, and no output schema, the description provides strong contextual completeness regarding purpose, usage sequence, and key behavioral constraints. However, without an output schema, it doesn't describe the return structure beyond mentioning 'prices for all available vehicles' and 'quote_id', leaving the exact response format unspecified.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, only reinforcing the format requirement for pickup_date ('must be in dd/mm/yyyy format'), which is already stated in the schema. This meets the baseline expectation when schema coverage is complete.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('calculate', 'returns') and resources ('price for a private chauffeur transfer', 'prices for all available vehicles', 'quote_id'). It distinguishes itself from siblings by explicitly mentioning the relationship with book_ride and contrasting with get_vehicles (which presumably lists vehicles without pricing).

Usage Guidelines: 5/5

The description provides explicit usage guidelines: 'Always call this before book_ride' establishes a mandatory sequence, 'pass the quote_id to book_ride' specifies the required parameter flow, and 'it locks the price and prevents drift' explains the functional benefit. This clearly differentiates when to use this tool versus alternatives like book_ride or get_vehicles.

get_service_info: A

Get information about MyDriverParis services, coverage areas, airports served, and policies. Use this to answer customer questions.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool gets information, implying a read-only operation, but doesn't disclose behavioral traits like authentication requirements, rate limits, error conditions, or response format. The description is minimal and lacks essential operational context for a tool with no structured annotations.

Conciseness: 5/5

The description is appropriately sized and front-loaded. The first sentence states the purpose, and the second provides usage context. Both sentences earn their place with zero wasted words, making it efficient and well-structured.

Completeness: 3/5

Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is complete enough for basic understanding. It covers purpose and usage context. However, without annotations or an output schema, it lacks details on response format or behavioral constraints, which could be important for an AI agent to use it effectively.

Parameters: 4/5

The tool has 0 parameters with 100% schema description coverage (an empty schema). The description adds no parameter information, which is appropriate since there are no parameters. The baseline for 0 parameters is 4, as the description doesn't need to compensate for missing parameter details.

Purpose: 4/5

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('information about MyDriverParis services, coverage areas, airports served, and policies'). It distinguishes from siblings by focusing on informational retrieval rather than booking, quoting, or resolving operations. However, it doesn't explicitly differentiate from potential informational siblings like get_vehicles.

Usage Guidelines: 4/5

The description provides clear context for usage: 'Use this to answer customer questions.' This gives a specific scenario for when to use the tool. It doesn't explicitly state when not to use it or name alternatives among siblings, but the context is sufficiently clear for a read-only informational tool.

get_vehicles: B

List all available vehicles with capacity and base rates. Use this to help the customer choose the right vehicle for their trip.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it indicates this is a read operation ('List'), it doesn't specify authentication requirements, rate limits, pagination behavior, or what the return format looks like (e.g., JSON structure). For a tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves beyond its basic purpose.

Conciseness: 5/5

The description is highly concise and front-loaded, consisting of two sentences that efficiently convey the tool's purpose and usage. The first sentence states what it does, and the second provides context for when to use it, with no wasted words or redundant information.

Completeness: 3/5

Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is minimally adequate. It covers the purpose and basic usage but lacks details on behavioral aspects like authentication needs or return format. Without annotations or an output schema, the description should ideally provide more context on what the output looks like (e.g., 'returns a list of vehicles with fields for capacity and base rates') to be fully complete.

Parameters: 4/5

The input schema has 0 parameters with 100% coverage, meaning there are no parameters to document. The description appropriately doesn't discuss parameters, which is correct for a parameterless tool. It adds value by clarifying that the tool's output includes 'capacity and base rates', which isn't captured in the schema since there's no output schema provided.

Purpose: 4/5

The description clearly states the tool's purpose with a specific verb ('List') and resource ('all available vehicles'), and includes additional details about what information is provided ('capacity and base rates'). It distinguishes itself from siblings like book_ride or get_quote by focusing on vehicle listing rather than booking or pricing. However, it doesn't explicitly differentiate from potential similar listing tools that might exist in other contexts.

Usage Guidelines: 3/5

The description provides implied usage guidance by stating 'Use this to help the customer choose the right vehicle for their trip', which suggests this tool is for pre-booking vehicle selection. However, it doesn't explicitly state when to use this versus alternatives like get_quote (which might provide pricing) or get_service_info (which might provide broader service details), nor does it mention any exclusions or prerequisites for use.

resolve_flight: A

Resolve a flight number to its airports, terminals, scheduled times, and status. Use BEFORE book_ride whenever the customer provides a flight number so that pickup time and terminal are correct. Input is tolerant: accepts 'AF007', 'AF 007', 'AF-007', 'af7', 'AFR007' — the tool normalizes internally. Returns an array of matching operations (usually 1 when direction is set).

Parameters (JSON Schema)

- date (required): Date in YYYY-MM-DD (NOT dd/mm/yyyy). This is the arrival date for pickups and the departure date for drop-offs.
- direction (optional, default 'arrival'): 'arrival' when picking the passenger up at an airport, 'departure' when dropping off for a flight.
- flight_number (required): Flight number in any format: 'AF007', 'AF 007', 'af-7', 'AFR007'. The server normalizes.
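The description promises tolerant input handling. A plausible client-side sketch of that normalization follows; the actual rule lives server-side (so clients need not pre-clean), and this is only a guess at its behavior based on the listed examples.

```python
import re

def normalize_flight(raw: str) -> str:
    """Guess at the server's normalization: uppercase the carrier code,
    drop spaces/hyphens, strip leading zeros ('af-007' -> 'AF7')."""
    m = re.fullmatch(r"\s*([A-Za-z]{2,3})[\s-]*0*(\d+)\s*", raw)
    if m is None:
        raise ValueError(f"unrecognized flight number: {raw!r}")
    return m.group(1).upper() + m.group(2)

# The IATA-style variants collapse to one key; 'AFR007' keeps its 3-letter ICAO code.
assert {normalize_flight(s) for s in ["AF007", "AF 007", "AF-007", "af7"]} == {"AF7"}
assert normalize_flight("AFR007") == "AFR7"
```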
Behavior: 4/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: input tolerance (accepts various flight number formats with internal normalization), output format (returns an array of matching operations), and typical result (usually 1 match when direction is set). However, it lacks details on error handling or rate limits.

Conciseness: 5/5

The description is efficiently structured in three sentences: purpose, usage guideline, and behavioral details. Each sentence adds critical information without redundancy, making it front-loaded and easy to parse for an AI agent.

Completeness: 4/5

Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, input behavior, and output format. However, it lacks explicit error scenarios or authentication requirements, which could be relevant for a flight resolution tool.

Parameters: 4/5

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining input tolerance for flight_number (it accepts 'AF007', 'AF 007', 'AF-007', and 'af7', normalizing internally), which goes beyond the schema's generic description. It also clarifies the typical output behavior related to parameters, though it doesn't detail all parameters.

Purpose: 5/5

The description clearly states the tool's purpose: 'Resolve a flight number to its airports, terminals, scheduled times, and status.' It specifies the verb ('resolve') and resource ('flight number'), and distinguishes it from siblings like book_ride by focusing on flight information retrieval rather than booking or other services.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool: 'Use BEFORE book_ride whenever the customer provides a flight number so that pickup time and terminal are correct.' It names a specific alternative (book_ride) and clarifies the context (pre-booking for accurate pickup details).

resolve_location: A

Resolve a free-text location (city, airport, hotel, station, full address) to a canonical address + coordinates, using the same geocoder that Zeplan consumes for bookings. Use this BEFORE get_quote when the customer gives a fuzzy pickup/drop-off (e.g. 'CDG', 'Le Bristol', 'Gare du Nord', raw GPS). Returns up to 5 candidates with place_id, full address, type hint, and optionally coordinates.

Parameters (JSON Schema)

- query (required): Free-text location. Examples: 'CDG', 'Charles de Gaulle Terminal 2E', 'Hotel Le Bristol Paris', 'Gare du Nord', '15 rue de la Paix Paris', '48.8566, 2.3522'.
- include_coords (optional, default false): If true, also fetch GPS coordinates for the first result (extra API call). Request only when the agent has picked the right candidate.
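Because include_coords triggers an extra API call, the intended pattern appears to be two-step: list candidates cheaply, then re-resolve only the chosen one with coordinates. The mock below imitates the documented result shape (place_id, full address, type hint, optional coords) with made-up values; the real tool queries the server's geocoder.

```python
def resolve_location(query, include_coords=False):
    """Mock matching the documented result shape; values are made up."""
    candidates = [
        {"place_id": "cdg-t2e", "address": "CDG Terminal 2E, 95700 Roissy-en-France", "type": "airport"},
        {"place_id": "cdg", "address": "Aéroport Paris-Charles de Gaulle", "type": "airport"},
    ]
    if include_coords:
        candidates[0]["coords"] = (49.0038, 2.5710)  # illustrative, not real GPS data
    return candidates

# Step 1: cheap call without coordinates; pick the right candidate first.
options = resolve_location("CDG")
chosen = options[0]

# Step 2: request coordinates only for the picked candidate, since
# include_coords costs an extra API call.
with_coords = resolve_location(chosen["address"], include_coords=True)[0]
print(with_coords["place_id"], with_coords["coords"])
```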
Behavior: 4/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it uses 'the same geocoder that Zeplan consumes for bookings' (context on the data source), returns 'up to 5 candidates' (output limit), and mentions optional coordinates with an 'extra API call' (performance implication). However, it lacks details on error handling or rate limits, preventing a perfect score.

Conciseness: 5/5

The description is front-loaded with the core purpose in the first sentence, followed by usage guidelines and output details. Each sentence adds essential information—such as workflow placement, examples, and behavioral traits—with zero waste, making it efficient and well-structured for quick comprehension.

Completeness: 4/5

Given no annotations and no output schema, the description does a strong job covering purpose, usage, and key behaviors. It explains the output format ('place_id, full address, type hint, and optionally coordinates') and limits ('up to 5 candidates'), compensating for the lack of structured fields. However, it omits details like error cases or authentication needs, leaving minor gaps in completeness.

Parameters: 4/5

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds value by explaining the tool's purpose with 'free-text location' and providing context for include_coords ('extra API call' and 'request only when the agent has picked the right candidate'), enhancing understanding beyond the schema. Since the parameters are documented, the baseline is 3, but the added context justifies a higher score.

Purpose: 5/5

The description clearly states the specific action ('resolve'), resource ('free-text location'), and transformation ('to a canonical address + coordinates'). It explicitly distinguishes from sibling tools by stating 'Use this BEFORE get_quote' and provides concrete examples of fuzzy inputs like 'CDG', 'Le Bristol', and 'Gare du Nord', making the purpose unambiguous and distinct.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool ('BEFORE get_quote when the customer gives a fuzzy pickup/drop-off') and includes examples of appropriate inputs. It also distinguishes from alternatives by specifying its role in the workflow relative to get_quote, offering clear context for selection without misleading exclusions.
