Glama

booboooking Appointment Booking

Server Details

Book appointments with service providers on booboooking.com — no token required.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

[Diagram: MCP client → Glama MCP Gateway → MCP server]

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.3/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: list_services for discovery, check_availability for scheduling, book for creation, and cancel for deletion. The descriptions explicitly differentiate their roles in the booking workflow, eliminating any ambiguity.

Naming Consistency: 5/5

All tools follow a consistent 'booboooking_verb' pattern (e.g., booboooking_book, booboooking_cancel), with verbs that clearly indicate actions (list, check, book, cancel). This uniformity makes the tool set predictable and easy to navigate.

Tool Count: 5/5

With 4 tools, the server is well-scoped for appointment booking, covering the essential workflow: discover services, check availability, book appointments, and cancel reservations. Each tool serves a necessary function without bloat or redundancy.

Completeness: 5/5

The tool set provides complete CRUD/lifecycle coverage for the appointment booking domain: list_services (read), check_availability (read), book (create), and cancel (delete). There are no obvious gaps, as it supports the full user journey from discovery to cancellation.
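The lifecycle above can be sketched as the tool-call payloads an agent would send, in order. This is a sketch only: the provider slug "radnaimark" comes from the example in the list_services schema, while the service ID, reservation ID, and PIN are hypothetical placeholders, and the transport (MCP client or gateway) is out of scope here.

```python
# The four-tool lifecycle in call order. Field names mirror the parameter
# tables in the tool listing below.

workflow = [
    ("booboooking_list_services", {"provider": "radnaimark"}),
    ("booboooking_check_availability", {
        "provider": "radnaimark",
        "service_id": "svc-123",          # hypothetical: taken from list_services
        "start_date": "20260415",
        "days_ahead": 14,
    }),
    ("booboooking_book", {
        "provider": "radnaimark",
        "service_id": "svc-123",
        "date": "20260415",
        "time": "09:30",                  # the "time" field from availability results
        "customer_phone": "+36201234567", # E.164 format, per the book description
    }),
    ("booboooking_cancel", {
        "provider": "radnaimark",
        "reservation_id": "507f1f77bcf86cd799439011",  # hypothetical 24-hex id
        "pin": "1234",                                  # hypothetical pin
    }),
]

tool_order = [name for name, _ in workflow]
```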

Available Tools

4 tools
booboooking_book (A)

Book an appointment on booboooking.com (free OR paid services — payment is always in cash at the appointment, no online payment). For customer_name and customer_email, use the signed-in user's profile from your host app — do NOT ask them to retype. Always ask for customer_phone separately (it is not in the sign-in profile). customer_phone MUST be in international E.164 format starting with + and country code (e.g. +36201234567), no spaces/dashes — convert local-format numbers before calling this tool or the booking will fail validation. On success the response includes id and pin (MUST remember paired, for cancellation) and optionally cash_due: { amount, currency } — if present, remind the user to bring that amount in cash.

Parameters (JSON Schema)

date (required): Date in YYYYMMDD format (e.g. "20260415")
time (required): Start time in HH:MM format (e.g. "09:30") — use the "time" field from booboooking_check_availability results
provider (required): Provider identifier
service_id (required): Service ID from booboooking_list_services
customer_name (optional): Customer full name
customer_email (optional): Customer email address
customer_phone (optional): Customer phone number (mobile)
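Since local-format phone numbers must be converted before calling the tool, a minimal client-side normalizer might look like the following. It is a sketch, not part of the server: the mapping of a leading "06" to "+36" assumes a Hungarian domestic number (the E.164 example in the description is Hungarian), and real inputs may warrant a dedicated phone-parsing library.

```python
import re

# '+', a non-zero country-code digit, then 7-14 more digits (E.164 shape)
E164 = re.compile(r"^\+[1-9]\d{7,14}$")

def normalize_phone(raw: str, default_prefix: str = "+36") -> str:
    """Best-effort conversion of a local-format number to E.164.

    Assumption: a leading '06' is the Hungarian domestic trunk prefix and is
    replaced by default_prefix ('+36'). Raises ValueError if the result still
    is not E.164-shaped, so the agent can re-ask the user before booking.
    """
    digits = re.sub(r"[\s\-().]", "", raw)   # drop spaces, dashes, punctuation
    if digits.startswith("00"):              # international 00-prefix form
        digits = "+" + digits[2:]
    elif digits.startswith("06"):            # Hungarian local form (assumed)
        digits = default_prefix + digits[2:]
    if not E164.match(digits):
        raise ValueError(f"not a valid E.164 number: {digits!r}")
    return digits
```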
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: payment is always in cash at the appointment (never online), customer_phone must be converted to E.164 format or the booking fails validation, and a successful call returns an id and pin that must be retained together for cancellation. However, it doesn't mention rate limits or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by essential constraints and usage rules. Every sentence adds critical information without waste, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (7 parameters, no annotations, no output schema), the description is mostly complete. It covers purpose, usage guidelines, and key behavioral constraints. However, it lacks details on error responses, confirmation messages, or what happens after booking, which could be useful for a mutation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds the E.164 format requirement for customer_phone and sourcing guidance for customer_name and customer_email, but most syntax and format details already live in the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Book an appointment'), target resource ('on booboooking.com'), and distinguishes it from siblings by focusing on booking rather than checking availability or listing services. It's precise and avoids tautology.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states that the tool covers both free and paid services, clarifies that payment is always in cash at the appointment (never online), and gives concrete selection guidance: take customer_name and customer_email from the signed-in profile rather than asking, but always ask the user for customer_phone separately. This provides clear context for correct use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

booboooking_cancel (A)

Cancel a reservation on booboooking.com. Requires the reservation_id AND pin that were returned from the original booboooking_book call — both are needed as proof of booking ownership. If the user asks to cancel but you do not have the pin in your conversation state, you MUST ask them for it — you cannot cancel without it. Never invent or guess a pin.

Parameters (JSON Schema)

pin (required): The 4-digit PIN returned from the original booking
provider (required): Provider identifier (same as the one used for booking)
reservation_id (required): The `id` returned from the original booking (24-hex MongoDB ObjectId)
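Because the server requires both values as proof of ownership, an agent can cheaply pre-validate them before spending a tool call. A minimal sketch, assuming only the formats stated in the schema (24-hex ObjectId, 4-digit PIN):

```python
import re

OBJECT_ID = re.compile(r"^[0-9a-f]{24}$")   # 24-hex MongoDB ObjectId
PIN = re.compile(r"^\d{4}$")                # 4-digit PIN

def cancel_args_valid(reservation_id: str, pin: str) -> bool:
    """Cheap client-side check before calling booboooking_cancel.

    Catches obviously malformed ids/pins locally, so the agent can re-ask
    the user instead of burning a failed tool call. It does NOT prove the
    pair matches a real booking; only the server can do that.
    """
    return bool(OBJECT_ID.match(reservation_id)) and bool(PIN.match(pin))
```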
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes critical behavioral traits: the requirement for both reservation_id and pin as proof of ownership, the need to ask the user for the pin if missing, and the prohibition against inventing or guessing pins. This covers authentication needs and error handling, though it doesn't mention potential side effects like refund policies or confirmation messages.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three sentences that each serve a distinct purpose: stating the tool's purpose, explaining prerequisites, and providing critical usage rules. There is no wasted language, and important information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with no annotations and no output schema, the description does well by covering purpose, prerequisites, and behavioral rules. However, it doesn't describe what happens after cancellation (e.g., confirmation, refund status, or error responses), leaving some uncertainty about the operation's outcome.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds minimal value beyond the schema by mentioning that both reservation_id and pin come from the original booking call, but doesn't provide additional syntax or format details. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Cancel a reservation') and resource ('on booboooking.com'), distinguishing it from sibling tools like 'booboooking_book' (create) and 'booboooking_check_availability' (query). It provides a verb+resource combination that is unambiguous and distinct from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool (to cancel a reservation) and when not to use it (if the pin is missing, requiring the user to provide it first). It also references the original 'booboooking_book' call as the source of required data, providing clear context and prerequisites for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

booboooking_check_availability (A)

Check available time slots for a provider and service on booboooking.com. Returns available dates with slots — each slot has a "time" field (use this for booboooking_book) and a "display" field (show this to the user). Use this before booking.

Parameters (JSON Schema)

provider (required): Provider identifier
days_ahead (optional): How many days to check (default: 14, max: 30)
service_id (required): Service ID from booboooking_list_services (the "id" field)
start_date (optional): Start date in YYYYMMDD format (default: today)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the return format ('available dates with slots — each slot has a "time" field and a "display" field') and the relationship to booking, but lacks details on permissions, rate limits, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with three concise sentences that each add value: stating the purpose, detailing the return format, and providing usage guidance, with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, and return format, but could improve by addressing authentication needs or error scenarios, though this is mitigated by the clear schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters. The description adds no additional parameter semantics beyond what the schema provides, such as explaining how provider and service_id relate or clarifying date formats, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('Check available time slots') and resources ('provider and service on booboooking.com'), and distinguishes it from sibling tools by mentioning it should be used 'before booking' with booboooking_book.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Use this before booking') and implies an alternative (booboooking_book), but does not explicitly state when not to use it or mention the other sibling tool (booboooking_list_services).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

booboooking_list_services (A)

List available services for a provider on booboooking.com. Returns service names, IDs, duration, and price. Call this first to find the service ID needed for availability and booking.

Parameters (JSON Schema)

provider (required): Provider identifier (e.g. "radnaimark", "tomi")
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It adequately describes the read-only nature (listing) and return format, but doesn't mention potential limitations like pagination, rate limits, authentication requirements, or error conditions. The description adds useful context about the workflow role but lacks comprehensive behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence states purpose and return format, second sentence provides crucial workflow guidance. Every word serves a clear purpose and the information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read operation with one parameter and no output schema, the description provides good context about purpose, usage sequence, and return data. It could be more complete by mentioning authentication or error handling, but given the tool's simplicity, it's largely adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the single 'provider' parameter. The description doesn't add any additional parameter semantics beyond what's in the schema, maintaining the baseline score of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List available services'), target resource ('for a provider on booboooking.com'), and scope ('Returns service names, IDs, duration, and price'). It distinguishes itself from sibling tools by focusing on service listing rather than booking or availability checking.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool ('Call this first to find the service ID needed for availability and booking') and implicitly distinguishes it from sibling tools (booboooking_book, booboooking_check_availability) by establishing it as a prerequisite step in the workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
