Teres — Booking for AI Agents
Server Details
Search and book appointments at local businesses. Barbershops, salons, spas, and more.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 10 of 10 tools scored. Lowest: 2.9/5.
Each tool has a clearly distinct purpose with no ambiguity. For example, create_booking, get_booking, cancel_booking, and reschedule_booking cover different aspects of the booking lifecycle, while search_businesses, get_business_info, list_services, get_availability, get_reviews, and submit_review handle distinct discovery and information tasks. The descriptions reinforce these boundaries, making misselection unlikely.
All tool names follow a consistent verb_noun pattern using snake_case, such as create_booking, get_availability, and submit_review. This predictability aids in understanding and usage, with no deviations in style or convention across the set.
With 10 tools, the set is well-scoped for a booking system, covering core workflows from business discovery to booking management and reviews. Each tool earns its place without redundancy, aligning with typical server sizes of 3-15 tools for such a domain.
The tool surface provides complete CRUD/lifecycle coverage for the booking domain. It includes search and discovery (search_businesses, get_business_info), service and availability checks (list_services, get_availability), booking operations (create, get, reschedule, cancel), and review handling (get_reviews, submit_review), with no obvious gaps that would hinder agent workflows.
Available Tools
10 tools

cancel_booking (Grade A)
Cancel an existing booking. Requires both the confirmation code and customer email. Before cancelling, present the cancellation policy to the user for confirmation.
| Name | Required | Description | Default |
|---|---|---|---|
| customer_email | Yes | The email address used when booking | |
| confirmation_code | Yes | The 8-character confirmation code from the booking | |
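For illustration, a minimal sketch of the argument payload this tool expects, with all values hypothetical (the call wrapper itself depends on your MCP client):

```python
# Hypothetical cancel_booking arguments. Both fields are required;
# the confirmation code is the 8-character code from the original booking.
cancel_args = {
    "confirmation_code": "AB12CD34",          # hypothetical example code
    "customer_email": "jordan@example.com",   # email used when booking
}

# Sanity-check the shape before sending.
assert len(cancel_args["confirmation_code"]) == 8
assert "@" in cancel_args["customer_email"]
```

Per the description, the agent should surface the cancellation policy to the user before sending this payload.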
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the requirement for user confirmation before canceling, which adds valuable context about workflow. However, it doesn't disclose other behavioral traits like whether cancellations are reversible, what permissions are needed, rate limits, or what happens after cancellation (e.g., refunds, notifications).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each serve a distinct purpose: the first states the tool's function and requirements, the second provides crucial usage guidance. No wasted words, and information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is adequate but incomplete. It covers the basic purpose and a key workflow step (user confirmation), but lacks information about what happens after cancellation, error conditions, or return values. Given the complexity of a cancellation operation, more behavioral context would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters clearly documented in the schema. The description adds minimal value beyond the schema by emphasizing that both parameters are required ('Requires both the confirmation code and customer email'), but doesn't provide additional semantic context like format examples or business rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('cancel') and target resource ('an existing booking'), distinguishing it from siblings like 'reschedule_booking' or 'get_booking'. It specifies the exact operation without ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Before cancelling, present the cancellation policy to the user for confirmation'), which implies it should be used after user confirmation. However, it doesn't explicitly state when NOT to use it or name alternatives like 'reschedule_booking' for modifying bookings instead of canceling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_booking (Grade A)
Create a new booking/appointment at a business. Requires customer information (name and email) and a selected time slot. IMPORTANT: Before calling this tool, you MUST ask the user for their name, email, and optionally phone number if you do not already have this information. Do not guess or fabricate customer details. Returns a booking confirmation with a unique booking_id.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | Optional booking notes | |
| customer | Yes | Customer contact information | |
| staff_id | No | Optional preferred staff member ID | |
| service_id | Yes | The service ID (from list_services) | |
| start_time | Yes | Appointment start time in ISO 8601 UTC (e.g., '2026-04-05T14:00:00Z') | |
| location_id | Yes | The UUID of the location to book with | |
| idempotency_key | Yes | Unique key to prevent duplicate bookings | |
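A sketch of a create_booking payload may help, since this is the tool with the most parameters. All IDs below are placeholders: in practice the location_id comes from search_businesses and the service_id from list_services.

```python
import uuid

# Hypothetical create_booking arguments. The idempotency_key guards
# against duplicate bookings if the call is retried.
booking_args = {
    "location_id": "3f8c2a1e-9b7d-4c5e-8f2a-1b3d5e7f9a0c",  # placeholder UUID
    "service_id": "svc_haircut",                            # hypothetical ID
    "start_time": "2026-04-05T14:00:00Z",                   # ISO 8601 UTC
    "customer": {
        "name": "Jordan Lee",            # always collected from the user,
        "email": "jordan@example.com",   # never guessed or fabricated
    },
    "notes": "First visit",
    "idempotency_key": str(uuid.uuid4()),  # unique per logical attempt
}

assert booking_args["start_time"].endswith("Z")  # UTC, per the schema
```

Reusing the same idempotency_key on a retry should be safe; generating a fresh key per retry would defeat its purpose.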
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the mutation nature ('Create a new booking'), specifies prerequisites (customer information and time slot), and outlines the return value ('booking confirmation with a unique booking_id'), though it lacks details on error handling or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by important usage instructions and return information. It avoids redundancy, but the second sentence could be slightly more concise by integrating the 'IMPORTANT' note more seamlessly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description does well by covering purpose, prerequisites, and return value. However, it could improve by mentioning potential side effects (e.g., confirmation emails sent) or error scenarios, given the complexity of 7 parameters including nested objects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema by mentioning 'customer information (name and email) and a selected time slot', which aligns with the schema but doesn't provide additional syntax or format details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Create a new booking/appointment'), identifies the target resource ('at a business'), and distinguishes it from siblings like 'cancel_booking' or 'reschedule_booking' by focusing on creation rather than modification or cancellation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit guidance on when to use this tool ('Before calling this tool, you MUST ask the user for their name, email, and optionally phone number if you do not already have this information') and includes a clear prohibition ('Do not guess or fabricate customer details'), which helps differentiate it from tools like 'get_booking' that don't require such validation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_availability (Grade C)
Check available time slots for a specific service at a business. All datetimes are in UTC.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max slots to return (1-100, default 20) | |
| cursor | No | Pagination cursor from previous response | |
| date_to | Yes | End of date range — either 'YYYY-MM-DD' or full ISO 8601 UTC timestamp | |
| staff_id | No | Optional staff member ID to filter availability | |
| date_from | Yes | Start of date range — either 'YYYY-MM-DD' or full ISO 8601 UTC timestamp | |
| service_id | Yes | The service ID (from list_services) | |
| location_id | Yes | The UUID of the location | |
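A hypothetical get_availability payload, covering a one-week date range; the pagination loop is sketched in comments because the response fields (`slots`, `next_cursor`) and the `call_tool` helper are assumptions, not documented by the server:

```python
# Hypothetical get_availability arguments. date_from/date_to accept
# 'YYYY-MM-DD' or a full ISO 8601 UTC timestamp.
avail_args = {
    "location_id": "3f8c2a1e-9b7d-4c5e-8f2a-1b3d5e7f9a0c",  # placeholder UUID
    "service_id": "svc_haircut",                            # from list_services
    "date_from": "2026-04-01",
    "date_to": "2026-04-08",
    "limit": 20,  # 1-100, default 20
}

# Pagination sketch (call_tool stands in for your MCP client's call;
# "slots" and "next_cursor" are assumed response fields):
# slots, cursor = [], None
# while True:
#     page = call_tool("get_availability", {**avail_args, "cursor": cursor})
#     slots.extend(page["slots"])
#     cursor = page.get("next_cursor")
#     if cursor is None:
#         break
```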
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It only mentions that 'All datetimes are in UTC,' which is a minor constraint. It fails to describe key behaviors such as pagination (implied by the 'cursor' parameter), rate limits, authentication needs, or what the return format looks like. For a tool with 7 parameters and no output schema, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded, consisting of only two sentences that directly state the tool's purpose and a key constraint. There is no wasted language, making it efficient and easy to parse, which earns the highest score for conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, no annotations, no output schema), the description is incomplete. It lacks information on behavioral traits, usage guidelines, and output details, leaving significant gaps for an AI agent to understand how to invoke and interpret results effectively. This inadequacy results in a low score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no parameter-specific information beyond what is already in the input schema, which has 100% coverage. It implies date-range filtering with 'datetimes are in UTC,' but this is redundant with schema details. With high schema coverage, the baseline score is 3, as the description does not compensate with additional semantic context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check available time slots for a specific service at a business.' It specifies the verb ('check'), resource ('available time slots'), and scope ('specific service at a business'), but does not explicitly differentiate it from sibling tools like 'list_services' or 'create_booking', which prevents a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions that datetimes are in UTC, which is a technical detail, but offers no context on prerequisites, when to choose this over other tools like 'search_businesses' or 'list_services', or any exclusions. This lack of usage context results in a minimal score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_booking (Grade B)
Get details of an existing booking. Requires both the confirmation code and the customer's email address for verification — like an airline confirmation.
| Name | Required | Description | Default |
|---|---|---|---|
| customer_email | Yes | The email address used when booking | |
| confirmation_code | Yes | The 8-character confirmation code from the booking | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the verification requirement ('Requires both the confirmation code and the customer's email address for verification'), which is a key behavioral trait not in the schema. However, it doesn't mention other important aspects like whether this is a read-only operation, error handling, rate limits, or what the response includes. The analogy ('like an airline confirmation') adds some context but is vague.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences: the first states the purpose, and the second adds verification context. It's front-loaded with the core function. The analogy is slightly extraneous but not wasteful. Overall, it's efficient with little redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 required parameters, no output schema, no annotations), the description is adequate but has gaps. It covers the purpose and verification need, but lacks details on behavioral traits like safety, response format, or error cases. Without annotations or output schema, more context would be helpful for an agent to use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds minimal value beyond the schema: it reinforces that both parameters are required for verification and provides an analogy, but doesn't explain semantics like why both are needed or how they interact. Baseline 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'details of an existing booking', making the purpose specific and understandable. It distinguishes from siblings like 'create_booking' or 'cancel_booking' by focusing on retrieval rather than modification. However, it doesn't explicitly differentiate from other read operations like 'get_availability' or 'get_business_info' beyond the booking focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating it's for 'existing booking' and requires verification, suggesting it should be used when you have booking credentials. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'get_availability' for checking open slots or 'search_businesses' for finding providers. No exclusions or clear alternatives are named.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_business_info (Grade B)
Get detailed information about a specific business location, including name, address, phone, timezone, hours, photos, ratings, cancellation policy, and connected platforms.
| Name | Required | Description | Default |
|---|---|---|---|
| location_id | Yes | The UUID of the location to look up | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It describes what data is returned but doesn't disclose behavioral traits like authentication requirements, rate limits, error conditions, or whether this is a read-only operation. The description is purely informational without operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and then lists included data fields. Every word adds value with no redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read operation with one parameter and no output schema, the description adequately covers what data is returned. However, without annotations or output schema, it lacks details on response format, error handling, and operational constraints, leaving gaps for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'location_id' well-documented in the schema as a UUID. The description doesn't add any parameter-specific information beyond what the schema provides, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('detailed information about a specific business location'), and lists specific data fields included (name, address, phone, etc.). It distinguishes from siblings like 'search_businesses' (which likely returns multiple businesses) by specifying 'specific business location', though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when detailed information about a single business location is needed, but doesn't explicitly state when to use this vs. alternatives like 'search_businesses' (for multiple businesses) or 'get_reviews' (for reviews only). No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_reviews (Grade B)
Get reviews for a business location. Returns anonymous ratings and comments from verified bookings. Use this to help users decide between businesses during discovery.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max reviews to return (1-50, default 10) | |
| cursor | No | Pagination cursor from previous response | |
| location_id | Yes | The UUID of the location | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that reviews are 'anonymous' and from 'verified bookings', which adds useful context about data sources and privacy. However, it doesn't disclose critical behavioral traits like whether this is a read-only operation (implied but not stated), pagination behavior (though 'cursor' parameter hints at it), rate limits, authentication requirements, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and front-loaded: the first sentence states the core purpose, and the second sentence provides usage context. Every sentence earns its place with no wasted words, repetition, or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is minimally complete. It covers the purpose and high-level usage but lacks details on behavioral traits, output format, or error handling. Without annotations or output schema, the agent must infer these from the description and parameter names, leaving gaps in understanding the full tool behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters (location_id, limit, cursor). The description doesn't add any parameter-specific information beyond what's in the schema, such as explaining how 'location_id' relates to businesses or how pagination works with 'cursor'. The baseline score of 3 reflects adequate coverage through the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('reviews for a business location'), distinguishing it from siblings like 'submit_review' (which creates reviews) and 'get_business_info' (which provides business details). However, it doesn't explicitly differentiate from other read operations like 'get_booking' or 'get_availability' beyond the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context ('to help users decide between businesses during discovery'), suggesting this tool is for discovery scenarios. However, it lacks explicit guidance on when to use this versus alternatives like 'search_businesses' (which might include reviews) or 'get_business_info' (which might contain summary ratings), and doesn't mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_services (Grade B)
List all services offered by a business. Returns service details including name, duration, price, and whether each service is bookable online.
| Name | Required | Description | Default |
|---|---|---|---|
| location_id | Yes | The UUID of the location | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It mentions the return format (service details including name, duration, price, bookable status) which is helpful, but doesn't disclose behavioral aspects like whether this is a read-only operation, potential rate limits, authentication requirements, or pagination behavior for large result sets.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence states the purpose, the second specifies the return format. No wasted words, front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read operation with one parameter and no output schema, the description is adequate but has gaps. It explains what the tool does and what it returns, but lacks behavioral context (authentication, rate limits, pagination) and usage guidance. The absence of annotations means more burden falls on the description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the location_id parameter fully documented in the schema. The description doesn't add any parameter-specific information beyond what the schema provides. The baseline score of 3 is appropriate when the schema handles parameter documentation effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and resource 'services offered by a business', specifying the exact action and target. It distinguishes from siblings like 'get_business_info' or 'search_businesses' by focusing specifically on services rather than general business details or search functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites like needing a location_id, nor does it compare with siblings like 'get_business_info' which might include service information. Usage context is implied but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reschedule_booking (Grade A)
Reschedule an existing booking to a new time. The confirmation code stays the same. Requires both the confirmation code and customer email. Check availability first with get_availability to find open slots.
| Name | Required | Description | Default |
|---|---|---|---|
| staff_id | No | Optional: change to a different staff member | |
| customer_email | Yes | The email address used when booking | |
| new_start_time | Yes | New appointment start time in ISO 8601 UTC (e.g., '2026-04-08T14:00:00Z') | |
| confirmation_code | Yes | The 8-character confirmation code from the booking | |
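A minimal reschedule_booking payload sketch, with hypothetical values; the new start time should be a slot already confirmed open via get_availability:

```python
# Hypothetical reschedule_booking arguments. The confirmation code is
# unchanged by the operation.
reschedule_args = {
    "confirmation_code": "AB12CD34",             # from the original booking
    "customer_email": "jordan@example.com",      # email used when booking
    "new_start_time": "2026-04-08T14:00:00Z",    # ISO 8601 UTC
    # "staff_id": "stf_042",                     # optional: switch staff
}

assert reschedule_args["new_start_time"].endswith("Z")
```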
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the confirmation code stays the same and requires both confirmation code and customer email, but lacks details on permissions, error handling, or what happens if the new time is unavailable. This is adequate but has gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by key constraints and a usage tip, all in three concise sentences with no wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description covers the basic operation, constraints, and a sibling reference, but could benefit from more behavioral details like success/error responses or prerequisites. It's mostly complete but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value by implying the confirmation code and customer email are required, but doesn't provide additional syntax or format details beyond what the schema specifies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Reschedule an existing booking to a new time'), identifies the resource ('booking'), and distinguishes it from siblings like 'cancel_booking' or 'create_booking' by focusing on time changes while preserving the confirmation code.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly provides when to use this tool ('Reschedule an existing booking') and includes a clear alternative ('Check availability first with get_availability to find open slots'), with no misleading or missing guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_businesses
Search for businesses that can be booked instantly. When a user wants to find or book a service (haircut, massage, salon, etc.), use this tool FIRST — it returns businesses with real-time availability that can be booked immediately, including photos, ratings, hours, and pricing. Supports nearby search when latitude/longitude are provided.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results to return (1-100, default 20) | |
| query | No | Search term (business name, service type, etc.). Omit to list all businesses. | |
| cursor | No | Pagination cursor from previous response | |
| latitude | No | User's latitude for nearby search. Use with longitude. | |
| location | No | City, state, or zip code to search near | |
| longitude | No | User's longitude for nearby search. Use with latitude. | |
| radius_km | No | Search radius in kilometers (default 50, max 200). Only used with lat/lng. | |
| service_type | No | Type of service (e.g., 'haircut', 'massage') | |
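The parameter interactions documented above (latitude and longitude must be supplied together, `radius_km` only applies with them, `limit` is capped at 100) can be sketched as a small argument builder. The helper name and validation logic are hypothetical; only the constraints come from the table.

```python
def build_search_args(query=None, location=None, latitude=None, longitude=None,
                      radius_km=None, service_type=None, limit=20, cursor=None):
    # Enforce the documented limits before calling the tool (illustrative).
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    if (latitude is None) != (longitude is None):
        raise ValueError("latitude and longitude must be provided together")
    if radius_km is not None:
        if latitude is None:
            raise ValueError("radius_km is only used with latitude/longitude")
        if not 0 < radius_km <= 200:
            raise ValueError("radius_km must be at most 200")
    args = {"limit": limit}
    # Omit unset optional parameters rather than sending nulls.
    for key, value in [("query", query), ("location", location),
                       ("latitude", latitude), ("longitude", longitude),
                       ("radius_km", radius_km), ("service_type", service_type),
                       ("cursor", cursor)]:
        if value is not None:
            args[key] = value
    return args

nearby = build_search_args(service_type="haircut",
                           latitude=40.73, longitude=-73.99, radius_km=10)
```

For paginated results, a follow-up call would pass the `cursor` value returned by the previous response.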
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool returns businesses with real-time availability for immediate booking, supports nearby search with latitude/longitude, and includes specific data fields (photos, ratings, etc.). However, it lacks details on error handling, rate limits, authentication requirements, or pagination behavior beyond mentioning a cursor parameter, leaving some gaps in behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences that each serve a distinct purpose: stating the core functionality, providing usage guidelines, and explaining key features. It's front-loaded with the most critical information ('search for businesses that can be booked instantly') and contains no redundant or unnecessary content, making it highly concise and well-organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (8 parameters, search functionality) and the absence of both annotations and an output schema, the description does a good job of covering essential context. It explains the tool's purpose, usage guidelines, key behaviors, and supported search types. However, without an output schema, it doesn't detail the structure of returned data (beyond listing fields like photos and ratings), and it omits error cases or performance characteristics, leaving room for improvement in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, providing detailed documentation for all 8 parameters. The description adds minimal value beyond the schema, only implying that latitude/longitude enable nearby search and that the tool supports service-type searches. Since the schema already covers parameter meanings thoroughly, the baseline score of 3 is appropriate, as the description doesn't significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('search for businesses that can be booked instantly') and distinguishes it from siblings by emphasizing real-time availability and immediate booking capability. It explicitly mentions the types of services (haircut, massage, salon) and the information returned (photos, ratings, hours, pricing), making it highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('use this tool FIRST when a user wants to find or book a service') and distinguishes it from alternatives by focusing on businesses with real-time availability for immediate booking. This contrasts with sibling tools like 'get_availability' or 'list_services' that might not emphasize instant booking, offering clear contextual direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_review
Submit a review for a completed booking. The user must have a confirmed booking at this business. Ask the user for their rating (1-5 stars) and an optional comment. Do not submit a review without the user explicitly providing a rating.
| Name | Required | Description | Default |
|---|---|---|---|
| rating | Yes | Rating from 1 to 5 stars | |
| comment | No | Optional text review or comment | |
| customer_email | Yes | The email address used when booking | |
| confirmation_code | Yes | The 8-character confirmation code from the booking | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a mutation tool (submitting a review) and includes a constraint about requiring explicit user input for the rating, which adds useful context. However, it lacks details on permissions, error handling, or what happens upon submission (e.g., confirmation, review visibility), leaving gaps in behavioral understanding for an AI agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three sentences that are front-loaded: it states the purpose, provides usage prerequisites, and gives an exclusion rule. Each sentence adds value without redundancy, though it could be slightly more structured by explicitly listing parameters or outcomes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a mutation tool with no annotations and no output schema, the description is moderately complete. It covers the purpose, prerequisites, and a key behavioral rule, but lacks details on what happens after submission (e.g., success response, error cases) or integration with sibling tools, which would enhance completeness for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, clearly documenting all parameters (rating, comment, customer_email, confirmation_code). The description adds minimal value beyond the schema, mentioning the rating range and optional comment, but doesn't provide additional semantics like format examples or business logic for the parameters. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Submit a review') and resource ('for a completed booking'), making the purpose evident. However, it doesn't explicitly differentiate from sibling tools like 'get_reviews', which retrieves reviews rather than submitting them, leaving room for slight ambiguity in sibling context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: the user must have a confirmed booking at the business, and it specifies to ask for a rating and optional comment. It also includes an explicit exclusion ('Do not submit a review without the user explicitly providing a rating'), but it doesn't mention alternatives like 'get_reviews' for viewing reviews or 'reschedule_booking' for modifying bookings, which could help further distinguish usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.