
Teres — Booking for AI Agents

Ownership verified

Server Details

Teres is the discovery and booking layer for AI agents. Search for real service businesses (barbershops, salons, spas, and more), check live availability, and create bookings directly in their existing systems — no API key required. Businesses connect their Square account, and every MCP-compatible agent can find and book with them instantly. One of the first booking servers in the MCP registry.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

10 tools
cancel_booking (Grade: A)

Cancel an existing booking. Requires both the confirmation code and customer email. Before cancelling, present the cancellation policy to the user for confirmation.

Parameters (JSON Schema)
Name | Required | Description
customer_email | Yes | The email address used when booking
confirmation_code | Yes | The 8-character confirmation code from the booking
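As a concrete sketch, the arguments payload for this tool can be assembled and pre-checked client-side. The helper below is illustrative and not part of any published Teres API; only the two required field names and the 8-character code length come from the parameter table above.

```python
# Illustrative builder for cancel_booking arguments (hypothetical helper,
# not part of the Teres API). Field names match the parameter table.
def build_cancel_args(confirmation_code: str, customer_email: str) -> dict:
    # The confirmation code is documented as exactly 8 characters.
    if len(confirmation_code) != 8:
        raise ValueError("confirmation code must be exactly 8 characters")
    return {
        "confirmation_code": confirmation_code,
        "customer_email": customer_email,
    }

args = build_cancel_args("A1B2C3D4", "pat@example.com")
```

Validating the code length before the call lets an agent surface a typo to the user instead of spending a round trip on a doomed request.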
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a destructive operation ('Cancel') and mentions a required confirmation step, adding useful context. However, it lacks details on permissions, rate limits, or what happens post-cancellation (e.g., refunds, notifications), leaving gaps for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core action ('Cancel an existing booking') and efficiently adds two additional sentences for requirements and guidelines. Every sentence earns its place by providing essential information without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive tool with no annotations and no output schema, the description is adequate but incomplete. It covers the action and a key guideline, but lacks details on behavioral traits like error handling or post-operation effects. Given the complexity, it meets minimum viability but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters clearly documented in the schema (e.g., 'The email address used when booking'). The description adds minimal value beyond this, only implying that both parameters are required without explaining why. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Cancel') and resource ('an existing booking'), making the purpose immediately understandable. It distinguishes from siblings like 'reschedule_booking' by focusing on cancellation rather than modification. However, it doesn't explicitly differentiate its outcome from other mutating operations like 'create_booking', keeping it from a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Before cancelling, present the cancellation policy to the user for confirmation'), which implies a prerequisite step. It doesn't explicitly name alternatives like 'reschedule_booking' or state when not to use it, but the guidance is practical and actionable for an agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_booking (Grade: A)

Create a new booking/appointment at a business. Requires customer information (name and email) and a selected time slot. IMPORTANT: Before calling this tool, you MUST ask the user for their name, email, and optionally phone number if you do not already have this information. Do not guess or fabricate customer details. Returns a booking confirmation with a unique booking_id.

Parameters (JSON Schema)
Name | Required | Description
notes | No | Optional booking notes
customer | Yes | Customer contact information
staff_id | No | Optional preferred staff member ID
service_id | Yes | The service ID (from list_services)
start_time | Yes | Appointment start time in ISO 8601 UTC (e.g., '2026-04-05T14:00:00Z')
location_id | Yes | The UUID of the location to book with
idempotency_key | Yes | Unique key to prevent duplicate bookings
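A sketch of assembling the arguments, assuming a dict-shaped customer object with name and email keys (the real customer schema is not shown on this page, so that shape is an assumption). Generating a fresh idempotency key per logical booking means a retried call cannot create a duplicate appointment.

```python
import uuid
from datetime import datetime, timezone

# Hypothetical create_booking payload builder; the IDs and the customer
# shape ({"name", "email"}) are assumptions, not a published schema.
def build_booking_args(service_id, location_id, start, customer):
    return {
        "service_id": service_id,
        "location_id": location_id,
        # ISO 8601 UTC with a trailing 'Z', per the parameter table
        "start_time": start.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "customer": customer,
        # one key per logical booking lets safe retries avoid duplicates
        "idempotency_key": str(uuid.uuid4()),
    }

args = build_booking_args(
    "svc_123",
    "2f1e9c1a-0000-4000-8000-000000000000",
    datetime(2026, 4, 5, 14, 0, tzinfo=timezone.utc),
    {"name": "Pat Doe", "email": "pat@example.com"},
)
print(args["start_time"])  # 2026-04-05T14:00:00Z
```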
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's a write operation (implied by 'Create'), requires customer information, has important pre-call requirements about gathering user data, and describes the return value ('Returns a booking confirmation with a unique booking_id'). It doesn't mention error conditions or rate limits, but covers the essential behavior adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with three sentences that each earn their place: first states purpose, second gives critical usage guidelines, third describes return value. It's front-loaded with the core function. Could be slightly more concise by combining some elements, but overall efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a creation tool with no annotations and no output schema, the description does well by covering purpose, prerequisites, and return value. It doesn't explain potential errors or the idempotency_key parameter's significance, but given the 100% schema coverage and clear behavioral guidance, it's mostly complete. The main gap is lack of output schema explanation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds some context by mentioning 'customer information (name and email) and a selected time slot' which maps to the 'customer' and 'start_time' parameters, but doesn't provide additional semantic meaning beyond what the schema already documents for all 7 parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Create a new booking/appointment'), identifies the resource ('at a business'), and distinguishes from siblings like 'cancel_booking' or 'reschedule_booking' by focusing on creation. It provides both the core function and the required inputs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Create a new booking/appointment') and provides strong guidance on prerequisites: 'Before calling this tool, you MUST ask the user for their name, email, and optionally phone number if you do not already have this information. Do not guess or fabricate customer details.' This clearly differentiates it from read-only siblings like 'get_booking' or 'get_availability'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_availability (Grade: C)

Check available time slots for a specific service at a business. All datetimes are in UTC.

Parameters (JSON Schema)
Name | Required | Description
limit | No | Max slots to return (1-100, default 20)
cursor | No | Pagination cursor from previous response
date_to | Yes | End of date range — either 'YYYY-MM-DD' or full ISO 8601 UTC timestamp
staff_id | No | Optional staff member ID to filter availability
date_from | Yes | Start of date range — either 'YYYY-MM-DD' or full ISO 8601 UTC timestamp
service_id | Yes | The service ID (from list_services)
location_id | Yes | The UUID of the location
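Since the tool exposes a cursor parameter, collecting every slot in a date range is a standard cursor-pagination loop. The call_tool function below is a stub standing in for an MCP client's tool invocation, and the response shape (slots plus cursor) is an assumption, since no output schema is published.

```python
# Simulated two-page response, keyed by cursor; a real MCP client's
# tool call would replace call_tool. Shapes here are assumptions.
_pages = {
    None: {"slots": ["2026-04-05T14:00:00Z"], "cursor": "page2"},
    "page2": {"slots": ["2026-04-05T15:00:00Z"], "cursor": None},
}

def call_tool(name, args):
    return _pages[args.get("cursor")]

def all_slots(service_id, location_id, date_from, date_to):
    args = {"service_id": service_id, "location_id": location_id,
            "date_from": date_from, "date_to": date_to, "limit": 20}
    slots, cursor = [], None
    while True:
        if cursor:
            args["cursor"] = cursor
        page = call_tool("get_availability", args)
        slots.extend(page["slots"])
        cursor = page.get("cursor")
        if not cursor:
            return slots

print(all_slots("svc_1", "loc_1", "2026-04-05", "2026-04-06"))
# ['2026-04-05T14:00:00Z', '2026-04-05T15:00:00Z']
```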
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It only mentions that 'All datetimes are in UTC,' which adds some context about timezone handling. However, it doesn't describe important behaviors like whether this is a read-only operation, what the response format looks like, pagination behavior (though cursor parameter hints at it), rate limits, authentication requirements, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with just two sentences that each earn their place. The first sentence states the core purpose, and the second provides essential timezone context. There's zero wasted language, and it's appropriately front-loaded with the main functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 7 parameters, no annotations, and no output schema, the description is insufficiently complete. It doesn't explain what the tool returns (availability slots format), doesn't mention pagination behavior despite having a cursor parameter, and provides minimal guidance on usage context. The description should do more to compensate for the lack of structured metadata about this read operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds minimal parameter semantics beyond what the schema provides. It mentions that datetimes are in UTC, which clarifies the timezone context for date parameters, but doesn't explain relationships between parameters or provide additional context. With 100% schema description coverage, the baseline is 3, and the description doesn't significantly enhance understanding beyond the schema's thorough documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check available time slots for a specific service at a business.' It specifies the verb ('check'), resource ('available time slots'), and scope ('specific service at a business'), but doesn't explicitly differentiate it from sibling tools like 'list_services' or 'search_businesses' that might also provide availability-related information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'list_services' (which might provide service IDs needed as input) or 'search_businesses' (which might provide location IDs). There's no context about prerequisites, typical workflows, or when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_booking (Grade: A)

Get details of an existing booking. Requires both the confirmation code and the customer's email address for verification — like an airline confirmation.

Parameters (JSON Schema)
Name | Required | Description
customer_email | Yes | The email address used when booking
confirmation_code | Yes | The 8-character confirmation code from the booking
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing the verification requirement (both confirmation code and email needed) and providing a helpful analogy ('like an airline confirmation'). It doesn't mention rate limits, authentication needs beyond verification, or what happens if verification fails, but provides useful behavioral context for a read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place: the first states the purpose, the second explains the verification requirement with a helpful analogy. It's front-loaded with the core purpose and wastes no words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read operation with 2 parameters and no output schema, the description provides adequate context about purpose and verification requirements. However, it doesn't describe what details are returned, potential error conditions, or how the verification failure would manifest, leaving some gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents both parameters thoroughly. The description adds minimal value beyond the schema by reinforcing that both parameters are required for verification, but doesn't provide additional semantic context about parameter usage or constraints beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get details') and resource ('existing booking'), distinguishing it from siblings like create_booking, cancel_booking, and reschedule_booking. It uses precise language that directly communicates the tool's function without ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool by specifying it's for retrieving details of existing bookings and mentioning the verification requirement. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools for different scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_business_info (Grade: B)

Get detailed information about a specific business location, including name, address, phone, timezone, hours, photos, ratings, cancellation policy, and connected platforms.

Parameters (JSON Schema)
Name | Required | Description
location_id | Yes | The UUID of the location to look up
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes a read-only operation ('Get') but doesn't disclose behavioral traits like authentication needs, rate limits, error conditions, or whether the data is cached. For a tool with no annotations, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the purpose and efficiently lists the data fields. Every word earns its place, with no redundancy or wasted text, making it highly concise and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given one parameter with full schema coverage and no output schema, the description adequately covers the tool's purpose and data scope. However, with no annotations and no output schema, it lacks details on behavioral aspects and return format, which are important for a read operation. It's minimally viable but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'location_id' fully documented in the schema as a UUID. The description adds no additional parameter semantics beyond implying it's for a 'specific business location,' which aligns with the schema. Baseline 3 is appropriate since the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('detailed information about a specific business location'), with a comprehensive list of data fields included. It distinguishes this from siblings like search_businesses (which likely returns multiple businesses) and get_reviews (which focuses on reviews only), though not explicitly named.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying 'a specific business location' and the required location_id parameter, suggesting it's for retrieving details of a known business. However, it doesn't explicitly state when to use this versus alternatives like search_businesses or get_reviews, nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_reviews (Grade: B)

Get reviews for a business location. Returns anonymous ratings and comments from verified bookings. Use this to help users decide between businesses during discovery.

Parameters (JSON Schema)
Name | Required | Description
limit | No | Max reviews to return (1-50, default 10)
cursor | No | Pagination cursor from previous response
location_id | Yes | The UUID of the location
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions the return content (anonymous ratings/comments from verified bookings) and hints at pagination via 'cursor' in the schema, but doesn't disclose behavioral traits like rate limits, authentication needs, error conditions, or whether it's read-only (implied by 'get' but not explicit). For a tool with no annotations, this leaves significant gaps in understanding its operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences: the first states the purpose and return content, the second provides usage context. It's front-loaded with core functionality and avoids redundancy. However, the second sentence could be slightly more specific to improve efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is moderately complete for a read operation. It covers the purpose and high-level return content but lacks details on behavioral traits (e.g., safety, errors) and output structure. For a tool with 3 parameters and no structured output documentation, this leaves room for improvement in guiding the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents parameters (location_id, limit, cursor). The description adds no parameter-specific semantics beyond what's in the schema—it doesn't explain how location_id relates to businesses or how reviews are filtered/ordered. Baseline 3 is appropriate as the schema does the heavy lifting, but the description doesn't compensate with additional context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get reviews for a business location' specifies the verb (get) and resource (reviews), and 'Returns anonymous ratings and comments from verified bookings' adds detail about the return content. However, it doesn't explicitly differentiate from sibling tools like 'get_business_info' or 'submit_review' beyond mentioning it's for 'discovery' purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context: 'Use this to help users decide between businesses during discovery' implies it's for research/decision-making scenarios. However, it doesn't specify when to use this versus alternatives like 'get_business_info' or 'search_businesses', nor does it mention prerequisites or exclusions (e.g., requires a location_id).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_services (Grade: A)

List all services offered by a business. Returns service details including name, duration, price, and whether each service is bookable online.

Parameters (JSON Schema)
Name | Required | Description
location_id | Yes | The UUID of the location
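No output schema is published, so the response shape below is an assumption; it illustrates the documented fields (name, duration, price, online bookability) and why the bookable flag matters: only online-bookable services are worth passing on to get_availability.

```python
# Hypothetical shape of a list_services response. The field names
# (id, duration_minutes, bookable_online, ...) are assumptions based
# on the description, not a published schema.
response = {
    "services": [
        {"id": "svc_cut", "name": "Haircut", "duration_minutes": 30,
         "price": "35.00", "bookable_online": True},
        {"id": "svc_color", "name": "Color", "duration_minutes": 90,
         "price": "120.00", "bookable_online": False},
    ]
}

# Only online-bookable services can feed their id into get_availability.
bookable = [s["id"] for s in response["services"] if s["bookable_online"]]
print(bookable)  # ['svc_cut']
```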
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the return format (service details including name, duration, price, bookable status) which is helpful, but doesn't mention behavioral aspects like pagination, rate limits, authentication requirements, or error conditions. The description doesn't contradict any annotations since none exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with zero waste: the first states the purpose and required parameter, the second specifies the return format. Every word adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read operation with one parameter and no output schema, the description provides adequate purpose and return format. However, with no annotations and no output schema, it should ideally mention more behavioral context (like whether this requires authentication or has pagination). The description is complete enough for basic use but lacks depth for robust agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single 'location_id' parameter as a UUID. The description doesn't add any parameter-specific information beyond what's in the schema, maintaining the baseline score for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'list' and resource 'services offered by a business', specifying the scope as 'all services'. It distinguishes from siblings like 'search_businesses' (which finds businesses) and 'get_business_info' (which retrieves business metadata rather than service listings).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing service details, but doesn't explicitly state when to use this tool versus alternatives like 'get_availability' (which might show service slots) or 'search_businesses' (which finds businesses rather than listing their services). No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reschedule_booking (Grade: A)

Reschedule an existing booking to a new time. The confirmation code stays the same. Requires both the confirmation code and customer email. Check availability first with get_availability to find open slots.

Parameters (JSON Schema)
Name | Required | Description
staff_id | No | Optional: change to a different staff member
customer_email | Yes | The email address used when booking
new_start_time | Yes | New appointment start time in ISO 8601 UTC (e.g., '2026-04-08T14:00:00Z')
confirmation_code | Yes | The 8-character confirmation code from the booking
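The strict 'Z'-suffixed ISO 8601 UTC format shown for new_start_time can be checked before the call. The helper below is hypothetical, not part of the API; only the field names and timestamp format come from the parameter table.

```python
from datetime import datetime

# Hypothetical reschedule_booking argument builder. strptime raises
# ValueError unless new_start_time matches e.g. '2026-04-08T14:00:00Z'.
def build_reschedule_args(code, email, new_start_time, staff_id=None):
    datetime.strptime(new_start_time, "%Y-%m-%dT%H:%M:%SZ")
    args = {"confirmation_code": code, "customer_email": email,
            "new_start_time": new_start_time}
    if staff_id:
        args["staff_id"] = staff_id
    return args

args = build_reschedule_args("A1B2C3D4", "pat@example.com",
                             "2026-04-08T14:00:00Z")
```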
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool is a mutation ('reschedule'), requires specific inputs (confirmation code and email), and suggests a prerequisite action (check availability). However, it lacks details on permissions, error conditions, rate limits, or what happens if the new time is unavailable, leaving behavioral gaps for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by key constraints and usage guidance in subsequent sentences. Every sentence adds value (e.g., confirmation code behavior, required inputs, alternative tool), with zero waste or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with no annotations and no output schema, the description is adequate but incomplete. It covers purpose, usage, and some constraints, but lacks details on behavioral aspects like error handling, response format, or side effects, which are important given the tool's complexity and lack of structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema by implying that confirmation code and customer email are required together, but it doesn't provide additional syntax or format details. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('reschedule an existing booking to a new time') and resource ('booking'), distinguishing it from siblings like cancel_booking or create_booking. It mentions that the confirmation code stays the same, adding specificity about what changes versus what remains constant.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides when to use this tool ('reschedule an existing booking') and includes a clear alternative ('check availability first with get_availability to find open slots'), which helps differentiate it from other booking-related tools. It also specifies prerequisites ('requires both the confirmation code and customer email').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
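The check-then-mutate workflow recommended above can be sketched as two MCP `tools/call` requests. This is a minimal illustration, not the server's documented API: the JSON-RPC envelope follows the standard MCP `tools/call` shape, but the `get_availability` arguments and the new-time field name are hypothetical, since only the confirmation code and customer email are confirmed inputs here.

```python
import json

def tool_call(name, arguments, request_id):
    """Build a JSON-RPC 2.0 `tools/call` request as used by MCP."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Step 1: find open slots before proposing a new time.
availability_req = tool_call(
    "get_availability",
    {"business_id": "biz_123"},  # hypothetical argument name
    request_id=1,
)

# Step 2: reschedule using the original confirmation code and email.
# The confirmation code stays the same after rescheduling.
reschedule_req = tool_call(
    "reschedule_booking",
    {
        "confirmation_code": "ABCD1234",            # 8-character code from the booking
        "customer_email": "pat@example.com",
        "new_start_time": "2025-07-01T15:00:00Z",   # hypothetical field name
    },
    request_id=2,
)

print(json.dumps(reschedule_req, indent=2))
```

Issuing the availability call first lets the agent confirm the proposed slot is open before mutating the booking.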

search_businesses

Search for businesses that can be booked instantly. When a user wants to find or book a service (haircut, massage, salon, etc.), use this tool FIRST — it returns businesses with real-time availability that can be booked immediately, including photos, ratings, hours, and pricing. Supports nearby search when latitude/longitude are provided.

Parameters (JSON Schema)

Name | Required | Description
limit | No | Max results to return (1-100, default 20)
query | No | Search term (business name, service type, etc.). Omit to list all businesses.
cursor | No | Pagination cursor from previous response
latitude | No | User's latitude for nearby search. Use with longitude.
location | No | City, state, or zip code to search near
longitude | No | User's longitude for nearby search. Use with latitude.
radius_km | No | Search radius in kilometers (default 50, max 200). Only used with lat/lng.
service_type | No | Type of service (e.g., 'haircut', 'massage')
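As a rough sketch of how an agent might drive this tool, the following builds `tools/call` requests for a nearby search and a cursor-based follow-up page. The JSON-RPC envelope follows the MCP convention; the cursor value is a placeholder for whatever opaque string a previous response would return.

```python
import json

def search_request(request_id, **arguments):
    """Build a search_businesses call; all of its parameters are optional."""
    # Drop unset arguments rather than sending nulls.
    args = {k: v for k, v in arguments.items() if v is not None}
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "search_businesses", "arguments": args},
    }

# Nearby search: latitude and longitude must be supplied together,
# and radius_km only applies when both are present.
first_page = search_request(
    1,
    query="haircut",
    latitude=40.7128,
    longitude=-74.0060,
    radius_km=10,
    limit=20,
)

# Follow-up page: pass the cursor returned by the previous response.
next_page = search_request(2, query="haircut", cursor="opaque-cursor-from-response")

print(json.dumps(first_page["params"]["arguments"], indent=2))
```

Keeping the same query while paginating and treating the cursor as opaque matches the usual pagination contract, though the server's exact behavior isn't documented here.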
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns businesses with photos, ratings, hours, and pricing, and supports nearby search with lat/lng. However, it lacks details on permissions, rate limits, error handling, or pagination behavior beyond cursor mention, leaving gaps for a search tool with 8 parameters.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states purpose and usage guidelines, the second adds key features and parameter context. Every sentence adds value without redundancy, making it front-loaded and appropriately sized for the tool's complexity.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does well on purpose and guidelines but lacks behavioral details like response format, error cases, or authentication needs. For a search tool with 8 parameters and several sibling tools, it's adequate but incomplete, especially without an output schema to clarify return values.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description adds minimal value by mentioning latitude/longitude for nearby search and implying query usage, but doesn't provide additional syntax, format, or interaction details beyond what's in the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for businesses that can be booked instantly, specifying the action (search) and resource (businesses). It distinguishes from siblings by emphasizing real-time availability and immediate booking capability, unlike tools like get_business_info or get_availability which focus on specific details rather than search.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states 'use this tool FIRST' when a user wants to find or book a service, providing clear when-to-use guidance. It differentiates from alternatives by focusing on businesses with real-time availability, unlike get_business_info which retrieves details or list_services which lists services without booking context.

submit_review

Submit a review for a completed booking. The user must have a confirmed booking at this business. Ask the user for their rating (1-5 stars) and an optional comment. Do not submit a review without the user explicitly providing a rating.

Parameters (JSON Schema)

Name | Required | Description
rating | Yes | Rating from 1 to 5 stars
comment | No | Optional text review or comment
customer_email | Yes | The email address used when booking
confirmation_code | Yes | The 8-character confirmation code from the booking
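A hedged sketch of constructing a submit_review call, enforcing the 1-5 rating rule client-side before the request is built. The JSON-RPC envelope assumes MCP's standard `tools/call` shape; the confirmation code, email, and review text are illustrative values.

```python
import json

def submit_review_request(confirmation_code, customer_email, rating,
                          comment=None, request_id=1):
    # The tool description requires an explicit user-provided rating of 1-5.
    if not isinstance(rating, int) or not 1 <= rating <= 5:
        raise ValueError("rating must be an integer from 1 to 5")
    arguments = {
        "confirmation_code": confirmation_code,
        "customer_email": customer_email,
        "rating": rating,
    }
    if comment:
        arguments["comment"] = comment  # optional free-text review
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "submit_review", "arguments": arguments},
    }

req = submit_review_request("ABCD1234", "pat@example.com", rating=5,
                            comment="Great cut")
print(json.dumps(req, indent=2))
```

Validating the rating before the call mirrors the description's instruction never to submit a review the user didn't explicitly provide.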
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that this is a write operation ('submit'), states a prerequisite (a confirmed booking at the business), and adds a safety instruction: do not submit without an explicit user-provided rating. However, it lacks details on permissions, rate limits, idempotency, and what happens on success or failure.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: three sentences that efficiently cover purpose, prerequisites, and usage instructions. Every sentence adds value—no redundancy or fluff. It's appropriately sized for the tool's complexity.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides basic purpose and usage context but lacks behavioral details (e.g., error handling, response format). It's adequate for a simple submission tool but doesn't fully compensate for the missing structured data, leaving gaps in transparency.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds no parameter-specific semantics beyond what's in the schema (e.g., it mentions 'rating' and 'optional comment' but the schema already describes these). Baseline 3 is appropriate when the schema does the heavy lifting.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Submit a review for a completed booking.' It specifies the action (submit) and resource (review), and is easy to distinguish from siblings like get_reviews or create_booking. The only gap is that it doesn't differentiate itself from hypothetical review-management tools (e.g., an edit_review), though no such siblings exist.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool: 'The user must have a confirmed booking at this business.' It also gives guidance on prerequisites: 'Ask the user for their rating... Do not submit a review without the user explicitly providing a rating.' However, it doesn't explicitly mention when NOT to use it or name alternatives (e.g., get_reviews for viewing).
