AdvocateMCP

Server Details

MCP layer for local businesses: discover, query, book, and transact with verified SMB AI agents.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Tool Descriptions: A

Average 4.2/5 across 10 of 10 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct resource or action: get_* tools return different specific information, reserve_slot handles booking, initiate_handoff/request_callback manage human interaction, query_business_agent queries another agent, search_businesses discovers businesses, and subscribe_to_updates handles subscriptions. No overlapping purposes.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with lowercase and underscores (e.g., get_availability, initiate_handoff, reserve_slot). No mixing of styles or unexpected conventions.

Tool Count: 5/5

10 tools is well-scoped for a business advocacy domain, covering discovery, information retrieval, booking, communication, and subscription without being too few or too many.

Completeness: 4/5

The tool surface covers core workflows: search, get details, reserve, handoff, callback, subscribe. Minor gap: no cancel_reservation or update_reservation tool, but the primary lifecycle for booking is present.

Available Tools

10 tools
get_availability: A
Read-only

Return 30-minute availability windows for a business from its hours_json. v1 is synthetic; v2 will consult availability_webhook_url when set.

Parameters (JSON Schema):
- slug (required): business slug
- window_end (optional): Unix seconds; default now + 7 days
- window_start (optional): Unix seconds; default now
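The window defaults above are simple Unix-seconds arithmetic. As a minimal sketch, assuming a standard MCP JSON-RPC `tools/call` envelope: the server applies these defaults itself when the fields are omitted, so this hypothetical helper only makes the arithmetic explicit for illustration.

```python
import json
import time

def build_get_availability_call(slug, window_start=None, window_end=None):
    """Build a tools/call request for get_availability.

    Fills in the documented defaults: window_start -> now,
    window_end -> now + 7 days (both Unix seconds).
    """
    now = int(time.time())
    args = {
        "slug": slug,
        "window_start": window_start if window_start is not None else now,
        "window_end": window_end if window_end is not None else now + 7 * 86400,
    }
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "get_availability", "arguments": args},
    }

req = build_get_availability_call("joes-pizza-austin")
print(json.dumps(req, indent=2))
```

The slug value is a placeholder taken from the example elsewhere on this page; in practice it comes from search_businesses.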
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds versioning details and data source (hours_json) but does not disclose potential errors, data limits, or response pagination, providing modest additional transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first defines core functionality, second offers version context. No redundant words, front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains what the tool does and version differences, but lacks details on return format (e.g., array of time slots), default values for window boundaries, and error handling, which are needed for full understanding without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with clear parameter descriptions for slug, window_end, and window_start. The tool description does not add new meaning beyond what the schema already provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns 30-minute availability windows for a business using its hours_json, which is specific and distinguishable from siblings like reserve_slot and get_quote.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description hints at versioning (v1 synthetic, v2 webhook) but does not explicitly state when to use this tool versus alternatives like reserve_slot or get_cancellation_policy, leaving usage context implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_cancellation_policy: A
Read-only

Returns the business's cancellation / refund / no-show policy as a verbatim string the agent can quote. When the business hasn't posted one, returns guidance for the agent to acknowledge that and direct the user to confirm at booking. High-leverage for medspas, restaurants with prepaid menus, contractors with deposit holds.

Parameters (JSON Schema):
- slug (required): business slug
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and destructiveHint=false. The description adds that the return value is a verbatim string to quote and describes the behavior when the policy is missing (returns guidance). This provides useful behavioral context beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loading the core purpose and output, followed by edge-case handling and usage context. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple nature (one parameter, no output schema), the description covers the return format, edge-case behavior, and relevant use cases. It is complete for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter (slug) with description 'business slug', achieving 100% coverage. The description does not add additional parameter information, but the schema already sufficiently documents it. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns the cancellation/refund/no-show policy as a verbatim string, specifying both the resource and the action. It distinguishes itself from siblings like get_availability or get_quote by focusing on policy retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides guidance on when to use the tool (when policy is needed) and how to handle the case when no policy is posted (acknowledge and direct to confirm at booking). It also notes high-leverage contexts like medspas, restaurants, and contractors, but does not explicitly mention when not to use or list alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_credentials: A
Read-only

Returns the business's self-reported licenses, insurance, bonding, and certifications. Use this for trust-sensitive verticals (contractors, healthcare, legal, locksmiths) when a user asks 'are they licensed?' or 'are they insured?'. The response carries explicit 'self-reported' framing so agents don't upgrade tenant claims to verified facts.

Parameters (JSON Schema):
- slug (required): business slug
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnlyHint), the description adds that the response is self-reported and warns against treating it as verified, which is critical behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences: the first states purpose, the second provides usage context, the third flags the self-reported framing. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description fully covers purpose, usage, and the nature of the data. Annotations provide the safety profile.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Only one parameter 'slug' with 100% schema coverage. Description does not add meaning beyond the schema's 'business slug'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns self-reported licenses, insurance, bonding, and certifications. It distinguishes itself from siblings like get_availability by focusing on credentials.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use (trust-sensitive verticals, user queries like 'are they licensed?'). Also provides guidance on not upgrading self-reported claims to verified facts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_quote: A
Read-only

Quote price for a service at a business. Deterministic lookup of pricing_json_v2.ranges[]; LLM fallback on miss, labelled 'estimate' with disclaimer.

Parameters (JSON Schema):
- slug (required): business slug
- params (optional): service parameters (e.g., {size:'large'})
- service (required): requested service name
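The deterministic-lookup-with-fallback behavior described above can be sketched locally. This is a simplified model, not the server's implementation: the field names inside the pricing entries (service, low, high) are assumptions, since the real pricing_json_v2 schema is not shown on this page, and the LLM fallback is stubbed out.

```python
def quote_price(pricing_ranges, service):
    """Deterministic lookup over pricing ranges; labelled estimate on miss.

    pricing_ranges stands in for pricing_json_v2.ranges[] (field names
    assumed). On a miss, the result is labelled 'estimate' and carries
    a disclaimer, mirroring the documented LLM-fallback behavior.
    """
    for entry in pricing_ranges:
        if entry["service"] == service:
            return {"kind": "quoted", "low": entry["low"], "high": entry["high"]}
    # Miss: the real server falls back to an LLM; here we only label it.
    return {
        "kind": "estimate",
        "disclaimer": "Estimated price; not confirmed by the business.",
    }

ranges = [{"service": "lawn mowing", "low": 40, "high": 60}]
print(quote_price(ranges, "lawn mowing"))   # deterministic hit
print(quote_price(ranges, "tree removal"))  # labelled estimate with disclaimer
```

The useful property for agents is that the 'kind' of the result tells them whether the number can be quoted as fact or must be framed as an estimate.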
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds behavioral details beyond annotations: deterministic lookup, LLM fallback, 'estimate' label, disclaimer. No contradiction with readOnlyHint=true.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no redundancy. Front-loaded with purpose, then fallback behavior. Every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains both outcomes (a deterministic result or a labelled estimate). It covers the fallback and disclaimer, making it complete for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so description adds minimal extra meaning. It mentions business slug and service name but doesn't expand on 'params' object beyond schema description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'Quote price' and resources 'service at a business'. Specifies deterministic lookup with LLM fallback, distinguishing it from siblings like get_availability or reserve_slot.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explains deterministic lookup first, then LLM fallback with 'estimate' labeling. Implicitly suggests when precision is needed, but doesn't explicitly exclude alternatives or compare to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

initiate_handoff: A
Destructive

Begin a handoff from the agent to either a human operator (SMS/email via lead_routing_json) or another agent (signed continuation URL).

Parameters (JSON Schema):
- mode (required)
- slug (required)
- payload (required)
- reservation_id (optional)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate destructive and open-world behavior. The description adds concrete mechanisms (SMS/email via lead_routing_json, signed continuation URL) but does not disclose side effects like whether the current conversation terminates or how the handoff is tracked. Moderate value beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, directly states purpose and key distinction. No filler words; front-loaded with the action verb 'Begin'.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive, open-world tool with no output schema, the description should clarify what happens after handoff (e.g., does the current agent session end?) and the role of 'lead_routing_json'. It covers the main function but misses operational context that an agent needs to predict consequences.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must explain parameters. It only clarifies the 'mode' enum's meaning (human/agent) but omits 'slug', 'payload', and 'reservation_id'. Agents may guess that 'slug' identifies the target or context, but this is insufficient for correct invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool initiates a handoff, distinguishing between human (via SMS/email) and agent (via continuation URL). It is specific and differentiates from siblings like 'query_business_agent' or 'request_callback', which are for direct assistance, not handoff.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a handoff is needed, listing two modes. However, it lacks explicit guidance on when not to use or prerequisites (e.g., whether a session must be active). The context is clear but not exhaustive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_business_agent: A

Query a registered business's AI advocate agent. Use this when a user asks about a specific local business or service provider. Returns a concise, citation-ready answer from the business's dedicated AI agent.

Parameters (JSON Schema):
- slug (required): The business slug identifier (e.g. 'joes-pizza-austin'). Use search_businesses first if you don't know the slug.
- query (required): The user's question about this business
- stage (optional): Buyer stage. 'browsing' (default) — exploring options. 'comparing' — weighing alternatives. 'committing' — ready to act. When omitted, the server infers from query verbs (e.g. 'book'/'reserve' → committing).
- agent_id (optional): Caller-asserted agent identifier (e.g. 'claude-desktop', 'cursor', 'gpt-agent'). Used to tune the response shape. May be overridden by the x-agent-identity header. Self-asserted only in v1 — not used for auth or rate limiting.
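The stage-inference rule documented for the stage parameter can be modeled in a few lines. A minimal sketch under stated assumptions: the docs only name 'book'/'reserve' as committing verbs, so the extra 'schedule' entry and the substring-matching approach here are guesses, not the server's actual heuristic.

```python
COMMITTING_VERBS = ("book", "reserve", "schedule")  # assumed verb list

def infer_stage(query, stage=None):
    """Infer the buyer stage when the caller omits it.

    Mirrors the documented behavior: an explicit stage wins; otherwise
    commit-intent verbs map to 'committing', and everything else falls
    back to the 'browsing' default.
    """
    if stage is not None:
        return stage
    lowered = query.lower()
    if any(verb in lowered for verb in COMMITTING_VERBS):
        return "committing"
    return "browsing"

print(infer_stage("Can I book a table for Friday?"))      # committing
print(infer_stage("What are your gluten-free options?"))  # browsing
```

Since the server performs this inference itself, a client only needs to pass stage explicitly when it has better intent information than the raw query text.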
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=false (potential side effects), but the description implies a read-only query by saying 'returns an answer' without mentioning any mutations or side effects, creating a contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise, front-loaded sentences with no superfluous content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description fails to describe the return format (e.g., citation-ready answer structure) or mention potential side effects (per readOnlyHint=false), leaving gaps for a 4-parameter tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the main description adds no parameter detail beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool queries a business's AI agent and returns a citation-ready answer, distinguishing it from siblings like search_businesses (searching for businesses) and others.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says to use when asking about a specific business, and the slug parameter description references search_businesses for unknown slugs, providing clear guidance and alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

request_callback: A
Destructive

Push a user's contact info to a business so they can call/email/text back. Use this when a question can't be answered without human contact (custom quote, after-hours scheduling, complaint, complex combo). Idempotent on idempotency_key within a 24h window — agent retries don't spam the business. Returns delivery status the agent can quote to the user.

Parameters (JSON Schema):
- slug (required): business slug
- reason (optional): Why the user wants the callback — passed verbatim to the business so they can prep
- contact (required)
- urgency (optional): How time-sensitive the request is (default: normal)
- agent_id (optional): Caller-asserted agent identifier; recorded for attribution
- idempotency_key (required): Idempotency key — the same key returns the same callback_request_id without creating a duplicate
- preferred_channel (optional): Channel the user prefers to be contacted on (default: any)
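The idempotency semantics described for this tool (same key within the window returns the same callback_request_id) can be sketched with an in-memory map. This is an illustrative model, not the server's code: the status strings and the id format are assumptions, and the 24-hour key expiry is omitted for brevity.

```python
import uuid

_callback_requests = {}  # idempotency_key -> callback_request_id

def request_callback(slug, contact, idempotency_key):
    """Create a callback request, deduplicating on idempotency_key.

    Re-sending the same key returns the existing callback_request_id
    instead of creating a second request, so agent retries can never
    spam the business. (24h key expiry omitted for brevity.)
    """
    if idempotency_key in _callback_requests:
        return {"callback_request_id": _callback_requests[idempotency_key],
                "status": "duplicate"}
    request_id = str(uuid.uuid4())
    _callback_requests[idempotency_key] = request_id
    return {"callback_request_id": request_id, "status": "delivered"}

first = request_callback("joes-pizza-austin", {"phone": "+1-512-555-0100"}, "key-1")
retry = request_callback("joes-pizza-austin", {"phone": "+1-512-555-0100"}, "key-1")
assert first["callback_request_id"] == retry["callback_request_id"]
```

The practical consequence for agents: generate one key per logical callback request and reuse it across retries, rather than minting a fresh key on each attempt.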
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: idempotency on idempotency_key within 24 hours, ensuring retries don't spam, and that it returns delivery status. Annotations indicate destructiveHint=true, and the description confirms it creates a request. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four short sentences, each earning its place: purpose, usage guidance, idempotency, and return value. Front-loaded with the core action. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description covers the return value ('delivery status') adequately. It does not detail error handling or failure modes, but given the simplicity of use and the exhaustive parameter documentation, it is sufficiently complete for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 86%, so the schema already explains most parameters well. The description reinforces the idempotency_key behavior, but does not add new meaning to other parameters. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Push a user's contact info') and the resource ('to a business so they can call/email/text back'). It distinguishes itself from siblings (e.g., 'get_quote', 'initiate_handoff') by focusing on asynchronous callback requests rather than immediate answers or other actions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use this when a question can't be answered without human contact' and provides specific examples (custom quote, after-hours scheduling, complaint, complex combo). It does not list alternatives or when not to use, but the context is clear enough for an AI agent to decide appropriately.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reserve_slot: A
Destructive

Create a 15-minute HELD reservation. Return a confirmation_token the agent posts to /a2a/confirm to flip to CONFIRMED.

Parameters (JSON Schema):
- slug (required)
- agent_id (optional)
- window_end (required)
- window_start (required)
- idempotency_key (required)
- customer_contact (required)
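The two-phase HELD-then-CONFIRMED workflow described above can be modeled locally. A sketch under stated assumptions: the record shape, token format, and expiry handling are illustrative, and confirm() here only simulates what POSTing the token to /a2a/confirm would do on the server.

```python
import secrets
import time

HOLD_SECONDS = 15 * 60  # reservations start HELD for 15 minutes

_reservations = {}  # confirmation_token -> reservation record

def reserve_slot(slug, window_start, window_end, customer_contact, idempotency_key):
    """Phase 1: create a HELD reservation and return a confirmation_token."""
    token = secrets.token_urlsafe(16)
    _reservations[token] = {
        "slug": slug,
        "state": "HELD",
        "expires_at": time.time() + HOLD_SECONDS,
    }
    return {"confirmation_token": token, "state": "HELD"}

def confirm(token):
    """Phase 2: simulate posting the token to /a2a/confirm."""
    record = _reservations.get(token)
    if record is None or time.time() > record["expires_at"]:
        return {"state": "EXPIRED"}
    record["state"] = "CONFIRMED"
    return {"state": "CONFIRMED"}

held = reserve_slot("joes-pizza-austin", 1700000000, 1700001800,
                    {"email": "user@example.com"}, "key-42")
print(confirm(held["confirmation_token"]))  # {'state': 'CONFIRMED'}
```

The key takeaway for agents is that reserve_slot alone does not complete a booking: the hold lapses unless the token is confirmed within the 15-minute window.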
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds behavioral context beyond the annotations: it explains the reservation starts as HELD and requires a separate confirmation step. Annotations already indicate destructiveHint=true (create operation), and the description aligns with and supplements this.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two succinct sentences with no unnecessary words. The first sentence states the core purpose, the second adds critical workflow context. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Considering the complexity (6 params, nesting, no output schema), the description provides a basic workflow but omits parameter details and return structure (e.g., confirmation_token format). This forces the agent to rely on parameter names, which may be insufficient for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, and the description does not explain any parameter meanings (e.g., slug, window_start, window_end, customer_contact). While it mentions '15-minute' duration, it fails to clarify the format or purpose of the parameters, leaving the agent to infer from names alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create'), the resource ('15-minute HELD reservation'), and the outcome (return a confirmation_token). It distinguishes itself from sibling tools like get_availability and get_quote by specifying a distinct creation step in the booking workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool—when creating a held reservation—and mentions the follow-up action of posting the token to /a2a/confirm. However, it does not explicitly state when not to use it or compare to alternatives, missing an opportunity for clearer guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_businesses: A
Read-only

Search for registered businesses by category, name, or location. Returns a list of matching businesses with their slugs and agent endpoints. Use this to discover which businesses are available before querying one.

Parameters (JSON Schema):
- search (required): Search term — matched against business name, description, and services
- location (optional): Location filter (city, state, or region). Narrows results geographically.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true and destructiveHint=false. The description adds that the tool returns 'slugs and agent endpoints' and that search matches against business name, description, and services. No contradictions. It does not mention pagination or result limits, but for a read-only search, the transparency is good.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences with no fluff: the first states the action, the second the output, the third usage guidance. Every sentence adds value, and the structure is front-loaded with the most important information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains the return format (list with slugs and endpoints) and usage context. It does not detail all possible returned fields or mention result limits. Slightly incomplete for a discovery tool, but adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both parameters having clear descriptions. The tool description does not add additional semantics beyond the schema for the parameters. Baseline 3 is appropriate as the schema already does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Search for registered businesses by category, name, or location' and specifies the return of 'slugs and agent endpoints'. It distinguishes from sibling tools like query_business_agent by positioning itself as a discovery step before querying a specific business.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use this to discover which businesses are available before querying one', providing clear context for when to use. It does not explicitly list when not to use or name alternatives, but the context is sufficient given sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

subscribe_to_updates: A
Destructive

Subscribe an end-user's email to topical updates from a business (deals, schedule changes, new services). Returns a confirmation_token + confirmation_url; the user MUST click the URL within 7 days to activate. Re-subscribing an already-confirmed email merges topics without re-confirming.

Parameters (JSON Schema):
- slug (required): business slug
- topics (required): Topic tags the user wants updates on (e.g., ['deals', 'schedule_changes', 'new_services'])
- agent_id (optional): Caller-asserted agent identifier; recorded for attribution
- contact_email (required): Email to subscribe — must be confirmed via the returned token before any updates send
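The confirm-then-merge semantics above (pending until the user clicks, topic merge on re-subscribe without re-confirming) can be sketched with a small in-memory model. The status strings, record shape, and confirmation URL are assumptions for illustration, and the 7-day activation window is omitted.

```python
import secrets

_subscriptions = {}  # (slug, email) -> {"confirmed": bool, "topics": set}

def subscribe(slug, contact_email, topics):
    """Create or merge a subscription, per the documented semantics.

    A new or unconfirmed email gets a confirmation token the user must
    act on; an already-confirmed email merges topics without
    re-confirmation.
    """
    key = (slug, contact_email)
    sub = _subscriptions.get(key)
    if sub and sub["confirmed"]:
        sub["topics"] |= set(topics)  # merge, no re-confirmation needed
        return {"status": "merged", "topics": sorted(sub["topics"])}
    token = secrets.token_urlsafe(16)
    _subscriptions[key] = {"confirmed": False, "topics": set(topics)}
    return {"status": "pending",
            "confirmation_token": token,
            "confirmation_url": f"https://example.invalid/confirm/{token}"}

first = subscribe("joes-pizza-austin", "a@example.com", ["deals"])
# Simulate the user clicking the confirmation URL:
_subscriptions[("joes-pizza-austin", "a@example.com")]["confirmed"] = True
merged = subscribe("joes-pizza-austin", "a@example.com", ["schedule_changes"])
print(merged["topics"])  # ['deals', 'schedule_changes']
```

For agents, the important point is that a successful call is not a live subscription: no updates send until the user confirms.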
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description goes beyond annotations by detailing the confirmation requirement, 7-day activation window, and merge behavior on re-subscription. Annotations only indicate destructiveHint=true; the description clarifies the actual mutation is creating a pending subscription.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences with no fluff. The first states purpose and examples; the second covers the required post-call action; the third the re-subscription edge case. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description sufficiently explains return values (confirmation_token, confirmation_url) and the required user action. It covers the main use case and re-subscription edge case, making it complete for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds context for agent_id ('caller-asserted') and reiterates the confirmation requirement for contact_email. It provides a coherent picture of how parameters relate to the subscription flow.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'subscribe' and the resource 'an end-user's email to topical updates', listing example topics. It distinguishes from sibling tools like get_availability or get_quote, which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the post-call flow: a confirmation token/URL must be clicked within 7 days, and re-subscribing merges topics. It implies when to use (for subscribing) but does not explicitly compare with alternatives, though siblings are unrelated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

