
agent-broker

Server Details

AI agents find and book with SMBs. 13 MCP tools, TCPA/GDPR/CASL compliance.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL
Repository: basilalshukaili/agentbroker
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.4/5 across 13 of 13 tools scored. Lowest: 3.7/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: lead capture, booking, messaging, business discovery, verification, inbound handling, escalation, cost preview, status/outcome retrieval, and self-test. Even similar tools like send_message and send_transactional_confirmation are well-delineated by reliability and use case.

Naming Consistency: 5/5

All 13 tools use a consistent verb_noun pattern in snake_case (e.g., capture_lead, schedule_appointment, get_outcome), making it easy to infer functionality from names.

Tool Count: 5/5

With 13 tools, the server is well-scoped for its domain of SMB operations. Each tool earns its place, covering the essential workflows without unnecessary bloat.

Completeness: 4/5

The tool set covers the core lifecycle: find, verify, capture lead, book, message (general and transactional), handle inbound, escalate, preview cost, and check status. A minor gap is the absence of a tool to list existing leads or appointments, but the primary workflows are complete.

Available Tools

13 tools
capture_lead (A)
Idempotent

Structured intake of a prospect into an SMB's funnel with validation, enrichment hooks, and deduplication. Inserts into the SMB's CRM or direct-booking pipeline if available.

EXAMPLE USER QUERIES THAT MATCH THIS TOOL: user: "Tell smb_xyz I'm interested and want a callback" -> call capture_lead({"smb_id": "smb_xyz", "prospect": {"name": "Jane", "phone": "+15551234567", "email": "jane@example.com"}, "source": "agent"})

WHEN TO USE: Use when a potential customer has expressed interest in an SMB's service and you want to ensure they are registered in the SMB's pipeline for follow-up. WHEN NOT TO USE: Do not use for confirmed bookings — use schedule_appointment. Do not use for bulk list imports. COST: $varies per_lead LATENCY: ~variesms EXECUTION: sync_fast (use get_outcome to retrieve result)

Parameters (JSON Schema)
- smb_id (required)
- source (optional): Channel or campaign that sourced this lead
- prospect (required)
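The example query above implies a minimal argument shape for capture_lead. A sketch in Python of assembling that payload client-side, assuming the field names from the example (this helper is hypothetical, not part of the server):

```python
# Hypothetical builder for a capture_lead arguments dict; field names follow
# the example query above, not a published schema.
def build_capture_lead(smb_id, name, phone=None, email=None, source="agent"):
    """Assemble the arguments an MCP client would send to capture_lead."""
    if not smb_id:
        raise ValueError("smb_id is required")
    prospect = {"name": name}
    if phone:
        prospect["phone"] = phone  # E.164 format per the example (+15551234567)
    if email:
        prospect["email"] = email
    return {"smb_id": smb_id, "prospect": prospect, "source": source}

payload = build_capture_lead(
    "smb_xyz", "Jane", phone="+15551234567", email="jane@example.com"
)
```

Because execution is sync_fast with result retrieval via get_outcome, the returned operation identifier (whatever its actual field name) is what you would pass to get_outcome afterwards.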
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate idempotentHint=true, readOnlyHint=false, and destructiveHint=false. The description adds context about validation, enrichment, deduplication, and that execution is sync_fast with result retrieval via get_outcome. It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections for purpose, example, when/not to use, and metadata. It is longer than necessary but front-loaded and every section adds value. The example query is particularly helpful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (nested object, 3 parameters), the absence of an output schema, and the need to call get_outcome for results, the description is somewhat incomplete. It does not explain what the outcome will contain (e.g., lead ID, status) or describe potential error conditions. The information is adequate but could be more thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has low description coverage (33%) – only source and consent_record_id have descriptions. The tool description does not add meaningful detail about the required smb_id or prospect fields beyond their names. It mentions smb_id and source but does not explain their formats or constraints, leaving the agent to infer from the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Structured intake of a prospect into an SMB's funnel with validation, enrichment hooks, and deduplication.' It also provides an example user query and explicitly says it inserts into CRM or direct-booking pipeline. It distinguishes from sibling tools like schedule_appointment and bulk imports.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit 'WHEN TO USE' and 'WHEN NOT TO USE' sections, stating to use when a prospect expresses interest and not for confirmed bookings (use schedule_appointment) or bulk imports. It clearly guides the agent on proper usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

escalate_to_human (A)
Destructive, Idempotent

Hand off an in-flight task to a human operator with a full context bundle: transcript, prior actions, identifiers, and a recommended next step.

EXAMPLE USER QUERIES THAT MATCH THIS TOOL: user: "I'm stuck — get a human at smb_xyz to call me back" -> call escalate_to_human({"smb_id": "smb_xyz", "reason": "agent_blocked", "summary": "Cannot resolve via automated channels"})

WHEN TO USE: Use when automated resolution has failed after channel-fallback exhaustion, when the task requires human judgment, or when the customer has explicitly requested human contact. WHEN NOT TO USE: Do not use as a first resort. Escalate only after automated resolution attempts. COST: $varies per_escalation LATENCY: ~variesms EXECUTION: async_by_default (use get_outcome to retrieve result)

Parameters (JSON Schema)
- reason (required)
- smb_id (required)
- context (required)
- priority (optional, default: normal)
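The "not a first resort" guidance above can be enforced client-side before the call is ever made. A minimal sketch, assuming the reason value and context shape from the example query (the guard itself is hypothetical):

```python
# Hypothetical guard mirroring the WHEN NOT TO USE guidance: only escalate
# after automated resolution attempts are exhausted.
def build_escalation(smb_id, reason, summary, attempts_exhausted, priority="normal"):
    if not attempts_exhausted:
        raise RuntimeError(
            "escalate_to_human is not a first resort; retry automated channels"
        )
    return {
        "smb_id": smb_id,
        "reason": reason,            # e.g. "agent_blocked"; enum values are undocumented
        "context": {"summary": summary},
        "priority": priority,        # schema default is "normal"
    }
```

Since execution is async_by_default, the result would then be retrieved via get_outcome.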
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses async execution, cost, latency, and need to retrieve result via get_outcome. No contradiction with annotations (destructiveHint=true aligns with handoff).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with purpose, structured with sections and example. Vague cost/latency lines are minor drawbacks; overall efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers usage, parameters, and async behavior. Refers to get_outcome for results. Lacks explicit return value description, but acceptable given no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema coverage, description provides an example mapping and explains context structure (transcript, prior actions, etc.). However, reason enum values and priority are not detailed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description states 'Hand off an in-flight task to a human operator' with a specific verb and resource. It clearly distinguishes from siblings as the only escalation tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Includes explicit WHEN TO USE and WHEN NOT TO USE sections, listing conditions like automated resolution failure or customer request, and warning against first resort use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_business (A)
Read-only, Idempotent

Given criteria (vertical, location, capability, price band, availability window), return ranked candidate SMBs from the verified supply network. Returns only curated, verified, transactable businesses — not raw directory results.

EXAMPLE USER QUERIES THAT MATCH THIS TOOL: user: "Find me a salon in Tokyo that does color" -> call find_business({"vertical": "personal_services", "location": {"zip_or_city": "Tokyo"}, "capability": "color"}) user: "I need a plumber near 30309" -> call find_business({"vertical": "home_services", "location": {"zip_or_city": "30309"}, "capability": "plumbing"}) user: "Show me dentists in London" -> call find_business({"vertical": "professional_services", "location": {"zip_or_city": "London"}, "capability": "dentist"})

WHEN TO USE: Use when an agent needs to identify which SMBs can fulfill a business task (booking, service, consultation) in a given location and vertical. Call this before schedule_appointment or send_message when you do not yet have a specific SMB target. WHEN NOT TO USE: Do not use as a general directory or browsing surface. Do not use when you already have a specific verified SMB identifier. Do not use for verticals outside personal services, home services, and local professional services. COST: $varies per_call LATENCY: ~variesms

Parameters (JSON Schema)
- location (required)
- vertical (required): Service vertical to search within
- capability (optional): Specific service capability required, e.g. 'haircut', 'plumbing', 'tax_consultation'
- price_band (optional)
- max_results (optional)
- availability_window (optional)
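The three example queries above share one payload shape. A sketch of a request builder, assuming the vertical names and the nested location object from those examples (the builder is hypothetical):

```python
# Hypothetical request builder for find_business; verticals follow the
# examples above (personal_services, home_services, professional_services).
def build_find_business(vertical, zip_or_city, capability=None, max_results=None):
    payload = {"vertical": vertical, "location": {"zip_or_city": zip_or_city}}
    if capability:
        payload["capability"] = capability
    if max_results:
        payload["max_results"] = max_results
    return payload
```

A typical flow would call this before schedule_appointment or send_message, per the WHEN TO USE guidance, then pick an smb_id from the ranked results.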
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds context beyond annotations by noting that it returns only curated/verified/transactable businesses, has variable cost and latency, and does not contradict any annotations. Annotations already indicate it's a safe read, and the description reinforces this.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured with clear sections, including examples and usage guidelines. Every part provides value, and it avoids unnecessary repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, nested objects, no output schema), the description provides sufficient context: core functionality, examples, usage boundaries, and cost/latency notes. It is complete for an AI agent to select and use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has low description coverage (33%), but the description compensates with example user queries and parameter mappings. However, it does not fully explain the meaning of each parameter beyond the examples, which leaves some ambiguity for parameters like price_band and availability_window.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action: returning ranked candidate SMBs from a verified supply network based on specified criteria. It distinguishes itself from raw directory results by emphasizing curated, verified, and transactable businesses.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit 'WHEN TO USE' and 'WHEN NOT TO USE' sections, providing clear guidance on when to call this tool and when to avoid it, with alternatives like schedule_appointment or send_message when a specific SMB identifier is known.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_outcome (A)
Read-only, Idempotent

Retrieve the final OutcomeReceipt for a completed operation.

WHEN TO USE: Use after get_status returns success/failure/partial to retrieve the full result with cost and reason codes. WHEN NOT TO USE: Do not use for operations still in pending/executing state — use get_status first. COST: $varies per_call LATENCY: ~variesms

Parameters (JSON Schema)
- operation_id (required)
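The status-then-outcome handoff described above can be sketched as a small dispatcher. `call_tool` stands in for whatever MCP client invocation your runtime provides, and the `state` field name is an assumption, since neither tool publishes an output schema:

```python
# Sketch of the get_status -> get_outcome handoff. `call_tool` and the
# "state" response field are hypothetical stand-ins, not documented API.
TERMINAL = {"success", "failure", "partial"}  # terminal states per WHEN TO USE above

def fetch_outcome_if_done(call_tool, operation_id):
    """Return the OutcomeReceipt if the operation has finished, else None."""
    status = call_tool("get_status", {"operation_id": operation_id})["state"]
    if status in TERMINAL:
        return call_tool("get_outcome", {"operation_id": operation_id})
    return None  # still pending/executing: keep polling get_status
```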
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and idempotent behavior. The description adds context about requiring a completed operation and mentions cost and latency placeholders, adding some value beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with three clear sections: purpose, usage guidelines, and performance. Every sentence adds value, and the structure is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description mentions the return type (OutcomeReceipt) and key elements (cost and reason codes). It covers prerequisites and performance, making it fairly complete for a retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, and the description does not add any explanation for the operation_id parameter. While the parameter is simple, the description fails to compensate for the lack of schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Retrieve' and the resource 'OutcomeReceipt', specifying it is for a completed operation. It distinguishes itself from sibling tools like get_status by focusing on the final result.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides WHEN TO USE and WHEN NOT TO USE, including a reference to get_status as the appropriate precursor. This gives clear decision criteria for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_status (A)
Read-only, Idempotent

Query the current state of any in-flight async operation by operation_id.

WHEN TO USE: Use to poll the state of a pending_async operation when no webhook callback has arrived or to check progress. WHEN NOT TO USE: Do not poll more frequently than once per 10 seconds — use webhook delivery for real-time updates instead. COST: $varies per_call LATENCY: ~variesms

Parameters (JSON Schema)
- operation_id (required)
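The once-per-10-seconds polling floor above translates directly into a client-side loop. A sketch, with `call_tool` and the `state` field assumed as before (no output schema is published):

```python
import time

# Hypothetical polling loop honoring the documented 10-second floor.
# `call_tool` and the "state" field are stand-ins, not documented API.
def poll_status(call_tool, operation_id, interval=10.0, max_wait=120.0):
    """Poll get_status no more than once per `interval` seconds until terminal."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        state = call_tool("get_status", {"operation_id": operation_id})["state"]
        if state not in ("pending", "executing"):
            return state  # caller can now fetch the result via get_outcome
        time.sleep(interval)
    raise TimeoutError(f"operation {operation_id} still running after {max_wait}s")
```

Webhook delivery, as the description notes, remains the better choice for real-time updates; this loop is the fallback when no callback has arrived.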
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark as read-only and idempotent. Description adds context: it polls async operations, mentions cost and latency variability, and advises polling frequency. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise, with structured headings (WHEN TO USE, WHEN NOT TO USE, COST, LATENCY). Every sentence adds value and is front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter polling tool without output schema, the description covers purpose, usage conditions, and behavioral traits. Minor gap: no indication of expected return shape or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage for 'operation_id'. Description explains that operation_id identifies the in-flight async operation, adding semantic meaning beyond the raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'Query', the resource 'state of any in-flight async operation', and the key identifier 'operation_id'. It effectively distinguishes from sibling tools by focusing on polling async operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit WHEN TO USE (poll when no webhook callback) and WHEN NOT TO USE (not more than once per 10 seconds). Implies webhook as alternative for real-time updates, though not naming a sibling directly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

handle_inbound (A)
Idempotent

Receive, classify, and route inbound messages on behalf of an SMB. Classifies intent (booking request, cancellation, inquiry, complaint), enriches with context, and routes to the appropriate handler or escalation path.

EXAMPLE USER QUERIES THAT MATCH THIS TOOL: user: "Process this customer reply for me: 'Yes I want to book Tuesday'" -> call handle_inbound({"raw_message": "Yes I want to book Tuesday", "channel": "sms"})

WHEN TO USE: Use when an SMB needs inbound message triage — classifying incoming contact-form submissions, SMS replies, voicemails, or email inquiries. WHEN NOT TO USE: Do not use for outbound communications. Do not use for compliance-flagged recipient lists without verified opt-in records. COST: $varies per_inbound LATENCY: ~variesms EXECUTION: async_by_default (use get_outcome to retrieve result)

Parameters (JSON Schema)
- sender (optional)
- smb_id (required)
- raw_message (required)
- routing_rules (optional): Optional override routing policy for this SMB
- inbound_channel (required)
- received_at_iso (optional)
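A payload builder can pre-validate the channel before the call. Note the channel names below are inferred from the WHEN TO USE text (SMS, email, voicemail, contact form) and may not match the server's actual enum; the builder itself is hypothetical:

```python
# Hypothetical handle_inbound payload builder; channel names are inferred
# from the WHEN TO USE prose, not a published enum.
INBOUND_CHANNELS = {"sms", "email", "voicemail", "contact_form"}

def build_inbound(smb_id, raw_message, inbound_channel, sender=None):
    if inbound_channel not in INBOUND_CHANNELS:
        raise ValueError(f"unsupported channel: {inbound_channel}")
    payload = {
        "smb_id": smb_id,
        "raw_message": raw_message,
        "inbound_channel": inbound_channel,
    }
    if sender:
        payload["sender"] = sender
    return payload
```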
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses async execution, cost, latency, and general routing behavior. Annotations add idempotentHint and destructiveHint, no contradiction. But doesn't fully describe side effects (e.g., record creation) beyond classification/routing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is well-structured with clear sections (general purpose, example, when to use/not use). No wasted sentences, but slightly verbose in listing categories (booking request, cancellation, etc.).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 6 parameters and no output schema, the description covers general purpose and usage but lacks parameter guidance and expected output format. Good for high-level understanding but insufficient for parameter specification.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is only 17% (only routing_rules has description). The description does not explain most parameters (sender, smb_id, received_at_iso, etc.) beyond their names. The example shows raw_message and channel but no detail on others.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines the tool's function: 'Receive, classify, and route inbound messages on behalf of an SMB.' It specifies verb (handle), resource (inbound messages), and distinguishes from siblings like schedule_appointment or capture_lead by focusing on triage and routing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

WHEN TO USE and WHEN NOT TO USE sections provide explicit guidance, e.g., not for outbound or compliance-flagged lists. Example queries help understand context. However, it doesn't compare with siblings like schedule_appointment for booking-specific cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

import_booking_url (A)
Idempotent

Turn ANY public booking URL (Cal.com, Calendly, Doctolib, Booksy, Fresha, OpenTable, Setmore, Square, Acuity, Schedulista, Squarespace, BookMyCity) into a callable smb_id you can immediately use with schedule_appointment, send_message, or capture_lead. Idempotent — calling twice returns the same smb_id.

EXAMPLE USER QUERIES THAT MATCH THIS TOOL: user: "Book me a haircut at https://cal.com/jane-salon" -> call import_booking_url({"booking_url": "https://cal.com/jane-salon", "vertical": "personal_services"}) -> then schedule_appointment({"smb_id": "<from_above>", "action": "book"}) user: "Schedule with this dentist: https://www.doctolib.fr/dentiste/paris/jean-dupont" -> call import_booking_url({"booking_url": "https://www.doctolib.fr/dentiste/paris/jean-dupont"}) user: "Reserve a table at https://www.opentable.com/r/acme-bistro" -> call import_booking_url({"booking_url": "https://www.opentable.com/r/acme-bistro", "vertical": "restaurants"})

WHEN TO USE: Call this FIRST whenever the user provides a specific booking URL (cal.com/handle, calendly.com/handle/event, doctolib.fr/..., booksy.com/..., opentable.com/r/..., etc.). User patterns that match: 'book me at https://cal.com/...', 'schedule with calendly.com/jane/intro', 'reserve a table at opentable.com/r/...', 'I want to book this dentist: https://www.doctolib.fr/...'. After importing, the returned smb_id can be passed straight to schedule_appointment. WHEN NOT TO USE: Do not use if the user only describes a business by name without a URL — call find_business instead. Do not use for arbitrary websites that are not on the supported booking-platform list (use /supply/platforms to see all 12). COST: $0.005 per_call LATENCY: ~600ms

Parameters (JSON Schema)
- vertical (optional): Best-guess vertical. If omitted, inferred from the platform (e.g., Doctolib -> healthcare, OpenTable -> restaurants).
- booking_url (required): Full URL the user supplied. Must point at one of the 12 supported booking platforms; auto-detected from the host.
- capabilities (optional): Free-form capability tags (e.g., ['haircut','color','blowdry']).
- country_code (optional): ISO 3166-1 alpha-2 (e.g. 'US', 'FR'). Used for compliance routing on later send_message calls.
- business_name (optional): Optional override. If omitted, the business name is auto-extracted from the page's <title> or og:title.
- contact_email (optional)
- contact_phone (optional): If omitted, the platform integration handles outreach.
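Since the description says platforms are auto-detected from the host and unsupported sites should not be sent at all, a cheap client-side pre-check is possible. A sketch using a subset of the platforms named above (the full list lives at /supply/platforms; the helper is hypothetical):

```python
from urllib.parse import urlparse

# Subset of the 12 supported platforms named in the description above;
# the authoritative list is served at /supply/platforms.
KNOWN_HOSTS = {"cal.com", "calendly.com", "doctolib.fr", "opentable.com", "booksy.com"}

def looks_importable(booking_url):
    """Hypothetical pre-check before calling import_booking_url."""
    host = urlparse(booking_url).netloc.lower().removeprefix("www.")
    return host in KNOWN_HOSTS
```

On a match, the returned smb_id can be passed straight to schedule_appointment, as the description's chained example shows; on a miss, find_business is the fallback.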
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide idempotentHint=true and destructiveHint=false. The description reinforces idempotency and adds cost ($0.005 per call) and latency (~600ms), which are useful behavioral details beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections, front-loaded with core purpose. Every sentence adds value, and examples and usage guidelines are efficiently presented.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 7 parameters and no output schema. The description explains the return value (smb_id) and how to use it, but does not detail the full response structure. For an agent, it is mostly complete, missing only explicit output format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description does not add significant parameter semantics beyond the schema, but it illustrates usage via examples. No extra value beyond baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts public booking URLs into an smb_id for use with other tools. It uses a specific verb-resource pairing ('Turn...into a callable smb_id') and distinguishes from sibling tools like find_business.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'WHEN TO USE' and 'WHEN NOT TO USE' sections provide clear context, including alternatives like find_business for non-URL queries. Example user queries further clarify matching scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

preview_cost (A)
Read-only, Idempotent

Return an expected cost estimate, latency estimate, and success-probability estimate for a proposed call before execution. Accuracy SLO: actual cost within ±5% of preview.

EXAMPLE USER QUERIES THAT MATCH THIS TOOL: user: "How much will this SMS cost me?" -> call preview_cost({"operation": "send_message", "params": {"channel_preference": "sms"}}) user: "Estimate the cost of booking via voice fallback" -> call preview_cost({"operation": "schedule_appointment"})

WHEN TO USE: Use before any operation when the agent is operating under a budget constraint and needs to decide whether to proceed. WHEN NOT TO USE: Do not use in a hot loop — cache the result for at least 60 seconds if repeating the same preview. COST: $varies per_call LATENCY: ~variesms

Parameters (JSON Schema)
- params (required): The same request body you would pass to the operation
- operation (required)
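The budget-constraint use case above amounts to a gate in front of every paid call. A sketch, with `call_tool` as a stand-in client and the `expected_cost_usd` response field assumed (the tool publishes no output schema):

```python
# Hypothetical budget gate using preview_cost. `call_tool` and the
# "expected_cost_usd" field name are assumptions, not documented API.
def within_budget(call_tool, operation, params, budget_usd):
    """Preview a proposed call and decide whether it fits the budget."""
    preview = call_tool("preview_cost", {"operation": operation, "params": params})
    return preview["expected_cost_usd"] <= budget_usd
```

Per the WHEN NOT TO USE guidance, a real agent would cache this result for at least 60 seconds rather than re-previewing the same call in a loop.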
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly and idempotent. Description adds accuracy SLO (±5%), caching recommendation, and mentions cost/latency vary. These details go beyond annotations, though it omits error behavior or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-organized with a clear summary, examples, usage sections, and cost/latency info. Every sentence adds value; no redundancy. Front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite good guidelines, no output schema is provided, and the description does not specify the return format (e.g., JSON structure). It mentions three output fields but not their representation. This leaves ambiguity for the agent, especially for a tool that returns complex estimates.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers only 50% of parameters (params has description, operation does not). The description provides concrete examples (e.g., 'send_message' with specific params) and user query mappings, adding substantial meaning to both parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns cost, latency, and success-probability estimates for a proposed call. It uses a specific verb ('return') and resource ('cost estimate'), and distinguishes from sibling tools like send_message which execute operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'WHEN TO USE' and 'WHEN NOT TO USE' sections provide clear guidance: use under budget constraints, cache results, avoid hot loops. Examples map user queries to tool invocations, helping the agent select correctly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

schedule_appointment: A
Destructive, Idempotent

Availability lookup, hold, confirm, reschedule, or cancel appointments with an SMB. Routes through the SMB's native booking system if available, falls back to voice AI or web form.

EXAMPLE USER QUERIES THAT MATCH THIS TOOL:
user: "Book the haircut for next Tuesday at 3pm" -> call schedule_appointment({"smb_id": "smb_imp_abc", "action": "book", "service": "haircut"})
user: "Cancel my Friday appointment at smb_xyz" -> call schedule_appointment({"smb_id": "smb_xyz", "action": "cancel"})
user: "Reschedule my dental cleaning to next week" -> call schedule_appointment({"smb_id": "smb_imp_xyz", "action": "reschedule"})

WHEN TO USE: Use when an agent needs to book, reschedule, or cancel a specific appointment with a specific SMB. Requires a verified smb_id.
WHEN NOT TO USE: Do not use for bulk scheduling. Do not use without a verified SMB — call find_business and verify_business first if needed.
COST: $varies per_booking_attempt LATENCY: ~variesms EXECUTION: async_by_default (use get_outcome to retrieve result)

Parameters (JSON Schema)

- notes (optional)
- action (required)
- smb_id (required)
- service (optional)
- customer (optional)
- requested_time (optional)
- existing_appointment_id (optional): Required for reschedule/cancel
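The parameter rules above (smb_id and action required; existing_appointment_id required for reschedule and cancel) can be enforced with a small pre-flight check before the agent spends a booking attempt. This is a sketch; the validator is hypothetical, not part of the server:

```python
def validate_schedule_args(args):
    """Pre-flight check for a schedule_appointment call, based on the
    parameter table: smb_id and action are required, and
    existing_appointment_id is required for reschedule/cancel."""
    errors = []
    if not args.get("smb_id"):
        errors.append("smb_id is required")
    action = args.get("action")
    if not action:
        errors.append("action is required")
    if action in ("reschedule", "cancel") and not args.get("existing_appointment_id"):
        errors.append("existing_appointment_id is required for reschedule/cancel")
    return errors

# A cancel without the appointment id is caught before any tool call is made.
print(validate_schedule_args({"smb_id": "smb_xyz", "action": "cancel"}))
# -> ['existing_appointment_id is required for reschedule/cancel']
```

Running the check client-side avoids paying the per_booking_attempt cost for a call that would fail validation anyway.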
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds cost, latency, and async execution details beyond annotations. It also explains routing logic. Annotations already indicate destructive and idempotent hints, and the description is consistent. Slight deduction for not clarifying idempotency nuances per action type.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections and front-loaded purpose. It includes examples which are helpful but slightly lengthy. Every sentence adds value, though some repetition exists (e.g., action types repeated in examples).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 params, nested objects, multiple actions, async execution), the description covers purpose, routing, prerequisites, cost, latency, and retrieval via get_outcome. It is thorough for an agent to understand and invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 14% (only existing_appointment_id has a description). The description does not explain nested objects like customer, requested_time, or the notes field. Examples partially show usage of smb_id, action, and service but are insufficient for full parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly enumerates the actions (book, reschedule, cancel, check availability) and the target (appointments with SMBs). It distinguishes from siblings like import_booking_url and handle_inbound by specifying routing and prerequisites, and the examples further illustrate usage.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'WHEN TO USE' and 'WHEN NOT TO USE' sections state the tool requires a verified smb_id, should not be used for bulk scheduling, and that find_business/verify_business should be called first if needed. This provides clear decision criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

self_test: A
Read-only, Idempotent

Live capability probe that verifies the service is healthy, each claimed operation is reachable, and supply network size is current. Use to verify integration before production use.

EXAMPLE USER QUERIES THAT MATCH THIS TOOL:
user: "Run a health check before I send the broadcast" -> call self_test({})

WHEN TO USE: Use at agent startup, before high-stakes task sequences, or after receiving unexpected errors to check if the service is degraded.
WHEN NOT TO USE: Do not call more than once per minute in production.
COST: $varies free LATENCY: ~variesms

Parameters (JSON Schema)

No parameters
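The once-per-minute limit in the description maps naturally onto a client-side throttle. A minimal sketch, assuming the probe returns some health dict (the `Throttled` wrapper and `fake_self_test` are illustrative, not part of the server):

```python
import time

class Throttled:
    """Call a zero-argument health probe at most once per min_interval seconds,
    returning the cached result in between."""
    def __init__(self, probe, min_interval=60.0):
        self.probe = probe
        self.min_interval = min_interval
        self.last = None    # monotonic timestamp of the last real call
        self.cached = None  # last probe result

    def __call__(self):
        now = time.monotonic()
        if self.last is None or now - self.last >= self.min_interval:
            self.cached = self.probe()
            self.last = now
        return self.cached

# Hypothetical stand-in that counts how often the real probe would run.
probe_calls = []
def fake_self_test():
    probe_calls.append(1)
    return {"healthy": True}

health = Throttled(fake_self_test)
health()
health()  # within the same minute: returns the cached result
print(len(probe_calls))  # -> 1
```

An agent can then call `health()` freely at the start of every task sequence without ever exceeding the production rate limit.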

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint=true, idempotentHint=true, destructiveHint=false) already indicate it's safe. The description adds context about being a health check and verifying supply network size, going beyond annotations without contradiction. Cost and latency are noted (though non-specific).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with labeled sections (description, example, when to use/not use, cost, latency). Clear and easy to scan, though could be slightly more compact. No wasted sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple health-check tool with no parameters or output schema, the description covers all essential information: purpose, usage, constraints, and example. It is fully self-contained and actionable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters and schema coverage is 100%, so the description adds no param details. Example call 'self_test({})' confirms no params needed. Baseline 4 is appropriate for a parameterless tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it's a 'live capability probe' for health verification, differentiating it from sibling tools like send_message or capture_lead. The verb 'self_test' combined with 'verify the service is healthy' makes the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when to use (startup, high-stakes tasks, after errors) and when not to use (more than once per minute). It lacks direct comparison to sibling tools, but the context is clear enough for an agent to decide.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

send_message: A
Destructive, Idempotent

Send an outbound message to an SMB or its customer across channels (SMS, email, chat, voice, push). Channel is abstracted — you specify intent and recipient; the service selects and falls back across channels.

EXAMPLE USER QUERIES THAT MATCH THIS TOOL:
user: "Text the salon I'll be 10 minutes late" -> call send_message({"recipient_id": "smb_xyz", "channel_preference": "sms", "message": {"body": "Will be 10 minutes late."}, "country_code": "US"})
user: "Email the dentist about insurance" -> call send_message({"recipient_id": "smb_xyz", "channel_preference": "email", "message": {"body": "Do you accept Cigna?"}})

WHEN TO USE: Use for outbound business communication: appointment reminders, follow-ups, marketing offers (with confirmed opt-in), transactional messages, or inbound response handling.
WHEN NOT TO USE: Do not use for OTP or critical transactional confirmations — use send_transactional_confirmation instead. Do not use for recipients without consent where required (SMS marketing, EU recipients).
COST: $varies per_message LATENCY: ~variesms EXECUTION: sync_fast (use get_outcome to retrieve result)

Parameters (JSON Schema)

- content (required)
- recipient (required)
- send_at_iso (optional): Schedule for future delivery; omit for immediate
- message_type (required)
- preferred_channel (optional, default: auto)
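Since this server is reached over Streamable HTTP, a call ultimately travels as a JSON-RPC `tools/call` request. The envelope below follows the MCP wire format; the argument names come from the schema table (content, recipient, message_type) rather than the recipient_id/message spelling in the description's examples, and the message_type value is an assumption:

```python
import json

def tools_call_payload(tool_name, arguments, request_id=1):
    """Build the JSON-RPC 2.0 envelope MCP uses for tools/call."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

payload = tools_call_payload("send_message", {
    "recipient": "smb_xyz",
    "message_type": "follow_up",       # assumed value, not from the schema
    "content": {"body": "Will be 10 minutes late."},
})
print(payload["method"])  # -> tools/call
print(json.dumps(payload["params"]["arguments"]["recipient"]))  # -> "smb_xyz"
```

Whichever spelling the live schema actually uses, the envelope itself is fixed by the protocol, so only the `arguments` dict needs adjusting.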
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=false, destructiveHint=true, idempotentHint=true. Description adds that channel is abstracted with fallback, execution is sync_fast (result via get_outcome), and includes cost/latency. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is well-organized with sections (EXAMPLE, WHEN TO USE/NOT, COST, LATENCY, EXECUTION). It is front-loaded with purpose. However, the 'COST' and 'LATENCY' lines are minimal placeholders ('$varies per_message', '~variesms') that could be omitted or made more informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (5 parameters, nested objects, no output schema), the description covers usage guidelines, examples, exclusions, cost, latency, and execution mode. It mentions that results are retrievable via get_outcome, addressing the lack of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (20%), but the description compensates with example JSON calls that illustrate usage of key parameters like recipient_id, channel_preference, message, and country_code. However, not all parameters (e.g., send_at_iso, preferred_channel enum) are explained in prose, though the schema itself has descriptions for some.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool sends outbound messages across multiple channels with channel abstraction. It is specific about the verb ('send'), resource ('outbound message to SMB or its customer'), and scope ('across channels: SMS, email, chat, voice, push'). It distinguishes from siblings by explicitly naming send_transactional_confirmation in the WHEN NOT TO USE section.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit WHEN TO USE and WHEN NOT TO USE sections provide clear guidance on appropriate contexts (e.g., appointment reminders, transactional messages) and exclusions (e.g., not for OTP/critical confirmations, not without consent). It also mentions cost, latency, and execution mode, aiding in tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

send_transactional_confirmation: A
Destructive, Idempotent

Idempotent transactional messages: OTPs, booking confirmations, payment receipts, cancellation notices. Guaranteed delivery via redundant channels.

EXAMPLE USER QUERIES THAT MATCH THIS TOOL:
user: "Send the booking confirmation receipt to my email" -> call send_transactional_confirmation({"recipient_id": "user@example.com", "channel_preference": "email", "confirmation_type": "booking"})

WHEN TO USE: Use for any message that MUST be delivered reliably — OTPs, booking confirmations, receipts. Do not use for marketing.
WHEN NOT TO USE: Do not use for marketing or promotional messages. Do not use for conversational messages.
COST: $varies per_message LATENCY: ~variesms EXECUTION: sync_fast (use get_outcome to retrieve result)

Parameters (JSON Schema)

- data (required): Type-specific payload; e.g., {otp_code} for otp, {appointment_time, smb_name} for booking_confirmation
- recipient (required)
- confirmation_type (required)
- preferred_channel (optional, default: sms)
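The review below flags that the description's example uses field names that do not match this schema. A sketch that assembles arguments using the schema spelling (recipient, confirmation_type, data, preferred_channel) and checks the type-specific data keys from the table; the builder itself is hypothetical:

```python
# Required keys in `data` per confirmation_type, taken from the schema note above.
REQUIRED_DATA = {
    "otp": {"otp_code"},
    "booking_confirmation": {"appointment_time", "smb_name"},
}

def build_confirmation_args(confirmation_type, recipient, data, preferred_channel="sms"):
    """Assemble arguments using the schema field names, not the
    recipient_id / channel_preference spelling in the description's example."""
    missing = REQUIRED_DATA.get(confirmation_type, set()) - set(data)
    if missing:
        raise ValueError(
            f"missing data fields for {confirmation_type}: {sorted(missing)}")
    return {
        "confirmation_type": confirmation_type,
        "recipient": recipient,
        "data": data,
        "preferred_channel": preferred_channel,
    }

args = build_confirmation_args("otp", "user@example.com", {"otp_code": "123456"})
print(args["preferred_channel"])  # -> sms (the schema default)
```

Catching a wrong confirmation_type/data pairing client-side matters more here than for other tools, since a malformed OTP send cannot simply be retried with impunity.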
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (idempotent, destructive), the description adds that delivery is guaranteed via redundant channels, execution is sync_fast, and results are retrieved via get_outcome. However, the vague cost/latency values ("varies") provide little practical insight.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is structured with sections but includes unnecessary details like "COST: $varies" and "LATENCY: ~variesms" that add no value. The example takes up space and is inaccurate. The core guidance is clear but could be more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 params, nested objects, no output schema), the description covers when to use, execution mode, and result retrieval. However, it lacks explanation of the recipient object structure, the data payload per confirmation_type, and the default channel behavior, leaving gaps for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has only 25% description coverage, and the main description does not explain the parameters beyond a flawed example. The example uses incorrect field names ("recipient_id", "channel_preference", "booking") that do not match the schema, which can mislead the agent. No further parameter guidance is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it sends transactional messages like OTPs, confirmations, receipts, and cancellations. It distinguishes from siblings by explicitly excluding marketing and conversational messages. However, the example uses mismatched field names (e.g., "recipient_id" instead of "recipient"), which could confuse an AI agent.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit WHEN TO USE and WHEN NOT TO USE sections, clearly stating that the tool is for reliable transactional messages only. It directly excludes marketing, promotional, and conversational messages, which differentiates it from siblings like send_message.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_business: A
Read-only, Idempotent

Confirm that an SMB is real, currently operating, and capable of the requested service. Performs a live capability probe against the business's channel.

EXAMPLE USER QUERIES THAT MATCH THIS TOOL:
user: "Confirm smb_imp_abc actually does emergency plumbing" -> call verify_business({"smb_id": "smb_imp_abc", "capability_to_verify": "emergency_plumbing"})

WHEN TO USE: Use before sending communications or scheduling if you have an unverified SMB identifier, or if the agent's task requires confirmed capability (e.g., 'I need to be sure they do emergency plumbing').
WHEN NOT TO USE: Do not use if the SMB was returned from find_business within the last 24 hours — those results are already verified.
COST: $varies per_call LATENCY: ~variesms

Parameters (JSON Schema)

- smb_id (required)
- capability_to_verify (optional)
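The 24-hour freshness rule in the description can be encoded as a simple client-side gate. A sketch under the assumption that the agent records when each smb_id came back from find_business (`found_at` is a hypothetical bookkeeping dict):

```python
import time

DAY_SECONDS = 24 * 60 * 60

def needs_verification(smb_id, found_at, now=None):
    """Return True unless the SMB came from find_business within the last
    24 hours (those results are already verified, per the guidance above).
    found_at maps smb_id -> epoch time at which find_business returned it."""
    now = time.time() if now is None else now
    ts = found_at.get(smb_id)
    return ts is None or now - ts >= DAY_SECONDS

found_at = {"smb_imp_abc": 1_000_000.0}
print(needs_verification("smb_imp_abc", found_at, now=1_000_000.0 + 3600))            # -> False
print(needs_verification("smb_imp_abc", found_at, now=1_000_000.0 + 2 * DAY_SECONDS))  # -> True
print(needs_verification("smb_unknown", found_at, now=1_000_000.0))                    # -> True
```

Gating calls this way saves the per_call cost on SMBs that find_business has already vouched for.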
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds that it 'performs a live capability probe' and mentions cost and latency (though vague). No contradiction with annotations, and the additional context is useful but not exhaustive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise: a single purpose sentence, an example, a when-to-use section, and cost/latency notes. Each sentence adds value, and the key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but the tool's output (verification result) is implicitly clear from the purpose. With annotations covering safety, the description is fairly complete for a simple verification tool. Could mention return format, but not essential.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0% description coverage, but the description compensates with an example demonstrating both parameters (smb_id and capability_to_verify) and their roles. The WHEN TO USE text clarifies that smb_id is an unverified identifier. While not detailing formats or constraints, the provided context is sufficient for typical use.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description states 'Confirm that an SMB is real, currently operating, and capable of the requested service. Performs a live capability probe against the business's channel.' This provides a specific verb ('confirm') and resource ('SMB'), and distinguishes from sibling tools like find_business which returns unverified results.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides 'WHEN TO USE' and 'WHEN NOT TO USE', including a direct alternative (find_business within 24 hours). Also includes example user queries that match the tool, offering clear contextual guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
