Servicialo MCP Server
Server Details
Open protocol for booking and scheduling professional services via AI agents
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: servicialo/mcp-server
- GitHub Stars: 1
- Server Listing: Servicialo
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
94 tools

admin_create_service (grade A)
Add a bookable service to an organization. Use after admin_create_organization. Auto-discoverable by default. If the org has exactly one active provider, the service is auto-assigned to them. With multiple providers, use service_assign_provider to assign manually — unassigned services block admin_toggle_discoverable. Next step: admin_set_availability to configure the provider schedule. Requires X-Org-Api-Key header.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | | |
| price | Yes | | |
| apiKey | No | | |
| orgSlug | Yes | | |
| currency | No | | |
| vertical | No | | |
| description | No | | |
| duration_minutes | Yes | | |
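The required/optional split in the table above can be sketched as a payload builder. Everything below is a hypothetical illustration: the org slug and service values are invented, and only the parameter names come from the table.

```python
# Parameters marked "Yes" in the table above (names taken from the schema).
REQUIRED = {"name", "price", "orgSlug", "duration_minutes"}

def build_create_service_args(**kwargs) -> dict:
    """Assemble arguments for an admin_create_service call, checking required keys."""
    missing = REQUIRED - kwargs.keys()
    if missing:
        raise ValueError(f"missing required parameters: {sorted(missing)}")
    return kwargs

args = build_create_service_args(
    orgSlug="acme-clinic",            # invented example slug
    name="Initial Consultation",
    price=50,
    duration_minutes=30,
    currency="USD",                   # optional per the table
)
```

A builder like this surfaces a missing required field before the call is made, which matters here because the schema itself documents none of the parameters.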
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden and handles it effectively. It reveals the auto-assignment logic (single-provider auto-assignment), the default discoverability state ('Auto-discoverable by default'), and blocking side effects, and it includes the auth requirement ('Requires X-Org-Api-Key header'). Only error-handling and rate-limit details are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured with front-loaded purpose, followed by workflow sequence, behavioral constraints, and auth requirements. No redundant words; every sentence conveys distinct operational intelligence. Dense but scannable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers the multi-step workflow context and tool relationships, but insufficient for the parameter complexity (8 params, 0% schema coverage). With no output schema, the description should ideally explain what successful creation returns or validation rules, which it omits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, so description must compensate but fails to document most parameters. Mentions 'X-Org-Api-Key' (likely mapping to 'apiKey' param) but provides no semantics for required fields like 'orgSlug', 'duration_minutes', 'vertical', or 'currency'. Does not clarify that 'apiKey' appears to be a header despite being in the body schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb-resource combination ('Add a bookable service to an organization') and distinguishes from sibling 'service_create' by embedding this in the explicit admin workflow ('Use after admin_create_organization'). Clearly identifies the domain and scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisites ('Use after admin_create_organization'), post-requisites ('Next step: admin_set_availability'), and clear conditional logic for when to use sibling tool 'service_assign_provider' ('With multiple providers...'). Also warns about blocking conditions ('unassigned services block admin_toggle_discoverable').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
admin_list_providers (grade A)
List active providers (professionals) for an organization. Use this to get providerId before calling admin_set_availability. The org owner is auto-provisioned as a provider. Requires X-Org-Api-Key header.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
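The providerId handoff described above can be sketched as follows. The response shape is an assumption, since no output schema is published, and all values are invented.

```python
def pick_provider_id(list_response: dict) -> str:
    """Pull a providerId out of an admin_list_providers-style response
    (field names assumed) for use with admin_set_availability."""
    providers = list_response.get("providers", [])
    if not providers:
        raise LookupError("no active providers returned")
    return providers[0]["providerId"]

# Invented example response:
response = {"providers": [{"providerId": "prov_001", "name": "Dr. Rivera"}]}
provider_id = pick_provider_id(response)
```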
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses auth requirements ('Requires X-Org-Api-Key header') and data behavior ('org owner is auto-provisioned as a provider'), but lacks details on pagination, rate limits, or return structure.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: purpose (sentence 1), usage guideline (sentence 2), behavioral context (sentence 3), and auth requirement (sentence 4). Information is front-loaded and every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter list tool without output schema, the description adequately covers invocation prerequisites and mentions providerId as a key return field. However, it could briefly describe the return structure (e.g., array of provider objects) to fully compensate for missing output schema.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. It implicitly maps 'X-Org-Api-Key header' to the apiKey parameter and 'organization' to orgSlug, but does not explicitly document parameter formats, validation rules, or where to obtain the orgSlug value.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('List') + resource ('providers/professionals') + scope ('active', 'for an organization'), clearly distinguishing it from sibling tools like provider_get (single record) or provider_create (mutation).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use the tool ('Use this to get providerId before calling admin_set_availability'), providing clear workflow sequencing and directly referencing the sibling tool that depends on this data.
admin_set_availability (grade A)
Replace the weekly availability schedule for a provider (not additive — overwrites all existing blocks). Get providerId from admin_list_providers first. Schedule uses day names and HH:MM times. Requires X-Org-Api-Key header.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
| schedule | Yes | | |
| providerId | Yes | | |
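The "day names and HH:MM times" format might look like the sketch below. The exact schedule schema is not published, so the field names here are assumptions and the values are invented.

```python
def build_availability_args(org_slug: str, provider_id: str, schedule: list) -> dict:
    """Arguments for admin_set_availability. Note: the call REPLACES the
    whole weekly schedule; it is not additive."""
    return {"orgSlug": org_slug, "providerId": provider_id, "schedule": schedule}

args = build_availability_args(
    "acme-clinic",   # invented slug
    "prov_001",      # from admin_list_providers
    [
        {"day": "monday", "start": "09:00", "end": "17:00"},
        {"day": "wednesday", "start": "10:00", "end": "14:00"},
    ],
)
```

Because the operation overwrites all existing blocks, a caller should fetch and merge the current schedule before sending a payload like this.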
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses destructive overwrite behavior, authentication requirements, and workflow dependencies. Minor deduction for not describing return values or error states on failure.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste. Front-loaded with critical destructive warning. Each sentence delivers distinct value: purpose/warning, prerequisite, format specification, and authentication. No redundant content.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive admin operation with 4 parameters, no annotations, and no output schema, the description covers purpose, prerequisites, destructive nature, and parameter formats effectively. Lacks only return value description and error handling guidance.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. Description compensates partially: explains providerId source, schedule format ('day names and HH:MM times'), and implies apiKey maps to X-Org-Api-Key header. However, orgSlug is undocumented and apiKey parameter semantics remain ambiguous.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Replace') + resource ('weekly availability schedule') + target ('provider'). The parenthetical 'not additive — overwrites all existing blocks' clearly distinguishes this from additive update tools and the sibling getter availability_get_provider_schedule.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit prerequisites ('Get providerId from admin_list_providers first'), destructive behavior warning ('overwrites all existing blocks'), and auth requirement ('Requires X-Org-Api-Key header'). Clearly indicates when to use (full replacement) vs. when not to (additive changes).
admin_toggle_discoverable (grade A)
Publish or unpublish an organization in the Servicialo global registry. Use as the last step after configuring services and availability. Note: first call after admin_create_organization may return registry_updated: false — call again to confirm. Requires X-Org-Api-Key header.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
| discoverable | Yes | | |
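The registry_updated retry quirk noted in the description can be handled with a small wrapper. Here call_tool stands in for whatever MCP client invocation you use, so this is a sketch, not a verified client API.

```python
def toggle_discoverable(call_tool, org_slug: str, discoverable: bool, max_retries: int = 1) -> dict:
    """Invoke admin_toggle_discoverable, re-calling if the result reports
    registry_updated: false (the quirk described for the first call after
    admin_create_organization)."""
    args = {"orgSlug": org_slug, "discoverable": discoverable}
    result = call_tool("admin_toggle_discoverable", args)
    while not result.get("registry_updated", True) and max_retries > 0:
        result = call_tool("admin_toggle_discoverable", args)
        max_retries -= 1
    return result
```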
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and delivers: it discloses the retry requirement ('first call...may return registry_updated: false — call again to confirm'), mentions the specific auth header needed (X-Org-Api-Key), and hints at return value structure.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences with zero waste: purpose statement, usage timing, behavioral quirk with retry logic, and auth requirement. Information is front-loaded and every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema and zero parameter descriptions in the schema, the description captures the critical implementation details (retry behavior, auth header) and workflow context needed to use this toggle effectively after organization creation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. While it conceptually explains the operation (publishing/unpublishing an org), it does not explicitly map these concepts to the parameter names (orgSlug, discoverable, apiKey) or explain that discoverable=true means publish.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with specific verbs (Publish/unpublish) and clear resource scope (organization in the Servicialo global registry), distinguishing it from sibling admin tools that handle creation or configuration rather than visibility toggling.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use as the last step after configuring services and availability' establishing clear workflow sequencing. Also references admin_create_organization in the retry note, implicitly guiding the agent on tool ordering.
agendas_create (grade A)
Create a public agenda — a shareable booking page where external clients can self-book appointments. Links to a specific provider and/or service. The agenda gets a public URL at /{orgSlug}/agenda/{slug}. Create this after services and availability are configured. Without a public agenda, clients can only be booked via the API or dashboard.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | | |
| title | No | | |
| apiKey | No | | |
| orgSlug | Yes | | |
| isActive | No | | |
| isPublic | No | | |
| description | No | | |
| proveedorId | No | | |
| showSessions | No | | |
| allowComments | No | | |
| showClientNames | No | | |
| bookingFlowOrder | No | | |
| defaultDurations | No | | |
| showProviderList | No | | |
| assignmentStrategy | No | | |
| requiresPrepayment | No | | |
| allowGroupSelection | No | | |
| allowSameDayBooking | No | | |
| allowOnDemandBooking | No | | |
| serviceSelectionMode | No | | |
| maxAdvanceBookingDays | No | | |
| providerSelectionMode | No | | |
| cancellationPolicyType | No | | |
| cancellationPenaltyType | No | | |
| showProviderUtilization | No | | |
| cancellationPenaltyValue | No | | |
| minAdvanceBookingMinutes | No | | |
| cancellationDeadlineMinutes | No | | |
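Since the description promises a public URL at /{orgSlug}/agenda/{slug}, a minimal payload only needs the two required fields. Everything below is invented example data; the URL pattern is quoted from the tool description.

```python
def build_agenda_args(org_slug: str, slug: str, **options) -> dict:
    """Minimal agendas_create payload: orgSlug and slug are the only
    required fields; any of the optional flags can ride along."""
    return {"orgSlug": org_slug, "slug": slug, **options}

def expected_public_url(org_slug: str, slug: str) -> str:
    # Pattern from the description: /{orgSlug}/agenda/{slug}
    return f"/{org_slug}/agenda/{slug}"

args = build_agenda_args("acme-clinic", "checkups", allowSameDayBooking=True)
url = expected_public_url("acme-clinic", "checkups")  # "/acme-clinic/agenda/checkups"
```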
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses key behavioral traits: it creates a public URL at a specific path pattern (/{orgSlug}/agenda/{slug}), has dependencies on pre-existing configuration (services/availability), and enables self-service booking. It does not mention error handling, idempotency, or rate limits, preventing a perfect score.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste. It front-loads the core definition, followed by URL structure, prerequisites, and usage comparison. Every sentence provides distinct, non-redundant information that aids tool selection.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (28 parameters, 0% schema coverage, no annotations, no output schema), the description adequately explains the business concept and workflow integration but leaves clear gaps in parameter documentation. It successfully explains the 'what' and 'when' but not the 'how' of the parameter interface.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate significantly. While it references orgSlug and slug via the URL path example and implies proveedorId through 'Links to a specific provider,' the many remaining parameters (title, apiKey, isActive, isPublic, description, and the booking-policy and cancellation flags) are completely undocumented. This falls short of adequately compensating for the schema gap.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly defines the tool as creating 'a public agenda — a shareable booking page where external clients can self-book appointments,' using a specific verb and resource. It clearly distinguishes this from sibling tools by contrasting it with API/dashboard booking methods and linking to provider/service entities created by other tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisites ('Create this after services and availability are configured') and clear alternative pathways ('Without a public agenda, clients can only be booked via the API or dashboard'). This directly guides the agent on when to use this tool versus booking_create or admin dashboard functions.
agendas_delete (grade A)
Delete a public agenda permanently. Cascades to related sessions booked through this agenda, comments, and service configs. Requires confirm: true. Cannot be undone.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| confirm | Yes | | |
| orgSlug | Yes | | |
| agendaId | Yes | | |
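Because the delete cascades and cannot be undone, guarding the confirm flag is a reasonable client-side pattern. The guard below is a sketch with invented names; only confirm: true comes from the tool description.

```python
def build_delete_args(org_slug: str, agenda_id: str, acknowledged: bool = False) -> dict:
    """Build an agendas_delete payload only after an explicit acknowledgement,
    since the delete cascades to sessions, comments, and service configs."""
    if not acknowledged:
        raise PermissionError("agendas_delete is permanent; pass acknowledged=True")
    return {"orgSlug": org_slug, "agendaId": agenda_id, "confirm": True}

args = build_delete_args("acme-clinic", "ag_42", acknowledged=True)
```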
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Excellently discloses cascade effects ('related sessions... comments... service configs'), validation logic ('confirm: true'), and irreversibility. Zero contradiction with implied destructive nature.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: action/scope, cascade effects, and confirmation/irreversibility. Critical safety information (cascades, confirmation) is front-loaded appropriately for a destructive operation.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive 4-parameter tool with no output schema and zero annotation coverage, description covers essential domain logic (cascading deletes, confirmation gate, permanence). Could improve by indicating success/failure response patterns.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. Description compensates partially by explaining the 'confirm' parameter's validation requirement ('Requires confirm: true'), but does not clarify orgSlug, agendaId, or apiKey semantics. Baseline compensation for critical parameter.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb 'Delete' + resource 'public agenda' + scope 'permanently', clearly distinguishing from sibling tools agendas_create, agendas_get, agendas_list, and agendas_update.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States explicit prerequisite 'Requires confirm: true' and warns 'Cannot be undone', guiding safe usage. Lacks explicit naming of alternatives (e.g., agendas_update for modifications) but irreversibility warning strongly implies caution criteria.
agendas_get (grade A)
Get complete details of a public agenda by ID. Returns all configuration including booking flow (service_first, provider_first, auto), selection modes, assignment strategy, booking policies (advance booking, same-day, on-demand), cancellation policies, privacy settings, linked provider/service, and session count. Use before agendas_update to inspect current settings.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
| agendaId | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations and no output schema, the description comprehensively discloses return content: booking flow types (service_first, provider_first, auto), booking policies (advance, same-day, on-demand), cancellation policies, privacy settings, and session count. Also clarifies this operates on 'public' agendas, disclosing access scope.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded purpose statement followed by comprehensive return value enumeration and usage guideline. The second sentence is lengthy but every clause documents a specific configuration field that would otherwise be unknown without an output schema. No redundant or filler text.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex agenda domain (multiple sibling tools) and absence of output schema, the description admirably documents the complete return payload structure. However, the lack of parameter semantics for orgSlug and apiKey (with 0% schema coverage) leaves a documentation gap for required inputs.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. Description mentions 'by ID' implying agendaId, and 'public agenda' hints at orgSlug context, but provides no explanation for apiKey (the third parameter). With zero schema coverage, the description only partially compensates for missing parameter documentation.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Get') + resource ('agenda') + scope ('complete details...by ID'). Explicitly distinguishes from sibling agendas_update by stating this should be used 'before agendas_update to inspect current settings', clarifying the read vs. write relationship.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('Use before agendas_update to inspect current settings'), establishing the prerequisite relationship with the update tool. Lacks explicit 'when not to use' exclusions, though the sibling contrast is implied.
agendas_list (grade B)
List public agendas for an organization. Returns agendas with their provider, service, and session counts.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses return payload structure ('provider, service, and session counts'), but fails to mention safety profile (read-only vs mutation), pagination behavior, or error scenarios. The term 'public' hints at filtering logic but isn't elaborated.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is optimally concise with two sentences and zero redundancy. The first sentence establishes the operation, and the second adds value by previewing the return structure without wasting words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description partially compensates by describing return fields. However, it omits pagination details, authentication requirements for the apiKey parameter, and the distinction between public and private agendas, leaving operational gaps.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate fully. While 'for an organization' implicitly clarifies the 'orgSlug' parameter, the 'apiKey' parameter is completely undocumented in the text, leaving critical authentication semantics unexplained.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List'), resource ('public agendas'), and scope ('for an organization'). However, it does not explicitly differentiate from the sibling 'agendas_create' tool or clarify what makes an agenda 'public' versus private/internal.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'agendas_create' or other listing tools. There are no prerequisites mentioned (e.g., needing the orgSlug beforehand) or exclusions.
agendas_update (grade A)
Update a public agenda’s configuration. Partial update — only provided fields are changed. Supports modifying: title, description, visibility (isPublic/isActive), booking flow order (service_first/provider_first/auto), selection modes for service and provider (required/optional/auto/hidden), assignment strategy (manual/round_robin/least_booked/most_available/priority/random), booking policies (min/max advance, same-day, on-demand), cancellation policies (type, deadline, penalty), privacy flags (showSessions, showClientNames, showProviderUtilization, showProviderList), prepayment, and provider/service linking. When a provider is assigned, their services are auto-linked to the agenda.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | No | | |
| title | No | | |
| apiKey | No | | |
| orgSlug | Yes | | |
| agendaId | Yes | | |
| category | No | | |
| isActive | No | | |
| isPublic | No | | |
| expiresAt | No | | |
| isDefault | No | | |
| description | No | | |
| proveedorId | No | | |
| showSessions | No | | |
| allowComments | No | | |
| showClientNames | No | | |
| bookingFlowOrder | No | | |
| defaultDurations | No | | |
| showProviderList | No | | |
| assignmentStrategy | No | | |
| requiresPrepayment | No | | |
| allowGroupSelection | No | | |
| allowSameDayBooking | No | | |
| allowOnDemandBooking | No | | |
| serviceSelectionMode | No | | |
| maxAdvanceBookingDays | No | | |
| providerSelectionMode | No | | |
| cancellationPolicyType | No | | |
| cancellationPenaltyType | No | | |
| showProviderUtilization | No | | |
| cancellationPenaltyValue | No | | |
| minAdvanceBookingMinutes | No | | |
| cancellationDeadlineMinutes | No | | |
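Because the update is partial, a safe payload contains only the fields being changed. The sketch below (invented values) drops None entries so untouched settings are never sent.

```python
def build_update_args(org_slug: str, agenda_id: str, **changes) -> dict:
    """agendas_update payload: only fields present in the payload are changed,
    so omit anything you do not intend to modify."""
    payload = {"orgSlug": org_slug, "agendaId": agenda_id}
    # Drop None values so unset options are left untouched on the server.
    payload.update({k: v for k, v in changes.items() if v is not None})
    return payload

args = build_update_args("acme-clinic", "ag_42", isPublic=True, title=None)
# 'title' is omitted from the payload, so it is left unchanged.
```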
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full disclosure burden. It successfully explains the partial update semantics and the auto-linking side effect when providers are assigned. However, it lacks disclosure of error behavior, authorization requirements, or what happens if the agendaId doesn't exist.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose and partial-update warning. The long enumerated list of configurable fields is information-dense and justified given the 0% schema coverage and 32 parameters. No wasted words, though necessarily lengthy. Structure flows from general behavior to specific field categories to side effects.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex 32-parameter mutation tool with no output schema, the description covers the domain logic thoroughly (booking flows, cancellation policies, assignment strategies). It explains the partial update contract and auto-linking behavior. Only missing generic operational details like error states or return value structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 32 parameters, the description compensates well by categorizing parameters (visibility, booking flow order, selection modes, assignment strategy, etc.) and listing valid enum values for complex fields. It misses documenting some parameters (slug, apiKey, category, expiresAt, allowComments, defaultDurations, allowGroupSelection, cancellation penalty specifics), but covers the majority of business-logic-critical fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Update a public agenda's configuration' providing a specific verb (update) and resource (agenda configuration). It clearly distinguishes from siblings (agendas_create, agendas_delete, agendas_get) through the update verb and by specifying this handles existing agenda configuration changes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Partial update — only provided fields are changed' which is critical usage guidance for PATCH-like semantics. Also notes the side effect that 'When a provider is assigned, their services are auto-linked.' Missing explicit guidance on when to use vs. agendas_create, but the partial update instruction provides clear behavioral context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
availability_get_provider_schedule (Grade: A)
Get the configured weekly availability schedule for a provider (not free slots, but the base configuration). Use admin_set_availability to modify.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| orgSlug | Yes | ||
| providerId | Yes | ||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully clarifies this returns persistent configuration data (base weekly schedule) rather than computed availability. However, lacks details on authentication requirements (despite apiKey param), rate limits, or whether the schedule is returned as time blocks or rules.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences with zero waste. Front-loads the core operation, immediately clarifies the scope distinction (not free slots), and ends with the mutating counterpart reference. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read operation with 3 simple parameters and no output schema, the description successfully explains the conceptual model (configuration vs computed slots) and operational context. Could improve by noting the required org/provider hierarchy explicitly, but adequately covers the tool's function given its simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage requiring description to compensate. Description implies 'provider' maps to providerId and organizational context to orgSlug, but does not explicitly document any parameters or the apiKey authentication mechanism. Provides minimal semantic context for the arguments.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: states 'Get the configured weekly availability schedule' (verb + resource), explicitly distinguishes from sibling 'availability_get_slots' by clarifying 'not free slots, but the base configuration', and identifies the target entity (provider).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit alternative 'Use admin_set_availability to modify' clarifying this tool is read-only relative to its mutating sibling. The 'not free slots' caveat helps guide selection vs 'availability_get_slots', though could more explicitly state when to prefer each.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
availability_get_slots (Grade: A)
Query available time slots within a date range. Agenda-aware: without clientId, filters by the org default public agenda — each org decides which services to expose. With clientId, resolves the client titular provider and returns their full service catalog. Five modes: (1) orgSlug only — slots from the public agenda grouped by service, provider auto-assigned at booking; (2) orgSlug + clientId — resolves titular provider if set, falls back to agenda; (3) orgSlug + agendaId — slots for a specific agenda; (4) serviceId — slots for all providers assigned to that service; (5) providerId — slots for a specific provider. Modes 1–3 hide provider details. Use before booking_create.
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | ||
| apiKey | No | ||
| dateTo | No | ||
| orgSlug | Yes | ||
| agendaId | No | ||
| clientId | No | ||
| dateFrom | No | ||
| duration | No | ||
| timezone | No | ||
| serviceId | No | ||
| providerId | No | ||
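The five modes correspond to which optional identifiers accompany the required orgSlug. A small Python sketch with hypothetical IDs, keyed by mode number:

```python
# Hypothetical argument sets for the five availability_get_slots modes.
# All IDs and the org slug are illustrative values.
modes = {
    1: {"orgSlug": "acme-clinic"},                         # public agenda, grouped by service
    2: {"orgSlug": "acme-clinic", "clientId": "cli_7"},    # titular provider, agenda fallback
    3: {"orgSlug": "acme-clinic", "agendaId": "agd_123"},  # one specific agenda
    4: {"orgSlug": "acme-clinic", "serviceId": "svc_9"},   # all providers assigned to a service
    5: {"orgSlug": "acme-clinic", "providerId": "prov_2"}, # one specific provider
}

# orgSlug is the only required parameter in every mode.
assert all("orgSlug" in args for args in modes.values())

# Per the description, modes 1-3 hide provider details in the response.
PROVIDER_HIDDEN_MODES = {1, 2, 3}
```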
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and delivers substantial behavioral context: it discloses the 'Agenda-aware' filtering logic, explains that modes 1-3 hide provider details while others expose them, and notes that providers are auto-assigned in mode 1. Missing minor details on rate limits or error states.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense yet well-structured: front-loaded with core purpose, followed by agenda logic, enumerated modes (1-5), and closing usage hint. Every sentence earns its place; no redundant text despite covering complex multi-mode behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 11 parameters, zero schema coverage, and no output schema, the description adequately covers the primary business logic through the five modes. However, it lacks description of the return format (what constitutes a 'slot' object) or error handling, which would be necessary for complete agent autonomy.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage across 11 parameters, the description compensates effectively by explaining the five operational modes, which map to parameter combinations (orgSlug, clientId, agendaId, serviceId, providerId). It implies date range usage but omits specifics on apiKey, timezone, duration, and the distinction between the 'date' and 'dateFrom'/'dateTo' fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb-object ('Query available time slots') and immediately qualifies the scope ('within a date range'). It distinguishes this tool from siblings by detailing its unique 'Agenda-aware' logic and five distinct operational modes, clearly differentiating it from booking_create (mentioned as a follow-up step) and availability_get_provider_schedule.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit sequencing guidance ('Use before booking_create') and detailed parameter-combination logic through the five modes. However, it lacks explicit contrast with the sibling tool 'availability_get_provider_schedule' or guidance on when to prefer one over the other, and omits prerequisite warnings (e.g., apiKey requirements).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
booking_cancel (Grade: B)
Cancel an existing session. Optionally applies cancellation policy charges. Requires confirm: true.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| reason | Yes | ||
| confirm | Yes | ||
| orgSlug | Yes | ||
| sessionId | Yes | ||
| cancelledBy | No | ||
| applyCancellationPolicy | No | ||
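Since the call is gated on confirm: true, a cancellation payload always carries the confirmation flag plus the other required fields. A hedged sketch with hypothetical identifiers; the cancelledBy value is an assumption, as its allowed values are undocumented:

```python
# Hypothetical booking_cancel payload; identifiers are illustrative.
payload = {
    "orgSlug": "acme-clinic",
    "sessionId": "ses_42",
    "reason": "Client requested cancellation",
    "confirm": True,                  # safety gate: required by the tool
    "cancelledBy": "client",          # optional; allowed values undocumented
    "applyCancellationPolicy": True,  # optional: may trigger policy charges
}

# All four required fields, with the confirmation flag set to True.
assert {"orgSlug", "sessionId", "reason", "confirm"} <= payload.keys()
assert payload["confirm"] is True
```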
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses important behavioral traits: the optional application of cancellation policy charges (financial side effect) and the confirmation requirement (safety mechanism). However, it omits whether the action is reversible, what notifications are triggered, or the exact consequence to the session record.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three short, front-loaded sentences with zero redundancy. Each sentence conveys distinct, essential information (action, optional charges, confirmation requirement) without filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive operation with 7 undocumented parameters and no output schema, the description covers the minimum (action, safety confirmation, policy option) but leaves significant gaps. It fails to explain the 'cancelledBy' enum semantics, the purpose of the 'reason' field, or what success/failure looks like.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage across 7 parameters, the description inadequately compensates. It explicitly references 'confirm' and 'applyCancellationPolicy' (2 parameters) and implies 'sessionId', but provides no context for critical required fields like 'reason', 'orgSlug', the 'cancelledBy' enum values, or 'apiKey'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core action ('Cancel an existing session') with a specific verb and resource. It distinguishes this from creation-focused siblings like booking_create, though it doesn't explicitly clarify the difference between this and booking_update_status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides one critical usage constraint ('Requires confirm: true'), which acts as a safety check. However, it lacks explicit guidance on when to use this tool versus booking_update_status or prerequisites like authentication requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
booking_create (Grade: A)
Create a new session/appointment for a client. providerId is optional — if omitted, the system auto-assigns a provider using the agenda assignment strategy (round_robin, least_booked, etc.). When a client has a titular provider, that provider is preferred automatically. Without providerId and without publicAgendaId, the org default public agenda is used. Preconditions: (1) service must exist and be active, (2) client must exist (use client_create first). Use availability_get_slots to find valid time slots before calling this. Set retroactive: true to register past sessions (skips slot validation, sets status to completed by default). Use autoCharge: true with retroactive to auto-generate the charge. Retroactive sessions are tagged with self_declared provenance. Max 365 days in the past. Past dates are auto-detected as retroactive — the retroactive flag is optional (system infers it from scheduledAt).
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | ||
| apiKey | No | ||
| orgSlug | Yes | ||
| clientId | Yes | ||
| duration | No | ||
| modalidad | No | ||
| serviceId | Yes | ||
| autoCharge | No | ||
| providerId | No | ||
| retroactive | No | ||
| scheduledAt | Yes | ||
| idempotencyKey | No | ||
| publicAgendaId | No | ||
| retroactiveStatus | No | ||
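The retroactive auto-detection rule (a past scheduledAt implies retroactive, capped at 365 days back) can be sketched as a client-side pre-check. This is an illustrative reimplementation of the stated rule, not the server's actual logic:

```python
from datetime import datetime, timedelta, timezone

def infer_retroactive(scheduled_at: datetime, now: datetime) -> bool:
    """Client-side sketch of the stated rule: past dates are auto-detected
    as retroactive, with a maximum of 365 days in the past."""
    if scheduled_at >= now:
        return False
    if now - scheduled_at > timedelta(days=365):
        raise ValueError("retroactive sessions may be at most 365 days in the past")
    return True

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
assert infer_retroactive(datetime(2025, 6, 2, tzinfo=timezone.utc), now) is False
assert infer_retroactive(datetime(2025, 5, 1, tzinfo=timezone.utc), now) is True
```

Since the system infers retroactivity from scheduledAt, an agent never strictly needs to pass the retroactive flag, but setting it explicitly documents intent.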
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden and adds substantial context: auto-assignment strategies (round_robin, least_booked), titular provider preference logic, default agenda fallbacks, and strict preconditions. Minor gap: doesn't explicitly confirm mutation semantics or idempotency guarantees despite the idempotencyKey parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Dense, front-loaded sentences with no waste: purpose first, then provider assignment logic, agenda fallback rules, preconditions and workflow, and retroactive-session handling. No redundant phrasing despite covering complex conditional logic.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given high complexity (14 params, 0% schema coverage, no annotations) and lack of output schema, description thoroughly covers input validation logic and workflow integration. Minor deduction for not describing return values or success confirmation, which would be helpful without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. Description compensates for critical business logic parameters (providerId, publicAgendaId, clientId, serviceId) explaining optional vs required behaviors and system defaults. However, leaves 6 parameters (apiKey, orgSlug, duration, notes, idempotencyKey, modalidad) completely undocumented, creating gaps for auth, timing, and retry semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb-resource pair ('Create a new session/appointment') and clearly distinguishes from sibling tools by describing singular booking creation (vs. booking_create_recurring) and referencing the booking lifecycle (cancel, reschedule) via workflow guidance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly names prerequisite tools ('use client_create first', 'service must exist') and specifies the correct sibling to call beforehand ('Use availability_get_slots to find valid time slots before calling this'), providing clear when-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
booking_create_recurring (Grade: A)
Create recurring sessions (e.g. weekly therapy). Generates multiple individual sessions linked by a recurrence series ID. Max 52 occurrences.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | ||
| apiKey | No | ||
| confirm | Yes | ||
| orgSlug | Yes | ||
| clientId | Yes | ||
| timezone | Yes | ||
| serviceId | No | ||
| providerId | Yes | ||
| recurrence | Yes | ||
| skipConflicts | No | ||
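The recurrence parameter is a nested object whose exact shape is undocumented; the sketch below assumes a simple frequency/count form purely for illustration, and enforces the stated 52-occurrence cap:

```python
# Hypothetical booking_create_recurring payload. The shape of `recurrence`
# is an assumption; only the 52-occurrence cap comes from the description.
MAX_OCCURRENCES = 52

payload = {
    "orgSlug": "acme-clinic",
    "clientId": "cli_7",
    "providerId": "prov_2",
    "timezone": "America/Santiago",
    "confirm": True,                                  # required safety gate
    "recurrence": {"frequency": "weekly", "count": 12},
    "skipConflicts": True,                            # optional conflict handling
}

assert payload["recurrence"]["count"] <= MAX_OCCURRENCES
assert {"confirm", "orgSlug", "clientId", "timezone",
        "providerId", "recurrence"} <= payload.keys()
```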
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It effectively communicates that the tool 'Generates multiple individual sessions linked by a recurrence series ID' (critical behavioral trait) and discloses the 'Max 52 occurrences' constraint. It misses idempotency and conflict handling details, but covers the essential mutation behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The three-sentence structure is perfectly efficient with zero waste: sentence one defines purpose with example, sentence two explains the key behavioral mechanism (series ID generation), and sentence three states the critical constraint. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 10 parameters, 0% schema coverage, nested recurrence objects, and no output schema, the description is insufficient. It explains the recurrence concept but omits documentation for the majority of parameters, return structure, and conflict resolution behavior (despite 'skipConflicts' existing in the schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage and 10 parameters including complex nested objects (recurrence), the description fails to compensate adequately. While it mentions the 52-occurrence limit (mapping to endCondition.count), it provides no semantics for required fields like 'confirm', 'orgSlug', 'clientId', 'providerId', or 'skipConflicts', leaving most parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Create recurring sessions') and resource, with the parenthetical example '(e.g. weekly therapy)' effectively distinguishing it from the sibling tool 'booking_create' which handles single sessions. It specifies the key differentiator: generating multiple linked sessions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage through the 'recurring' qualifier and 'weekly therapy' example, it lacks explicit guidance on when to select this tool versus the sibling 'booking_create' for single sessions. No prerequisites or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
booking_get (Grade: B)
Get complete details of a session/appointment by its ID, including client, provider, service, financial, and delivery proof information.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| orgSlug | Yes | ||
| sessionId | Yes | ||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable context by enumerating return data categories (client, provider, service, financial, delivery proof), which compensates partially for the missing output schema. However, it lacks disclosure of safety traits (read-only status), error behaviors, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, efficiently structured sentence that front-loads the action ('Get complete details') and appends specific return value categories without waste. Every clause earns its place by distinguishing scope or content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 params, no nested objects) and lack of annotations/output schema, the description adequately covers the return payload composition but remains incomplete regarding parameter documentation and operational safety characteristics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. It only implicitly references sessionId via 'by its ID', but provides no semantics for required parameter orgSlug or optional apiKey, leaving critical parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'complete details of a session/appointment by its ID', using a specific verb and resource. It distinguishes from siblings like booking_list (which lacks ID filtering) and client_get/provider_get (different resources), though it doesn't explicitly contrast with other booking operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (e.g., 'use booking_list if you don't have the session ID'). No prerequisites or conditions are mentioned, leaving the agent to infer usage from the parameter schema alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
booking_list (Grade: B)
List sessions for an organization with filters by provider, client, service, status, and date range. Supports cursor-based pagination.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| apiKey | No | ||
| cursor | No | ||
| dateTo | No | ||
| status | No | ||
| orderBy | No | ||
| orgSlug | Yes | ||
| clientId | No | ||
| dateFrom | No | ||
| serviceId | No | ||
| providerId | No | ||
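Cursor-based pagination usually means repeating the call with the cursor returned by the previous page. The loop below sketches that pattern; since the tool has no output schema, the sessions and nextCursor response fields are assumptions, and call_tool stands in for whatever transport invokes booking_list:

```python
def list_all_sessions(call_tool, org_slug: str, **filters):
    """Drain a cursor-paginated listing. `call_tool` is any callable that
    invokes booking_list with an argument dict and returns the response dict.
    The `sessions` and `nextCursor` field names are assumptions."""
    sessions, cursor = [], None
    while True:
        args = {"orgSlug": org_slug, **filters}
        if cursor:
            args["cursor"] = cursor
        page = call_tool(args)
        sessions.extend(page["sessions"])
        cursor = page.get("nextCursor")
        if not cursor:
            return sessions

# Fake transport for illustration: two pages of results.
pages = iter([
    {"sessions": [{"id": "ses_1"}, {"id": "ses_2"}], "nextCursor": "c2"},
    {"sessions": [{"id": "ses_3"}]},
])
result = list_all_sessions(lambda args: next(pages), "acme-clinic", status="completed")
assert [s["id"] for s in result] == ["ses_1", "ses_2", "ses_3"]
```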
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses cursor-based pagination behavior, which is valuable. However, it fails to clarify read-only safety, rate limits, return value structure, or what occurs when filters are omitted (e.g., returns all sessions vs. error).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. Front-loaded with core functionality (listing sessions with filters) followed by pagination support. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a basic listing tool but incomplete given the complexity (11 parameters, no output schema). Covers primary filtering and pagination but lacks required parameter emphasis, return value description, and error conditions expected for a query tool with numerous filter combinations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description partially compensates by enumerating the filterable fields (provider, client, service, status, date range). However, it omits semantics for 5 other parameters including the required 'orgSlug', 'apiKey' (authentication), 'cursor' (pagination mechanics), 'limit', and 'orderBy'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('List') and resource ('sessions'), with explicit mention of available filters. However, it does not explicitly distinguish from sibling 'booking_get' (single retrieval vs. list) or clarify when to use this versus other booking tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'booking_get' for single-record retrieval. Does not mention prerequisites like the required 'orgSlug' parameter or authentication requirements despite 'apiKey' being present in the schema.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
booking_reschedule (Grade: A)
Reschedule a session to a new time. Cancels the original and creates a new one. Requires confirm: true.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| reason | No | ||
| confirm | Yes | ||
| orgSlug | Yes | ||
| sessionId | Yes | ||
| newProviderId | No | ||
| newScheduledAt | Yes | ||
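A reschedule payload mirrors booking_cancel's confirmation gate while adding the new time and, optionally, a new provider. Hypothetical values throughout; the timestamp format is an assumption, as it is undocumented:

```python
# Hypothetical booking_reschedule payload. The operation is composite:
# the original session is cancelled and a replacement is created.
payload = {
    "orgSlug": "acme-clinic",
    "sessionId": "ses_42",
    "newScheduledAt": "2025-06-10T15:00:00Z",  # timestamp format is an assumption
    "confirm": True,                           # required safety gate
    "newProviderId": "prov_3",                 # optional: switch provider too
    "reason": "Client asked to move the appointment",
}

assert {"orgSlug", "sessionId", "newScheduledAt", "confirm"} <= payload.keys()
```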
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden and successfully discloses the destructive-composite behavior (cancel+create rather than true update) and confirmation requirement. Could improve by mentioning idempotency or error scenarios.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with zero waste: purpose (sentence 1), mechanism (sentence 2), and constraint (sentence 3). Information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for basic invocation given the complexity of 7 parameters and mutation behavior, but gaps remain in parameter documentation and expected return value given no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage, description must compensate for 7 parameters but only implicitly covers newScheduledAt ('new time') and explicitly covers confirm. Critical parameters like orgSlug, sessionId, and newProviderId remain undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (reschedule) and resource (session), and distinguishes from siblings by explaining the composite nature (cancels original + creates new) versus simple updates or cancellations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides the critical safety constraint 'Requires confirm: true', but lacks explicit guidance on when to use this versus booking_update_status or booking_cancel (e.g., time changes vs status changes).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
booking_update_status (Grade A)
Advance a session through the Servicialo lifecycle: confirm, start, complete, deliver, or mark as no-show.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | ||
| action | Yes | ||
| apiKey | No | ||
| orgSlug | Yes | ||
| sessionId | Yes | ||
| noShowType | No | ||
| deliveryType | No |
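Since the schema documents no parameter formats, the conditional-parameter coupling is easiest to convey as a sketch. The Python dicts below show hypothetical `tools/call` argument payloads; every value, and the assumption that `noShowType`/`deliveryType` pair with their matching actions, is a guess rather than documented behavior:

```python
# Hypothetical arguments for booking_update_status. All values are
# illustrative; the schema documents no formats or enums.
args = {
    "orgSlug": "acme-spa",     # assumed: organization slug
    "sessionId": "sess_123",   # assumed ID shape
    "action": "no_show",       # one of: confirm, start, complete, deliver, no_show
    "noShowType": "client",    # presumably required only when action == "no_show"
    "notes": "Client did not arrive within 15 minutes.",
}

# deliveryType would presumably accompany action == "deliver" instead:
deliver_args = {
    "orgSlug": "acme-spa",
    "sessionId": "sess_123",
    "action": "deliver",
    "deliveryType": "digital",  # assumed value
}
```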
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the lifecycle nature and valid state transitions, implying a state machine. However, it omits mutation safety details, error handling for invalid transitions, and whether operations are idempotent or reversible.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. Front-loaded with the core verb ('Advance'), followed by scope ('Servicialo lifecycle'), and specific enumerated actions. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Insufficient for a state machine tool with conditional parameters (noShowType/deliveryType dependent on action) and no output schema or annotations. Missing: return value description, conditional parameter logic, and error states for invalid transitions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, requiring heavy description compensation. While it explains the action enum values (confirm, start, etc.), it completely ignores six other parameters including critical conditional fields like noShowType and deliveryType, failing to explain they are required only for specific actions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool advances a session through the Servicialo lifecycle with specific verbs (confirm, start, complete, deliver, no-show). It effectively distinguishes from sibling tools like booking_cancel and booking_reschedule by focusing on state progression rather than termination or time modification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Lists the five valid lifecycle actions explicitly, providing clear context for when to use this tool (state transitions). However, it lacks explicit exclusions or guidance on when to use booking_cancel versus the no_show action, or prerequisites for valid state transitions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cierre_cerrar_org (Grade A)
Close the organizational period. Requires ALL active clients with historialCompleto=true to be closed first. Freezes the period.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| orgSlug | Yes | ||
| periodo | Yes |
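A minimal sketch of the arguments, assuming `periodo` is a YYYY-MM month string (the schema does not say):

```python
# Hypothetical arguments for cierre_cerrar_org; values are illustrative.
args = {
    "orgSlug": "acme-spa",  # assumed organization slug
    "periodo": "2024-05",   # assumed YYYY-MM format; not documented
}
# Per the description, this call should fail until every active client
# with historialCompleto=true has a closing for this period.
```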
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses critical side effect ('Freezes the period') and prerequisite check, but lacks error behavior or idempotency details given no annotations provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tight sentences with zero redundancy; front-loaded with action, followed by prerequisite and side effect.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for basic invocation but lacks workflow context (relationship to cierre_evaluar_org) and error state details for a destructive/freezing operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage and the description fails to explain parameter semantics, formats, or relationships (e.g., what 'periodo' format is expected).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Close') and resource ('organizational period') clearly stated; distinguishes from sibling client-level cierre_* tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states hard prerequisite (ALL active clients with historialCompleto=true must be closed first) and implies this is a finalizing action.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cierre_crear_cliente (Grade A)
Create a client monthly closing (immutable financial snapshot). Requires historialCompleto=true on the client. One closing per client per period.
| Name | Required | Description | Default |
|---|---|---|---|
| notas | No | ||
| apiKey | No | ||
| orgSlug | Yes | ||
| periodo | Yes | ||
| clientId | Yes |
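A hypothetical payload; the `periodo` format and ID shapes are assumptions, since no parameter is documented:

```python
# Hypothetical arguments for cierre_crear_cliente.
args = {
    "orgSlug": "acme-spa",   # assumed organization slug
    "periodo": "2024-05",    # assumed YYYY-MM format
    "clientId": "cli_42",    # assumed ID shape
    "notas": "Cierre mensual de prueba",  # optional free-text notes
}
# The closing is immutable and unique per (clientId, periodo); a second
# call with the same pair should presumably be rejected.
```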
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses immutability and uniqueness constraints (one per client per period), which are critical behavioral traits; no annotations exist to contradict them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three high-value sentences with zero fluff; front-loaded with the action verb and the critical immutability property.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers business logic well but lacks a return value description (no output schema exists) and the detailed parameter semantics needed given the empty schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage, the description only partially compensates by implying that periodo defines the time period; it fails to explain orgSlug, notas, or the expected periodo format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Explicitly states that it creates a client monthly closing (specific verb and resource), and distinguishes it from org-level siblings (cierre_cerrar_org) and other operations via 'immutable financial snapshot'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the prerequisite (historialCompleto=true) and the constraint (one per period), but doesn't mention cierre_preview_cliente as an alternative for checking existing closings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cierre_distribuir_utilidades (Grade A)
Distribute profits for a closed period. Freezes the current period and all prior open periods. Requires the period to be organizationally closed first. Requires confirm: true.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| confirm | Yes | ||
| orgSlug | Yes | ||
| periodo | Yes | ||
| montoDistribuido | Yes |
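A sketch of the arguments; the `periodo` format and the unit of `montoDistribuido` are assumptions:

```python
# Hypothetical arguments for cierre_distribuir_utilidades.
args = {
    "orgSlug": "acme-spa",
    "periodo": "2024-05",         # assumed YYYY-MM format
    "montoDistribuido": 1500000,  # assumed: amount in the org's currency; unit undocumented
    "confirm": True,              # required; per the description the call needs confirm: true
}
```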
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses the critical side effect (freezes current and prior periods) and the confirmation requirement; no annotations exist to contradict them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, all high-value: purpose, side effects, prerequisites, and parameter constraint; no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a financial operation with side effects; covers freezing behavior and confirmation needs, though could note success/failure output given no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Explains the 'confirm' parameter semantics given 0% schema coverage, but leaves orgSlug, periodo, and montoDistribuido unexplained despite lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb (distribute) and resource (profits), clearly distinguishes from sibling cierre tools (e.g., vs listar_utilidades which only lists, vs cerrar_org which closes).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states prerequisites (period must be organizationally closed first) and required confirmation, though could explicitly reference prerequisite tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cierre_eliminar_cliente (Grade A)
Delete (reopen) a client closing. Only allowed if the organizational period is not frozen. Requires confirm: true.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| confirm | Yes | ||
| orgSlug | Yes | ||
| cierreId | Yes |
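A hypothetical invocation; the `cierreId` shape is an assumption (presumably the ID returned when the closing was created):

```python
# Hypothetical arguments for cierre_eliminar_cliente.
args = {
    "orgSlug": "acme-spa",
    "cierreId": "cierre_99",  # assumed: ID of an existing client closing
    "confirm": True,          # required safety flag per the description
}
# Should be rejected if the organizational period is already frozen.
```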
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses business logic constraints (frozen period restriction) and confirmation requirement not visible in schema, though could clarify what 'reopen' means for the client.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise with three high-value sentences; action is front-loaded with constraints following logically.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers key constraints for this deletion operation despite lacking parameter descriptions; sufficient for correct invocation given standard ID patterns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds critical semantic for 'confirm' parameter (must be true) but fails to compensate for 0% schema coverage for orgSlug and cierreId identifiers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (delete/reopen) and resource (client closing), though 'Delete (reopen)' phrasing is slightly ambiguous; distinguishes from sibling cierre tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states critical constraint (only if period not frozen) and confirmation requirement, effectively guiding when tool is invocable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cierre_evaluar_org (Grade B)
Evaluate organizational closing readiness for a period. Returns: active clients, closed count, excluded count, pending count, completion percentage, and whether closing is possible.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| orgSlug | Yes | ||
| periodo | Yes |
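Because no output schema exists, the return shape can only be inferred from the description. A sketch with guessed field names:

```python
# Hypothetical arguments for cierre_evaluar_org.
eval_args = {"orgSlug": "acme-spa", "periodo": "2024-05"}  # periodo format assumed

# Hypothetical response, with field names inferred from the description
# (no output schema exists, so these names are guesses):
example_result = {
    "activeClients": 10,
    "closedCount": 8,
    "excludedCount": 1,
    "pendingCount": 1,
    "completionPercentage": 90.0,
    "canClose": False,
}

if example_result["canClose"]:
    pass  # only then call cierre_cerrar_org with the same orgSlug/periodo
```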
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations exist, description adequately discloses return values (counts, percentage, possibility flag) but omits side effects, idempotency, or auth requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with action verb, two concise sentences covering purpose and returns, no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers return values (necessary given no output schema) but lacks domain context about what 'closing' entails and its position in the closing workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage and description fails to compensate by not explaining 'periodo' format, 'orgSlug' semantics, or optional 'apiKey' usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it evaluates organizational closing readiness for a period and distinguishes from sibling 'cierre_cerrar_org' by focusing on evaluation rather than execution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this evaluation tool versus the actual closing tool (cierre_cerrar_org) or other cierre module siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cierre_listar_clientes (Grade C)
List client closings for an organization. Filter by period and/or client.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| orgSlug | Yes | ||
| periodo | No | ||
| clientId | No |
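A hypothetical filtered call; both filters are optional and their formats are assumptions:

```python
# Hypothetical arguments for cierre_listar_clientes.
args = {
    "orgSlug": "acme-spa",
    "periodo": "2024-05",    # assumed YYYY-MM format; omit to list all periods
    # "clientId": "cli_42",  # optionally narrow to a single client
}
```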
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description fails to disclose return structure, pagination behavior, side effects, or what constitutes a 'closing' record.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely brief (13 words) and front-loaded, though arguably too terse given the complexity of the cierre module.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters, complex sibling ecosystem (8 cierre_* tools), and no output schema, description inadequately explains the domain concept of 'cierre' (settlement/closing).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage, description partially compensates by mapping 'period' and 'client' to filtering intent, but omits required orgSlug semantics and apiKey purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States 'List client closings for an organization' but 'closings' is ambiguous (financial settlement? account closure?) and fails to distinguish from sibling cierre_listar_utilidades or client_list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions filtering capability ('Filter by period and/or client') but provides no guidance on when to use this versus cierre_preview_cliente or cierre_evaluar_org.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cierre_listar_utilidades (Grade B)
List retained earnings (utilidades retenidas) for an organization. Returns per-period records with accumulated totals: ingresos, costos, utilidadNeta, distribuido, retenido.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| orgSlug | Yes |
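A sketch of a call and a guessed per-period record; the field names come from the description, but the accounting relationship between them is an assumption:

```python
# Hypothetical arguments for cierre_listar_utilidades.
args = {"orgSlug": "acme-spa"}

# Hypothetical per-period record; field names are taken from the
# description (no output schema exists), values are illustrative.
example_record = {
    "periodo": "2024-05",
    "ingresos": 2000000,
    "costos": 1200000,
    "utilidadNeta": 800000,   # presumably ingresos - costos
    "distribuido": 500000,
    "retenido": 300000,       # presumably utilidadNeta - distribuido
}
```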
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes return structure (per-period records with specific fields) which is necessary given no output schema exists, but omits side effects, rate limits, or read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no redundancy; immediately states purpose then return values.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers return values given no output schema exists, though parameter documentation is missing for the simple 2-parameter input.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fails to explain parameter semantics beyond implicitly referencing 'organization' for orgSlug; apiKey is unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it lists retained earnings with specific financial return fields (ingresos, costos, etc.), distinguishing it from sibling client-management and distribution tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this vs alternatives like cierre_distribuir_utilidades or cierre_evaluar_org; only states what the tool does.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cierre_preview_cliente (Grade A)
Preview the financial snapshot for a client in a period WITHOUT creating the closing. Returns totals for ventas, cobros, pagos, sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| orgSlug | Yes | ||
| periodo | Yes | ||
| clientId | Yes |
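A hypothetical read-only call; parameter formats are assumptions:

```python
# Hypothetical arguments for cierre_preview_cliente.
args = {
    "orgSlug": "acme-spa",
    "periodo": "2024-05",   # assumed YYYY-MM format
    "clientId": "cli_42",   # assumed ID shape
}
# Read-only per the description: returns totals (ventas, cobros, pagos,
# sessions) without creating the closing, so it is safe to call repeatedly.
```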
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations exist, description adequately conveys it's safe (preview only) and lists specific return values (ventas, cobros, pagos, sessions) compensating for missing output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with action and critical distinction; no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers return values adequately given lack of output schema and explains domain-specific terms (ventas, cobros, pagos, sessions), though parameter documentation is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, description fails to compensate by explaining parameter formats (e.g., periodo format, clientId format) or semantics beyond property names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific action (preview financial snapshot) and distinguishes from siblings via 'WITHOUT creating the closing', but 'financial snapshot' could be more specific about being a pre-closing report.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies read-only use case vs committing tools via 'WITHOUT creating', but lacks explicit workflow guidance (e.g., 'use before cierre_crear_cliente').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
client_create (Grade A)
Create a new client in the organization. If a Person with the same email exists, it will be linked (not duplicated).
| Name | Required | Description | Default |
|---|---|---|---|
| rut | No | ||
| name | Yes | ||
| email | No | ||
| notes | No | ||
| phone | No | ||
| apiKey | No | ||
| comuna | No | ||
| orgSlug | Yes | ||
| lastName | Yes | ||
| direccion | No | ||
| idempotencyKey | No | ||
| proveedorTitularId | No |
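A sketch of a creation payload; every value, including the `idempotencyKey` convention, is illustrative rather than documented:

```python
# Hypothetical arguments for client_create. Only orgSlug, name, and
# lastName are required; all value formats are assumptions.
args = {
    "orgSlug": "acme-spa",
    "name": "Maria",
    "lastName": "Perez",
    "email": "maria@example.com",  # triggers linking to an existing Person, not duplication
    "phone": "+56 9 1234 5678",    # assumed format; schema documents none
    "idempotencyKey": "create-maria-2024-05-01",  # assumed: guards duplicate retries
}
```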
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds critical behavioral context: email-based deduplication logic ('linked, not duplicated') and implies idempotent-like behavior. Missing error handling and return value details, but covers the key business rule.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence establishes core purpose; second sentence delivers critical deduplication behavior. Perfectly front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 12 undocumented parameters and no output schema, the description is incomplete. It covers the creation logic and deduplication rule but omits required parameter documentation, return value structure, and error scenarios expected for a complex 12-parameter mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage with 12 parameters. Description mentions 'email' in the deduplication context but fails to document the 3 required parameters (orgSlug, name, lastName) or explain the other 9 optional fields (rut, apiKey, idempotencyKey, etc.). Insufficient compensation for schema deficiency.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states specific verb (Create), resource (client), and scope (in the organization). Clearly distinguishes from siblings like client_get, client_list, and client_update through the explicit creation intent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through the email deduplication behavior, indicating when duplicates are avoided. However, lacks explicit guidance on when to use client_update vs client_create, or prerequisites like required fields.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
client_get (Grade C)
Get complete details of a client including financial summary and recent sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| orgSlug | Yes | ||
| clientId | Yes | ||
| includeHistory | No | ||
| includeFinancials | No |
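A hypothetical call; the mapping of the `include*` flags to the returned sections is an inference from the description, not documented behavior:

```python
# Hypothetical arguments for client_get.
args = {
    "orgSlug": "acme-spa",
    "clientId": "cli_42",        # assumed ID shape, e.g. obtained from client_list
    "includeFinancials": True,   # presumably toggles the financial summary
    "includeHistory": True,      # presumably toggles the recent sessions
}
```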
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It partially succeeds by disclosing return value characteristics (financial summary, recent sessions) in lieu of an output schema. However, it fails to mention error behaviors (e.g., client not found), authorization requirements, or whether the operation is safe/idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no redundant text. However, given the complete lack of schema documentation and annotations, it may be overly terse—leaving insufficient space to explain the five parameters or behavioral constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with five parameters (two required), zero schema documentation, no annotations, and no output schema, the description is insufficient. It hints at return value structure but provides no guidance on required parameters (orgSlug, clientId) or how to construct a valid request, leaving significant gaps for agent operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate for five undocumented parameters. The description implicitly maps to includeFinancials and includeHistory by mentioning 'financial summary' and 'recent sessions,' but completely omits the required identifiers (orgSlug, clientId) and authentication (apiKey), leaving critical parameters unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Get) and resource (client details), and distinguishes this from sibling tools like client_list (implied single vs multiple) and client_create/client_update (read vs write). It specifies unique content areas (financial summary, recent sessions) that hint at the scope of returned data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives like client_list. While the naming convention (get vs list) implies single-record retrieval by identifier, there is no text explaining prerequisites (e.g., needing a clientId from client_list) or when to prefer this over other client tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
client_list (Grade B)
List clients of an organization with search and pagination. Can filter by provider or outstanding debt.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| apiKey | No | ||
| cursor | No | ||
| search | No | ||
| hasDebt | No | ||
| orgSlug | Yes | ||
| providerId | No |
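Since the schema documents neither the cursor mechanics nor the response shape, the pagination loop below is a sketch under assumptions: `call_tool` is a stand-in for a real MCP client call, and the `clients`/`nextCursor` response fields are hypothetical.

```python
# Hypothetical cursor-based pagination loop for client_list.
def call_tool(name, args):
    # Stub standing in for a real MCP client call; returns canned pages.
    pages = {
        None: {"clients": [{"id": 1}, {"id": 2}], "nextCursor": "p2"},
        "p2": {"clients": [{"id": 3}], "nextCursor": None},
    }
    return pages[args.get("cursor")]

def list_all_clients(org_slug, **filters):
    cursor, out = None, []
    while True:
        page = call_tool("client_list",
                         {"orgSlug": org_slug, "cursor": cursor, **filters})
        out.extend(page["clients"])
        cursor = page.get("nextCursor")
        if not cursor:
            return out

clients = list_all_clients("demo-org", hasDebt=True)
```

The loop stops when no cursor is returned; filters such as `hasDebt` or `providerId` are simply forwarded on every page request.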
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions pagination behavior but fails to disclose critical traits: read-only vs. destructive nature, authentication requirements (apiKey parameter exists but is undocumented), rate limits, or response structure. This leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with no filler. The first establishes core functionality and the second specifies filter capabilities, making it appropriately front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters, zero schema descriptions, no annotations, and no output schema, the description provides the minimum viable context for a list operation. However, it lacks necessary details on the required orgSlug, authentication via apiKey, and expected return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description partially compensates by mapping capabilities to parameters: 'search' (search), 'pagination' (limit/cursor), 'filter by provider' (providerId), and 'outstanding debt' (hasDebt). However, it fails to mention the required orgSlug or the apiKey parameter, leaving critical inputs undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (List) and resource (clients of an organization) and mentions key capabilities (search, pagination, filtering by provider/debt). However, it does not explicitly differentiate from sibling cierre_listar_clientes or clarify when to use this versus client_get for single client retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage scenarios (when you need a list with filters) but provides no explicit when-to-use guidance, prerequisites, or alternatives. It does not indicate when to prefer this over client_get (single retrieval) or cierre_listar_clientes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
client_update (grade B)
Update an existing client's personal data. Email cannot be changed via MCP.
| Name | Required | Description | Default |
|---|---|---|---|
| rut | No | ||
| name | No | ||
| notes | No | ||
| phone | No | ||
| apiKey | No | ||
| comuna | No | ||
| orgSlug | Yes | ||
| clientId | Yes | ||
| lastName | No | ||
| direccion | No | ||
| proveedorTitularId | No |
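The description states that email cannot be changed via MCP and the evaluation below flags unclear partial-update semantics. A minimal sketch of a client-side guard, assuming partial-update behavior (only provided fields change); the allowed field set comes from the parameter table above.

```python
# Build a client_update payload containing only the fields that changed,
# rejecting email edits up front (the listing says email is immutable via MCP).
def build_update_args(org_slug, client_id, changes):
    if "email" in changes:
        raise ValueError("email cannot be changed via MCP")
    allowed = {"rut", "name", "notes", "phone", "comuna",
               "lastName", "direccion", "proveedorTitularId"}
    unknown = set(changes) - allowed
    if unknown:
        raise ValueError(f"unknown fields: {unknown}")
    return {"orgSlug": org_slug, "clientId": client_id, **changes}

args = build_update_args("demo-org", "cl_123", {"phone": "+56 9 1234 5678"})
```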
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Mentions email immutability constraint but fails to disclose mutation side effects, idempotency, authorization requirements, or what happens if clientId doesn't exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero fluff. First establishes purpose, second states key constraint. Every word earns its place and critical information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 11-parameter mutation tool with no output schema and zero schema descriptions, the description is inadequate. Lacks parameter definitions, return value structure, error scenarios, or partial vs full update semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. Description vaguely references 'personal data' (covering name, phone, etc.) but fails to explain 11 parameters including critical identifiers (orgSlug, clientId, proveedorTitularId) or apiKey purpose. Insufficient compensation for zero schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Update') + resource ('client's personal data') + scope ('existing'). Distinguishes effectively from sibling tools like client_create (new vs existing) and client_get (read vs write).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides one explicit constraint ('Email cannot be changed via MCP') which prevents incorrect usage. Implies usage for existing clients only, but lacks explicit when-to-use guidance versus client_create or error handling guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
comms_create_campaign (grade A)
Create a new email campaign with HTML body to send to a segmented audience. Supports variable substitution: {nombre}, {apellido}, {nombre_completo}, {email}, {telefono}, {organizacion}. Use audienceType "predefined" with audienceId "active"/"inactive"/"new"/"with_whatsapp"/"without_whatsapp", or "adhoc" with custom filters. Returns campaign ID and recipient count. Campaign starts as draft — use comms_send_campaign to execute. Requires confirm: true.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | ||
| type | No | ||
| apiKey | No | ||
| confirm | Yes | ||
| orgSlug | Yes | ||
| emailBody | Yes | ||
| audienceId | No | ||
| adHocFilters | No | ||
| audienceType | Yes | ||
| emailSubject | No |
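The description lists the substitution placeholders the server supports ({nombre}, {apellido}, and so on). The sketch below only illustrates that placeholder contract with Python's own `str.format`; the real rendering happens server-side and may differ.

```python
# Expand {nombre}-style variables in an emailBody, mirroring the
# placeholder names documented in the tool description.
def render_preview(body, recipient):
    # str.format raises KeyError for any placeholder missing from recipient,
    # which surfaces incomplete data before the campaign is created.
    return body.format(**recipient)

body = "<p>Hola {nombre} {apellido}, te escribe {organizacion}.</p>"
preview = render_preview(body, {
    "nombre": "Ana", "apellido": "Rojas", "organizacion": "Demo Org",
})
```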
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. It successfully covers: variable substitution placeholders (Spanish locale), return values (campaign ID and recipient count), draft state behavior, and confirmation requirement. Minor gap: doesn't explain apiKey authentication implications or adHocFilters structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three dense sentences with zero waste. Front-loaded with purpose, middle covers configuration specifics, end covers lifecycle and prerequisites. Every clause delivers unique information not found in structured fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong coverage for a 10-parameter tool with no output schema: description explains return values and complex audience selection logic. Minor deduction: description focuses on 'email' while the schema's 'type' parameter supports whatsapp/auto, suggesting slightly incomplete capability coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, so description must compensate. It effectively documents audienceType/audienceId enums, confirm requirement, and emailBody format. However, it doesn't explain the 'type' parameter (which allows 'whatsapp'/'auto' per schema), apiKey usage, or adHocFilters nested structure, leaving some parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb 'Create' + resource 'email campaign' + key trait 'HTML body' and 'segmented audience'. Critically distinguishes from sibling tool comms_send_campaign by clarifying this creates a draft while the sibling executes it.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit workflow guidance: 'Campaign starts as draft — use comms_send_campaign to execute'. Also clarifies when to use 'predefined' vs 'adhoc' audience types with specific valid values for audienceId. States mandatory requirement 'confirm: true'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
comms_get_campaign (grade B)
Get details of a specific campaign with optional delivery logs per recipient.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| orgSlug | Yes | ||
| logLimit | No | ||
| campaignId | Yes | ||
| includeLogs | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully indicates that delivery logs are available per recipient (a key data behavior), but fails to disclose safety properties (idempotency, read-only nature), rate limits, or the structure/format of the returned campaign details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single 12-word sentence is efficiently structured with the primary action front-loaded ('Get details'). However, given the lack of schema documentation and annotations, the description may be overly terse, sacrificing necessary explanatory detail for brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a retrieval tool with 5 parameters and no output schema, the description minimally covers the main entity (campaign) and optional sub-resource (logs). However, significant gaps remain: no explanation of what constitutes a 'campaign' in this domain, no return value description, and no parameter documentation beyond the log feature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description inadequately compensates for undocumented parameters. It implicitly explains 'includeLogs' and 'logLimit' via 'optional delivery logs', but completely omits the two required parameters ('orgSlug', 'campaignId') and the 'apiKey' authentication parameter, leaving critical inputs undefined.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('details of a specific campaign'), clearly distinguishing it from sibling 'comms_list_campaigns' (single retrieval vs. listing) and 'comms_send_message' (read vs. write). It also identifies the secondary resource ('delivery logs per recipient').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the word 'optional' hints at when to request logs, the description provides no explicit guidance on when to use this tool versus 'comms_list_campaigns' (e.g., 'use this when you have a specific campaign ID'), nor does it mention prerequisites like authentication or required permissions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
comms_get_preferences (grade B)
Get the communication preferences for an organization (WhatsApp, email, confirmation, reminder channels and messages).
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| orgSlug | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully enumerates what data is returned (WhatsApp settings, email config, confirmation/reminder channels), which partially compensates for the missing output schema. However, it fails to disclose whether the operation is safely read-only, what error conditions may arise, or whether rate limits apply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, information-dense sentence that front-loads the action ('Get') and resource. The parenthetical list of preference types (WhatsApp, email, etc.) efficiently clarifies scope without verbosity. No wasted words or redundant phrases.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a simple retrieval tool with 2 parameters and no output schema, the description adequately covers the conceptual output (preference fields). However, it falls short due to the complete absence of parameter documentation and the lack of guidance on the tool's relationship to the broader communications workflow (e.g., that preferences affect 'comms_send_message' behavior).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% for both 'orgSlug' and 'apiKey' parameters. The description mentions 'for an organization', providing weak semantic context for 'orgSlug', but completely omits 'apiKey'. With zero schema documentation, the description inadequately compensates by failing to explain parameter formats, requirements, or the API key's purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves communication preferences for an organization, specifying the exact resource and using a precise verb ('Get'). It distinguishes from siblings like 'comms_update_preferences' and 'comms_send_message' by focusing on retrieval of preference settings rather than mutation or messaging operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'comms_update_preferences' or 'comms_get_campaign'. It omits prerequisites (e.g., whether the organization must exist) and gives no hints about read-after-write patterns or caching considerations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
comms_list_campaigns (grade C)
List communication campaigns (WhatsApp/email) for the organization. Filter by status.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| apiKey | No | ||
| cursor | No | ||
| status | No | ||
| orgSlug | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It fails to disclose read-only safety, pagination behavior (despite cursor/limit parameters existing), or return format. The agent cannot determine if this is a safe operation or how to handle multi-page results.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two-sentence structure is front-loaded with the core purpose first, and every sentence conveys distinct information (scope and filtering capability). However, given the tool's complexity (5 parameters including pagination), the extreme brevity borders on under-specification.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters, pagination support, and no output schema or annotations, the description is insufficient. It does not explain the pagination mechanism, required orgSlug context, or what data structure to expect in returns, leaving significant gaps in contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must compensate significantly. It only explicitly references 'status' filtering and implicitly 'organization' scoping. It completely omits explanation of pagination parameters (cursor, limit) and the apiKey authentication parameter, leaving critical functionality undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists communication campaigns and specifies the channels (WhatsApp/email). However, it does not explicitly distinguish from sibling `comms_get_campaign`, which likely retrieves a single campaign versus this tool's list functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the ability to 'Filter by status,' providing some usage context. However, it lacks explicit guidance on when to use this versus `comms_get_campaign` or `comms_send_message`, and does not mention pagination requirements for large result sets.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
comms_render_message (grade A)
Render a communication template as a visual image (PNG). Available templates: session-confirmation, session-reminder, payment-reminder. Use action "preview" to get the image URL, "send" to render and send via WhatsApp with the image attached. Each template requires specific data fields (clientName, providerName, date, time, etc.).
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | ||
| width | No | ||
| action | No | preview | |
| apiKey | No | ||
| orgSlug | Yes | ||
| clientId | No | ||
| template | Yes | ||
| whatsappTo | No | ||
| whatsappBody | No |
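The action/template contract described above can be sketched as a client-side validator. The required data fields per template are assumptions inferred from the examples the description gives (clientName, providerName, date, time); the real server may require more or fewer.

```python
# Validate comms_render_message arguments before calling the tool.
# Template names come from the description; per-template field sets
# are hypothetical.
TEMPLATES = {
    "session-confirmation": {"clientName", "providerName", "date", "time"},
    "session-reminder": {"clientName", "providerName", "date", "time"},
    "payment-reminder": {"clientName", "providerName"},
}

def validate_render_args(template, action, data, whatsapp_to=None):
    if template not in TEMPLATES:
        raise ValueError(f"unknown template: {template}")
    missing = TEMPLATES[template] - data.keys()
    if missing:
        raise ValueError(f"missing data fields: {missing}")
    if action == "send" and not whatsapp_to:
        raise ValueError("'send' needs a WhatsApp recipient")
    return True

ok = validate_render_args(
    "session-reminder", "preview",
    {"clientName": "Ana", "providerName": "Dr. Soto",
     "date": "2024-06-01", "time": "10:00"},
)
```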
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses output format (PNG), WhatsApp delivery for 'send' action, and template data requirements. However, it lacks critical behavioral details: return value structure (JSON with URL?), error handling for missing data fields, image storage duration, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence 1 establishes purpose and templates, sentence 2 explains action modes, sentence 3 covers data requirements. Information is front-loaded and density is optimal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 9 parameters with nested objects and no output schema or annotations, the description covers the primary use case adequately but has notable gaps: the 'upload' action is unexplained, the return value format is unspecified, and required parameters like 'orgSlug' lack semantic context. Sufficient for basic operation but incomplete for robust agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, requiring description compensation. It successfully explains the 'template' enum values, 'action' values (partially), and the nested 'data' object structure (clientName, providerName, etc.). However, it leaves 6 of 9 parameters (orgSlug, apiKey, clientId, whatsappTo, whatsappBody, width) completely undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core function ('Render a communication template as a visual image') with specific format (PNG) and lists available templates. However, it could better distinguish from sibling tools like 'comms_send_message' by explicitly contrasting image rendering vs. text messaging.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use 'preview' (get image URL) vs 'send' (render and send via WhatsApp). However, it completely omits the 'upload' action present in the schema enum, leaving a gap in usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
comms_send_campaign (grade A)
Execute a draft or scheduled campaign. Sends messages to all matching recipients asynchronously. Campaign must be in draft or scheduled status. Returns immediately — use comms_get_campaign to track progress. Requires confirm: true.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| confirm | Yes | ||
| orgSlug | Yes | ||
| campaignId | Yes |
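The send-then-poll workflow the description prescribes ("Returns immediately — use comms_get_campaign to track progress") can be sketched as below. `call_tool` is a stub, and the status values and response fields are assumptions, since no output schema is published.

```python
# Execute a campaign, then poll comms_get_campaign until it finishes.
_statuses = iter(["sending", "sending", "sent"])

def call_tool(name, args):
    if name == "comms_send_campaign":
        assert args.get("confirm") is True, "confirm: true is required"
        return {"accepted": True}           # returns immediately
    if name == "comms_get_campaign":
        return {"status": next(_statuses)}  # poll for async progress

def send_and_wait(org_slug, campaign_id, max_polls=10):
    call_tool("comms_send_campaign",
              {"orgSlug": org_slug, "campaignId": campaign_id, "confirm": True})
    for _ in range(max_polls):
        status = call_tool("comms_get_campaign",
                           {"orgSlug": org_slug,
                            "campaignId": campaign_id})["status"]
        if status in ("sent", "failed"):
            return status
    return "timeout"

result = send_and_wait("demo-org", "camp_42")
```

A real client would sleep between polls; the loop is kept tight here only so the stub terminates.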
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and discloses key behaviors: asynchronous processing ('asynchronously'), immediate return behavior ('Returns immediately'), state validation requirements ('draft or scheduled status'), and safety confirmation ('Requires confirm: true'). Minor gap regarding error handling or idempotency prevents a 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences, each earning its place: action definition, behavioral details, prerequisites, post-call flow, and safety requirement. No redundancy or filler. Front-loaded with the core action 'Execute a draft or scheduled campaign.'
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 0% schema coverage and no output schema, the description effectively covers critical gaps: explains the async flow, documents the tracking mechanism via sibling tool, and mandates the confirmation parameter. Lacks only error state documentation or response structure details to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% (apiKey, confirm, orgSlug, campaignId all undocumented in schema). Description compensates partially by explaining confirm must be true and implying campaignId refers to draft/scheduled campaigns, but orgSlug and apiKey remain unexplained. Adequate but incomplete compensation for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states the tool 'Execute[s] a draft or scheduled campaign' and 'Sends messages to all matching recipients asynchronously' — specific verb (execute/send), specific resource (campaign), and distinguishes from siblings like comms_create_campaign (creates vs executes) and comms_send_message (individual vs campaign bulk).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisites ('Campaign must be in draft or scheduled status'), explicit post-call guidance ('Returns immediately — use comms_get_campaign to track progress'), and critical invocation requirement ('Requires confirm: true'). Clearly distinguishes when to use this versus comms_get_campaign for tracking.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
comms_send_message (grade A)
Send a single WhatsApp or email message to a specific client. Use templateKey for predefined templates or customMessage for free text. Requires confirm: true.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| channel | Yes | ||
| confirm | Yes | ||
| orgSlug | Yes | ||
| clientId | Yes | ||
| variables | No | ||
| templateKey | No | ||
| customMessage | No |
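The description presents templateKey and customMessage as alternative content sources. A sketch enforcing exactly-one-of on the client side; note the mutual exclusivity itself is an assumption, since the listing never states what happens if both are passed.

```python
# Build a comms_send_message payload with exactly one content source.
def build_message_args(org_slug, client_id, channel,
                       template_key=None, custom_message=None, variables=None):
    if bool(template_key) == bool(custom_message):
        raise ValueError("provide exactly one of templateKey or customMessage")
    args = {"orgSlug": org_slug, "clientId": client_id,
            "channel": channel, "confirm": True}
    if template_key:
        args["templateKey"] = template_key
        if variables:  # likely fills template placeholders (undocumented)
            args["variables"] = variables
    else:
        args["customMessage"] = custom_message
    return args

args = build_message_args("demo-org", "cl_123", "whatsapp",
                          template_key="session-reminder",
                          variables={"nombre": "Ana"})
```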
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the confirmation safety requirement ('confirm: true') and the mutual exclusivity of template vs custom content. However, it omits critical behavioral details for a write operation: whether messages are sent immediately or queued, error handling (e.g., invalid phone numbers), rate limits, or if messages can be recalled.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose first, parameter logic second, requirement third. Every sentence earns its place with no redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 parameters with 0% schema coverage and no output schema, the description provides minimum viable context for the primary use case (sending templated vs custom messages) but leaves addressing parameters (orgSlug, clientId) and the nested 'variables' object unexplained. For a high-stakes messaging operation, additional behavioral context (side effects, failure modes) is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. It explains templateKey, customMessage (including the 2000 char maxLength implied by context), and confirm. However, it fails to explain orgSlug, clientId, apiKey, or crucially, 'variables' (which likely populates template placeholders), leaving significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Send'), channel ('WhatsApp or email'), scope ('single'), and target ('specific client'). The word 'single' effectively distinguishes this from sibling campaign tools (comms_list_campaigns, comms_get_campaign) which imply bulk operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance on parameter selection ('Use templateKey for predefined templates or customMessage for free text') and prerequisites ('Requires confirm: true'). Lacks explicit contrast with campaign tools or guidance on when messaging is appropriate vs other communication methods.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
comms_update_preferences (Grade: A)
Enable or disable communication channels and features for an organization. Partial update — only provided fields are changed. Creates preferences if none exist.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
| emailEnabled | No | | |
| phoneRequired | No | | |
| primaryChannel | No | | |
| whatsappEnabled | No | | |
| emailReminderEnabled | No | | |
| phoneRequiredMessage | No | | |
| whatsappReminderEnabled | No | | |
| whatsappReminderMessage | No | | |
| emailConfirmationEnabled | No | | |
| whatsappConfirmationEnabled | No | | |
| whatsappConfirmationMessage | No | | |
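The partial-update and upsert semantics in the description can be sketched as a simple merge: only keys present in the request overwrite stored values, and a missing record is created on the fly. The helper below is hypothetical (not part of the server API); the field names come from the parameter table.

```python
# Minimal sketch of comms_update_preferences' documented behavior:
# partial update (only provided fields change) plus upsert
# ("Creates preferences if none exist").

def apply_preference_update(stored, update):
    """Merge a partial update into stored preferences.

    If no preferences exist yet (stored is None), the update
    creates the record. Only keys present in `update` overwrite
    stored values; everything else is left untouched.
    """
    base = dict(stored) if stored is not None else {}
    base.update(update)
    return base

# Example: disable WhatsApp without touching the email settings.
stored = {"emailEnabled": True, "whatsappEnabled": True}
updated = apply_preference_update(stored, {"whatsappEnabled": False})
```

Sending only the changed field is the whole point of the PATCH-style contract: an agent never has to read back and resend the full preference set.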
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses two key behavioral traits: partial update semantics and upsert behavior (create if missing). However, it omits auth requirements (apiKey parameter exists but is unexplained), safety implications, or side effects of changing preferences.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose statement front-loaded, followed by partial update semantics, then upsert behavior. Every sentence earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 13-parameter mutation tool with no output schema. Covers the critical behavioral contract (partial update, upsert) but remains thin on domain specifics (doesn't mention email/whatsapp specifically) and omits parameter documentation that the schema fails to provide.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. It provides semantic grouping ('communication channels', 'features') that maps to the boolean toggles and message fields, but fails to document the required orgSlug identifier, apiKey authentication, or specific enum values for primaryChannel.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool enables/disables 'communication channels and features' for an organization, using specific verbs. Distinguishes itself from sibling comms_get_preferences by specifying 'update' behavior and from comms_send_message by focusing on configuration rather than message transmission.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage guidance through the 'Partial update' explanation (only provided fields changed) and upsert behavior ('Creates preferences if none exist'). However, lacks explicit when-to-use guidance or comparison to comms_get_preferences for read scenarios.
disputes_list (Grade: B)
List disputes for an organization. Filter by status or type. Returns disputes with client and provider info.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | | |
| type | No | | |
| limit | No | | |
| apiKey | No | | |
| status | No | | |
| orgSlug | Yes | | |
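Since the schema documents neither `page` nor `limit`, an agent has to guess the pagination contract. The sketch below assumes conventional 1-based `page` and page-size `limit` semantics (an assumption, since the schema does not say) and uses a placeholder `call_tool` function standing in for the MCP client call.

```python
# Hypothetical paging loop for disputes_list, assuming 1-based `page`
# and a `limit` page size. A batch shorter than `limit` is taken as
# the last page.

def list_all_disputes(call_tool, org_slug, status=None, limit=50):
    """Collect disputes across pages until a short page signals the end."""
    disputes, page = [], 1
    while True:
        args = {"orgSlug": org_slug, "page": page, "limit": limit}
        if status is not None:
            args["status"] = status  # optional server-side filter
        batch = call_tool("disputes_list", args)
        disputes.extend(batch)
        if len(batch) < limit:  # fewer results than requested: done
            return disputes
        page += 1
```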
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It partially compensates by noting the return structure ('Returns disputes with client and provider info'), but fails to disclose safety characteristics, pagination behavior, error modes, or that the apiKey parameter implies authentication requirements.
Is the description appropriately sized, front-loaded, and free of redundancy?
The three-sentence structure is optimally front-loaded: purpose (sentence 1), filtering constraints (sentence 2), and return value preview (sentence 3). Every sentence earns its place with zero redundancy or wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters with 0% schema coverage and no output schema, the description provides minimal viable context by explaining the core entity and return structure, but remains incomplete regarding pagination semantics (page/limit) and authentication. Adequate but with clear gaps.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, requiring the description to compensate. It adds semantic meaning for 'status' and 'type' via the filtering mention and implies 'orgSlug' through 'organization,' but leaves 'page', 'limit', and 'apiKey' completely undocumented. This is partial compensation for the schema gap.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'List[s] disputes for an organization' with specific verb, resource, and scope. While the term 'disputes' distinguishes it from sibling tools like finance_list_cobros or client_list, it could strengthen differentiation by clarifying if these are payment disputes vs other types.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions filtering capabilities ('Filter by status or type') but provides no explicit guidance on when to use this tool versus finance_list_payments or other financial listing tools. No prerequisites, exclusion criteria, or alternatives are documented.
dunning_configure (Grade: B)
Update dunning (payment recovery) configuration for an organization. All fields except organizationSlug are optional — only provided fields are updated, rest stays unchanged.
| Name | Required | Description | Default |
|---|---|---|---|
| steps | No | | |
| enabled | No | | |
| blockOnStep5 | No | | |
| gracePeriodDays | No | | |
| organizationSlug | Yes | | |
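The partial-update contract means the payload should contain only the fields being changed. The helper below is a hypothetical client-side sketch: the allowed top-level field names come from the parameter table, and everything else about the call (including the undocumented shape of `steps`) is left alone.

```python
# Sketch of building a PATCH-style payload for dunning_configure.
# Omitting a field leaves its server-side value unchanged, per the
# description above.

def build_dunning_update(organization_slug, **changes):
    """Include only the fields being changed; reject unknown names."""
    allowed = {"steps", "enabled", "blockOnStep5", "gracePeriodDays"}
    unknown = set(changes) - allowed
    if unknown:
        raise ValueError("unknown dunning fields: %s" % sorted(unknown))
    return {"organizationSlug": organization_slug, **changes}

# Enable dunning with a one-week grace period; `steps` is omitted,
# so the existing step configuration stays untouched.
payload = build_dunning_update("acme-spa", enabled=True, gracePeriodDays=7)
```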
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It successfully discloses the partial-update behavior (unmentioned fields remain unchanged), which is essential for a configuration update. However, it misses other critical behavioral details: side effects on active overdue accounts, validation constraints (e.g., step ordering), and idempotency.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, both essential. First establishes purpose; second gives critical partial-update instruction. No redundancy, well front-loaded.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a configuration tool with 5 parameters (including a nested array of objects with enums and constraints) and 0% schema coverage, the description is insufficient. It does not explain the domain model (what constitutes a valid dunning step sequence), valid enum semantics for 'channel', or the interaction between 'steps' and 'blockOnStep5'.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, requiring heavy description compensation. While it notes that 'organizationSlug' is the only required field, it fails to explain the complex domain semantics of 'steps' (dunning escalation workflow), 'blockOnStep5' (what gets blocked?), or 'gracePeriodDays' context, leaving the agent to guess the business logic from property names alone.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Update' and resource 'dunning (payment recovery) configuration' with helpful parenthetical definition. However, it does not explicitly distinguish from sibling 'dunning_get_config' (read vs write), though the verb difference is somewhat implicit.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides crucial usage guidance that this follows partial-update/PATCH semantics ('only provided fields are updated'), explaining how to populate parameters. However, lacks explicit guidance on WHEN to use this vs the sibling 'dunning_get_config' or prerequisites like permissions.
dunning_get_config (Grade: B)
Get the current dunning (payment recovery) configuration for an organization. Returns whether dunning is enabled, grace period, step timings, and blocking settings.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| organizationSlug | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description compensates by detailing return values: 'whether dunning is enabled, grace period, step timings, and blocking settings'. Missing safety context (read-only assumed from 'Get' but not explicit) and rate limits. Adequate given no output schema exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes purpose, second details return payload. Perfectly front-loaded and appropriately sized for tool complexity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Acceptable for a simple getter with 2 parameters. Return value description substitutes for missing output schema, but 0% parameter coverage and lack of usage context leave gaps that should be filled.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (no parameter descriptions in JSON schema). Description mentions 'organization' implying organizationSlug context, but completely omits apiKey parameter semantics. Insufficient compensation for undocumented schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent clarity: specific verb 'Get' + resource 'dunning (payment recovery) configuration' + scope 'for an organization'. The parenthetical explanation of dunning and explicit distinction from sibling 'dunning_configure' makes the purpose unmistakable.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Zero usage guidance provided. No mention of when to use this vs. sibling 'dunning_configure', prerequisites for the organizationSlug parameter, or typical workflows (e.g., check config before updating).
email_domain_delete (Grade: A)
Remove the configured email sending domain from the organization. This deletes it from both Resend and the database. The organization will revert to using the default Coordinalo sending address. Requires confirm: true.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| confirm | Yes | | |
| orgSlug | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully reveals the deletion scope (both Resend and database), the fallback behavior (reversion to default sender), and the safety mechanism (requires confirm: true). Does not address reversibility or impact on queued emails.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences with zero redundancy: action, implementation details, consequence, and requirement. Every sentence adds distinct value beyond the schema and title.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a destructive operation with no annotations or output schema, covering the essential behavioral impact and confirmation requirement. However, clear gaps remain in parameter documentation (apiKey, orgSlug) and lack of return value description.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description partially compensates by documenting the critical `confirm` parameter ('Requires confirm: true'). However, it fails to explain `apiKey` entirely and only implicitly references `orgSlug` via 'organization', leaving gaps for a 3-parameter tool.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Remove', 'deletes') and identifies the exact resource (configured email sending domain). It distinguishes from siblings like email_domain_register and email_domain_verify by explicitly describing the removal action and organizational impact (reverting to default Coordinalo address).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage context through the consequence description (reverting to default address), but lacks explicit guidance on when to choose this over email_domain_verify or register, and does not specify prerequisites beyond the confirmation flag.
email_domain_get (Grade: A)
Get the email sending domain configured for an organization and its verification status (PENDING, VERIFIED, FAILED). Returns null if no domain is configured. Use email_domain_register to set one up.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
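The documented null return gives agents a clean branch point: check first, register only if nothing is configured. This is a hypothetical client-side sketch; `call_tool` is a placeholder for however the MCP client invokes tools.

```python
# Sketch of acting on email_domain_get's documented null return:
# "Returns null if no domain is configured. Use email_domain_register
# to set one up."

def ensure_sending_domain(call_tool, org_slug, domain):
    """Return the configured domain, registering one if none exists."""
    current = call_tool("email_domain_get", {"orgSlug": org_slug})
    if current is None:  # no domain configured yet
        return call_tool("email_domain_register",
                         {"orgSlug": org_slug, "domain": domain})
    return current
```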
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full disclosure burden. It successfully documents the null-return edge case and enumerates specific status values (PENDING, VERIFIED, FAILED). Lacks explicit read-only safety declaration, though implied by 'Get'.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first sentence delivers purpose, return structure, status enum values, and null-handling behavior; second sentence provides sibling tool reference. Information is front-loaded and every clause earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description adequately documents return behavior (domain object with status field or null) and enumerates possible status values. Sufficient for a simple getter tool, though could briefly describe the domain object structure beyond status.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. Description partially compensates by mentioning 'organization' which maps conceptually to the required 'orgSlug' parameter, but provides no explicit parameter guidance, syntax details, or mention of the optional 'apiKey' parameter.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with clear resource 'email sending domain' and scope 'configured for an organization'. It distinguishes from sibling tools by explicitly referencing 'email_domain_register to set one up', clearly positioning this as retrieval-only versus setup.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit alternative tool reference ('Use email_domain_register to set one up') indicating when NOT to use this tool. Also clarifies edge case behavior with 'Returns null if no domain is configured', establishing expectations for empty states.
email_domain_register (Grade: A)
Register a custom email sending domain for an organization via Resend. Returns DNS records that must be configured in the domain provider before verification. Replaces any previously configured domain. After adding DNS records, call email_domain_verify to check status.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| domain | Yes | | |
| orgSlug | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses critical destructive behavior (replaces previous domain), return value type (DNS records), and external dependency (must configure in domain provider). Missing: auth requirements, rate limits, or error scenarios.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: purpose (sentence 1), return value/behavior (sentence 2), destructive warning (sentence 3), and next step instruction (sentence 4). Information is front-loaded and logically sequenced.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a 3-parameter tool with no output schema. Describes return values (DNS records), complete workflow (register → configure → verify), and side effects (replacement). Only gap is detailed parameter documentation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, requiring description compensation. Mentions 'domain' and 'organization' concepts mapping to parameters, but does not explicitly document the apiKey parameter or parameter formats/constraints. Partial compensation for the schema gap.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Register), resource (custom email sending domain), scope (via Resend), and target (organization). It effectively distinguishes from siblings like email_domain_verify by positioning this as the initial registration step.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly references the sibling tool email_domain_verify for the next workflow step ('After adding DNS records, call email_domain_verify'). Warns about replacement behavior ('Replaces any previously configured domain'). Lacks explicit 'when not to use' exclusions, but provides clear workflow context.
email_domain_verify (Grade: A)
Trigger DNS verification for the configured email domain and return updated status. Call this after the organization has added the required DNS records. Status will be VERIFIED (ready to send), PENDING (DNS not yet propagated), or FAILED.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
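Because DNS propagation is asynchronous, a PENDING result usually means "try again later". The retry loop below is a hypothetical sketch: the status values (VERIFIED, PENDING, FAILED) come from the tool description, while the retry count, delay, and `call_tool` helper are assumptions.

```python
import time

# Hypothetical polling wrapper around email_domain_verify: keep
# re-triggering verification until DNS propagates or verification
# fails outright.

def wait_for_verification(call_tool, org_slug, attempts=10, delay_s=30.0):
    """Return the terminal status, or PENDING if attempts run out."""
    for _ in range(attempts):
        result = call_tool("email_domain_verify", {"orgSlug": org_slug})
        if result["status"] in ("VERIFIED", "FAILED"):
            return result["status"]
        time.sleep(delay_s)  # PENDING: DNS not yet propagated
    return "PENDING"
```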
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses three possible return states (VERIFIED, PENDING, FAILED) with semantic meanings ('ready to send', 'DNS not yet propagated'). Explains the asynchronous nature of DNS propagation. Missing idempotency details or retry guidance for PENDING state.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste. Front-loaded with action and return value, followed by prerequisite, then return state enumeration. Every sentence provides distinct value (action, timing, possible outcomes).
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description stands in for one by listing all three status states and their business meanings. Covers the DNS verification workflow context adequately. Could improve with retry guidance for PENDING or remediation steps for FAILED, but it is sufficient for correct invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% with no parameter descriptions. Description mentions 'organization' which loosely maps to orgSlug, and implies domain configuration context, but fails to document apiKey or explicitly define orgSlug as the organization identifier. Inadequate compensation for complete schema description absence.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Trigger DNS verification'), target resource ('configured email domain'), and return value ('updated status'). Distinguishes from sibling tools like email_domain_register (setup) and email_domain_get (passive retrieval) by emphasizing the active verification trigger.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear temporal prerequisite: 'Call this after the organization has added the required DNS records.' Establishes workflow sequence. Lacks explicit naming of alternatives (e.g., when to use email_domain_get instead), but the 'trigger' verb implies active vs. passive usage.
finance_aging (Grade: A)
Get accounts receivable aging report: pending charges grouped by age buckets (0-7, 7-30, 30-90, 90+ days). Use to answer "who owes money" or "old debts" questions.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| apiKey | No | | |
| cursor | No | | |
| orgSlug | Yes | | |
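The bucket labels in the description can be made concrete with a small mapping sketch. The labels (0-7, 7-30, 30-90, 90+ days) come from the tool description; which bucket the exact boundary days fall into is an assumption, since the ranges as written overlap at 7, 30, and 90.

```python
# Sketch of the aging buckets named in finance_aging's description.
# Boundary handling (7, 30, 90 going to the older bucket) is a guess.

def aging_bucket(age_days):
    """Map a pending charge's age in days to its report bucket."""
    if age_days < 7:
        return "0-7"
    if age_days < 30:
        return "7-30"
    if age_days < 90:
        return "30-90"
    return "90+"
```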
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses data grouping behavior (specific 0-7, 7-30, 30-90, 90+ buckets), but omits operational traits like read-only nature, rate limits, or auth requirements since no annotations exist.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly written sentences with front-loaded purpose; every clause earns its place with zero redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately compensates for missing output schema by detailing the report structure (age buckets), though pagination mechanics remain unexplained.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage and description fails to compensate—no guidance on orgSlug, cursor pagination, or apiKey purpose.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb-resource pair ('Get accounts receivable aging report') and clearly distinguishes from sibling finance tools via 'age buckets' and 'old debts' focus.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context ('who owes money', 'old debts'), though lacks explicit alternatives or exclusions (e.g., vs finance_client_balance).
finance_client_balance (Grade: B)
Get the complete financial balance for a client: total sales, charges, payments, pending debt, and credits.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
| clientId | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations and no output schema, the description carries the burden of explaining what the tool returns. It successfully lists the balance components (sales, charges, payments, debt, credits), but omits information about error states, data freshness, or whether the operation is idempotent/read-only.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the verb and resource, with the colon-separated list providing precise scoping without verbosity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description partially compensates by describing the return structure. However, with 0% schema coverage on three parameters, the description fails to provide adequate context for the input requirements, leaving the agent to infer parameter semantics.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. While it implies the 'clientId' parameter by mentioning 'for a client', it fails to document 'orgSlug' (critical for multi-tenant organization scoping) or 'apiKey' (authentication), leaving significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a 'complete financial balance' and enumerates specific components (sales, charges, payments, debt, credits), which distinguishes it from sibling list tools like finance_list_payments. However, it does not explicitly differentiate when to use this aggregated view versus querying specific transactions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like finance_aging or finance_list_payments, nor does it mention prerequisites such as requiring valid clientId and orgSlug identifiers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
finance_create_cobro (grade B)
Create a manual charge (cobro) for a client. Not linked to a sale/venta.
| Name | Required | Description | Default |
|---|---|---|---|
| tipo | No | ||
| fecha | No | ||
| monto | Yes | ||
| apiKey | No | ||
| orgSlug | Yes | ||
| clientId | Yes | ||
| descripcion | Yes | ||
| idempotencyKey | No |
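Because the description leaves `idempotencyKey` undocumented, a defensive caller might still supply one so a retried request cannot double-charge. This sketch assumes the server deduplicates on that key (the description does not confirm it); all values are placeholders:

```python
import uuid

def build_cobro_args(org_slug, client_id, monto, descripcion):
    """Assemble finance_create_cobro arguments with a client-generated
    idempotency key so a retried call is not applied twice (assumed
    server behavior, not documented by the tool)."""
    return {
        "orgSlug": org_slug,
        "clientId": client_id,
        "monto": monto,
        "descripcion": descripcion,
        "idempotencyKey": str(uuid.uuid4()),  # unique per logical charge
    }

args = build_cobro_args("demo-org", "cli_001", 150.0, "Late fee")
```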
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but discloses minimal behavioral traits. It does not explain whether creating a cobro triggers immediate billing, whether it generates an invoice, how the idempotencyKey is used, or what the mutation returns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence front-loads the core action; the second provides critical sibling differentiation. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial mutation tool with 8 parameters, zero schema documentation, no annotations, and no output schema, the description is insufficient. It lacks critical details on side effects, error conditions, return values, and comprehensive parameter guidance needed for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. While 'client' hints at clientId and 'charge' implies monto/descripcion, it fails to explain tipo, fecha, orgSlug, apiKey, or idempotencyKey semantics, leaving 5 of 8 parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Create') and resource ('manual charge/cobro') and explicitly distinguishes from siblings by stating it is 'Not linked to a sale/venta', clearly differentiating it from finance_create_venta.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a negative constraint ('Not linked to a sale/venta') implying when not to use it, but fails to explicitly name the alternative tool (finance_create_venta) or provide positive guidance on specific use cases for manual charges.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
finance_create_venta (grade C)
Create a service sale (venta) for a client. Optionally auto-creates a charge (cobro) depending on org configuration. Requires confirm: true.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| estado | No | ||
| precio | Yes | ||
| confirm | Yes | ||
| orgSlug | Yes | ||
| cantidad | No | ||
| clientId | Yes | ||
| fechaRef | No | ||
| servicioId | Yes | ||
| proveedorId | No |
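The "Requires confirm: true" constraint can be enforced client-side before dispatch. A minimal sketch, with the required set transcribed from the table above and hypothetical field values:

```python
# Required fields per the parameter table above.
REQUIRED_VENTA_FIELDS = {"orgSlug", "clientId", "servicioId", "precio", "confirm"}

def guarded_venta_args(args):
    """Refuse to dispatch finance_create_venta unless confirm is
    explicitly True and every required field is present."""
    if args.get("confirm") is not True:
        raise ValueError("finance_create_venta requires confirm=True")
    missing = REQUIRED_VENTA_FIELDS - args.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return args

ok = guarded_venta_args({
    "orgSlug": "demo-org",   # placeholders throughout
    "clientId": "cli_001",
    "servicioId": "srv_01",
    "precio": 90.0,
    "confirm": True,
})
```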
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden but only reveals one side effect (auto-charge creation) and one requirement (confirm). It fails to explain auth requirements (apiKey), the semantic difference between the estado enum values, idempotency, or what 'org configuration' controls.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact with three efficient sentences and no redundancy. However, given the high parameter count and zero schema coverage, the extreme brevity becomes a liability rather than a virtue, forcing critical details into an overly dense format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial mutation tool with 10 parameters, 0% schema coverage, no annotations, and no output schema, the description is insufficient. It lacks return value documentation, enum semantics, and parameter relationships necessary for safe invocation of a create operation in a finance domain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, requiring the description to compensate for 10 undocumented parameters. It explicitly mentions only 'confirm', while implying client and service context. It completely omits explanation of critical parameters like estado (realizado/proyectado), proveedorId, fechaRef, cantidad, and precio, leaving their semantics and formats undefined.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a 'service sale (venta)' with specific verbs and resources. It distinguishes itself from sibling finance_create_cobro by noting that this tool 'auto-creates a charge (cobro)' conditionally, clarifying the relationship between the two entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by noting the optional auto-creation of charges depending on org configuration, but lacks explicit guidance on when to use this versus finance_create_cobro or when to set estado to 'realizado' vs 'proyectado'. The confirm requirement is stated as a constraint rather than workflow guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
finance_get_cobro (grade B)
Get details of a specific charge (cobro) including all associated payments.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| cobroId | Yes | ||
| orgSlug | Yes |
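The Required column above translates directly into a preflight check. Everything here besides the table's flags is illustrative:

```python
# Required flags transcribed from the parameter table above.
GET_COBRO_PARAMS = {"apiKey": False, "cobroId": True, "orgSlug": True}

def missing_required(args, spec):
    """Return the required parameter names absent from args."""
    return sorted(name for name, required in spec.items()
                  if required and name not in args)

gaps = missing_required({"orgSlug": "demo-org"}, GET_COBRO_PARAMS)  # cobroId missing
```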
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It adds value by disclosing that associated payments are included in the response (behavioral trait). However, it lacks an explicit read-only declaration, error-handling details, or rate-limit warnings despite the safety-critical financial context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no wasted words. Front-loaded with the action verb and immediately specifies the resource and included sub-resources.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter tool with zero schema descriptions and no annotations, the description is insufficient. While it hints at return structure ('associated payments'), it fails to document critical parameters needed for invocation, leaving the agent to guess at orgSlug and apiKey purposes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so the description must compensate. It implicitly references 'cobroId' through 'specific charge,' but provides no semantics for 'orgSlug' (organization scoping) or 'apiKey' (authentication), leaving two of three parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get details') and resource ('specific charge/cobro'), and distinguishes this from list operations by emphasizing 'specific.' It also clarifies scope by mentioning 'including all associated payments,' which distinguishes it from a basic charge lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus siblings like `finance_list_cobros` (list vs. detail view) or `finance_list_payments` (direct payment lookup). No prerequisites mentioned despite required orgSlug/cobroId parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
finance_list_cobros (grade B)
List charges (cobros) for an organization. Filter by client, status, or date range. Includes summary totals.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| apiKey | No | ||
| cursor | No | ||
| dateTo | No | ||
| estado | No | ||
| orgSlug | Yes | ||
| clientId | No | ||
| dateFrom | No |
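The cursor and limit parameters suggest standard pagination, though the description never says so. A sketch under that assumption, using a hypothetical `call_tool` transport and a fabricated `next_cursor` response field:

```python
def list_all_cobros(call_tool, org_slug, page_size=50):
    """Drain a cursor-paginated listing. The cursor/next_cursor contract
    is assumed, not documented by the tool description."""
    items, cursor = [], None
    while True:
        args = {"orgSlug": org_slug, "limit": page_size}
        if cursor:
            args["cursor"] = cursor
        page = call_tool("finance_list_cobros", args)
        items.extend(page.get("items", []))
        cursor = page.get("next_cursor")
        if not cursor:
            return items

# Fake two-page transport, just to exercise the loop.
def fake_call(name, args):
    if args.get("cursor") is None:
        return {"items": [1, 2], "next_cursor": "p2"}
    return {"items": [3], "next_cursor": None}

result = list_all_cobros(fake_call, "demo-org")
```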
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully notes the presence of 'summary totals' in the response, which compensates partially for the missing output schema. However, it fails to describe pagination behavior despite cursor and limit parameters being present, and omits any mention of rate limits or permission requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient at roughly 16 words across three sentences. Information is front-loaded with the core action first, followed by filtering capabilities and output characteristics. No redundancy or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a list operation with 8 parameters and no output schema. Covers the primary filtering dimensions and mentions aggregate output features, but leaves significant gaps regarding pagination workflow, parameter validation rules, and the specific composition of the 'summary totals'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description partially compensates by mapping functional concepts (client, status, date range) to the parameter set. However, it leaves critical parameters unexplained: 'apiKey' (purpose/format), 'cursor' (pagination mechanism), and 'limit' (pagination size). It also fails to clarify valid values for 'estado' or date string formats.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (List) and resource (charges/cobros), clarifying this is a bulk retrieval operation. The parenthetical translation of 'cobros' helps distinguish it from sibling tools like finance_list_payments or finance_list_ventas, though it doesn't explicitly articulate the business difference between these financial entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes filtering capabilities (client, status, date range) but provides no explicit guidance on when to use this versus finance_get_cobro (single retrieval) or finance_list_payments. No prerequisites or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
finance_list_confirmations (grade A)
List pending charge confirmations and their status. Shows cobros in pending_confirmation state that await client verification. Filter by client or confirmation status (pending, confirmed, disputed, auto_confirmed).
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| status | No | ||
| orgSlug | Yes | ||
| clientId | No |
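The four status values named in the description can guard the filter argument. The enum spellings below are taken verbatim from the description and may not match the server's actual wire values:

```python
from typing import Optional

# The four states listed in the tool description above.
CONFIRMATION_STATUSES = {"pending", "confirmed", "disputed", "auto_confirmed"}

def confirmations_filter(org_slug: str, status: Optional[str] = None) -> dict:
    """Build finance_list_confirmations arguments, rejecting status
    values outside the documented set."""
    args = {"orgSlug": org_slug}
    if status is not None:
        if status not in CONFIRMATION_STATUSES:
            raise ValueError(f"unknown status: {status}")
        args["status"] = status
    return args

args = confirmations_filter("demo-org", "disputed")
```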
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It clarifies the scope (pending_confirmation state, four possible status values) and implies read-only behavior via 'List', but omits mention of pagination, return format, or authorization requirements beyond the apiKey parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences with zero redundancy: purpose declaration, domain scoping, and filter capabilities. Information is front-loaded and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero schema coverage and no output schema, the description adequately covers the tool's domain purpose but remains incomplete regarding parameter specifics (especially orgSlug) and return behavior (absent output schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, requiring the description to fully compensate. The text explains the 'status' parameter (listing all four enum values) and implies 'clientId' ('Filter by client'), but fails to document the required 'orgSlug' or the 'apiKey' parameter, leaving critical inputs undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists 'pending charge confirmations and their status' (specific verb + resource). It adds domain context by specifying 'cobros in pending_confirmation state' and mentions the exact filterable states. However, it could better distinguish from sibling 'finance_list_cobros' by explicitly contrasting general listing vs confirmation-specific listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage contexts ('Shows cobros...that await client verification'), suggesting when to use it, but lacks explicit guidance on when to prefer this over 'finance_list_cobros' or prerequisites like needing valid client/org identifiers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
finance_list_payments (grade C)
List payments received with filters. Includes summary by payment type.
| Name | Required | Description | Default |
|---|---|---|---|
| tipo | No | ||
| limit | No | ||
| apiKey | No | ||
| cursor | No | ||
| dateTo | No | ||
| orgSlug | Yes | ||
| clientId | No | ||
| dateFrom | No |
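Since the tool does not document its date format, ISO-8601 strings are a guess. A sketch building a last-N-days filter under that assumption (date pinned for reproducibility):

```python
from datetime import date, timedelta

def payments_last_n_days(org_slug, days):
    """Build a finance_list_payments date-range filter. ISO-8601 date
    strings are an assumption; the tool documents no date format."""
    today = date(2024, 5, 20)  # pinned for reproducibility; use date.today() in practice
    return {
        "orgSlug": org_slug,
        "dateFrom": (today - timedelta(days=days)).isoformat(),
        "dateTo": today.isoformat(),
    }

args = payments_last_n_days("demo-org", 30)
```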
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It adds value by noting the output 'includes summary by payment type,' but fails to indicate read-only safety, pagination behavior (despite cursor/limit params), or date range constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at two sentences with the core action front-loaded. However, given the complexity (8 undocumented parameters), this brevity crosses into underspecification rather than efficient communication.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a financial tool with 8 parameters and zero schema documentation. While it mentions the payment type summary, it omits critical context: pagination mechanics, date range formats, required orgSlug, and the apiKey authentication pattern.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage across 8 parameters. The description mentions 'with filters' generically but fails to compensate by explaining specific filter semantics (dateFrom/dateTo range, clientId filtering, 'tipo' values) or pagination (cursor/limit). Only the 'tipo' parameter meaning is weakly implied via 'payment type.'
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool lists 'payments received' (specific verb + resource) and mentions the 'summary by payment type' feature. However, it fails to differentiate from siblings like 'finance_list_cobros' (collections) or 'finance_list_ventas' (sales), which could confuse the agent in this financial domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus the numerous sibling finance tools (e.g., finance_list_cobros, finance_register_payment, finance_aging). Does not mention the required 'orgSlug' parameter or prerequisites for filtering.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
finance_list_ventas (grade C)
List sales (ventas) for an organization. Filter by client, service, provider, or status.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| apiKey | No | ||
| cursor | No | ||
| dateTo | No | ||
| estado | No | ||
| orgSlug | Yes | ||
| clientId | No | ||
| dateFrom | No | ||
| servicioId | No | ||
| proveedorId | No |
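With this many optional filters, it helps to send only the ones deliberately set. A small composition helper; parameter names come from the table, values are placeholders:

```python
def ventas_filter(org_slug, **optional):
    """Compose finance_list_ventas arguments, omitting unset filters so
    only deliberate constraints reach the server."""
    args = {"orgSlug": org_slug}
    args.update({k: v for k, v in optional.items() if v is not None})
    return args

# estado is left unset here and is therefore dropped from the payload.
args = ventas_filter("demo-org", clientId="cli_001", estado=None, servicioId="srv_01")
```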
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but fails to disclose critical behavioral traits: it doesn't mention pagination (despite cursor/limit parameters), date range filtering (dateFrom/dateTo), or the return format. While 'List' implies read-only, it doesn't confirm safety or describe the 'estado' values.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste: the first establishes purpose and scope, the second lists key filtering capabilities. It is appropriately front-loaded and sized for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 10 parameters with zero schema descriptions, no annotations, and no output schema, the description is insufficient. It omits pagination behavior, date range mechanics, authentication requirements (apiKey), and return structure, leaving significant gaps the AI must infer from parameter names alone.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. It implicitly maps four filter concepts (client, service, provider, status) to parameters but leaves six parameters undocumented (limit, cursor, dateTo, dateFrom, apiKey, orgSlug) and provides no format guidance for filter values like 'estado'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'List sales (ventas) for an organization' with a specific verb and resource. The parenthetical '(ventas)' helps distinguish from sibling tools like finance_list_cobros, though it doesn't explicitly clarify the conceptual difference between sales records and collections.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions available filters ('Filter by client, service, provider, or status') but provides no guidance on when to use this tool versus similar finance_list_* siblings like finance_list_cobros or finance_list_payments, nor does it mention prerequisites like the required orgSlug.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
finance_register_payment (grade A)
Register a manual payment against an existing charge (cobro). Updates cobro status automatically. Requires confirm: true.
| Name | Required | Description | Default |
|---|---|---|---|
| tipo | Yes | ||
| fecha | No | ||
| monto | Yes | ||
| apiKey | No | ||
| cobroId | Yes | ||
| confirm | Yes | ||
| orgSlug | Yes | ||
| descripcion | No | ||
| idempotencyKey | No |
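A hedged argument builder for this mutation. The positive-amount check is purely defensive, since the tool documents no validation rules, and the `tipo` value is a placeholder (valid payment types are undocumented):

```python
def payment_args(org_slug, cobro_id, monto, tipo):
    """Assemble finance_register_payment arguments. The positive-amount
    check is defensive; the tool documents no validation rules."""
    if monto <= 0:
        raise ValueError("monto must be positive")
    return {
        "orgSlug": org_slug,
        "cobroId": cobro_id,
        "monto": monto,
        "tipo": tipo,     # payment type; valid values are undocumented
        "confirm": True,  # required by the description
    }

args = payment_args("demo-org", "cob_42", 75.0, "efectivo")
```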
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable side-effect disclosure ('Updates cobro status automatically') and operational constraint (confirm requirement). However, with no annotations, it omits critical financial mutation details: idempotency behavior (despite idempotencyKey param), reversibility, and failure modes when cobroId doesn't exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste. Front-loaded with the core action, followed by side effects, then operational requirements. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a 9-parameter financial mutation tool with no annotations or output schema. Missing: parameter documentation (7/9 undocumented), return value description, error handling, and enum semantics for 'tipo' despite their business impact.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fails to compensate adequately. It explicitly explains only the 'confirm' parameter and implies 'cobroId'. Seven parameters (orgSlug, monto, tipo, fecha, apiKey, descripcion, idempotencyKey) remain completely undocumented with no semantic guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific action: 'Register a manual payment against an existing charge (cobro)'. Verb (Register) + resource (manual payment) + scope (against existing charge) clearly distinguishes from siblings like finance_create_cobro (creates charges) and finance_list_payments (reads).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides one usage constraint ('Requires confirm: true') and implies context (manual payments), but lacks explicit when-to-use guidance versus alternatives like automated payment flows or when to use finance_create_cobro instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
finance_send_confirmations (grade A)
Send pending confirmation digest to clients. Groups all pending_confirmation charges by client and sends a single message per client via WhatsApp or email. Creates confirmation tokens and sets a grace period for auto-confirmation. Requires confirm: true.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| channel | No | ||
| confirm | Yes | ||
| orgSlug | Yes | ||
| clientId | No |
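The description names two delivery channels but not their wire spellings; this sketch assumes lowercase `whatsapp` and `email`, and treats `clientId` as an optional restriction to a single client:

```python
from typing import Optional

def send_confirmations_args(org_slug: str, channel: str = "email",
                            client_id: Optional[str] = None) -> dict:
    """Build finance_send_confirmations arguments. Channel spellings
    are assumed; the description only says 'via WhatsApp or email'."""
    if channel not in {"whatsapp", "email"}:
        raise ValueError(f"unsupported channel: {channel}")
    args = {"orgSlug": org_slug, "channel": channel, "confirm": True}
    if client_id is not None:
        args["clientId"] = client_id  # limit the digest to one client
    return args

args = send_confirmations_args("demo-org", "whatsapp")
```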
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Excellently discloses side effects ('Creates confirmation tokens', 'sets a grace period'), aggregation logic ('Groups all... by client'), and delivery channels ('WhatsApp or email'). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, zero waste. Front-loaded with main purpose, followed by mechanism, side effects, and safety requirement. Every sentence earns its place with dense operational information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 0% schema coverage and no output schema, the description comprehensively covers the business logic (batching, tokens, grace periods, auto-confirmation). Minor deduction for undocumented organizational parameters (orgSlug, clientId), though the core operational contract is fully described.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, requiring description to compensate. Successfully documents critical safety parameter ('Requires confirm: true') and enum values ('via WhatsApp or email' for channel parameter). However, fails to explain orgSlug (organization scoping), clientId (optional client filtering), or apiKey, leaving significant gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Send'), clear resource ('pending confirmation digest'), and mechanism ('Groups all pending_confirmation charges by client... via WhatsApp or email'). Distinct from sibling finance_list_confirmations via the action verb 'send' versus 'list'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical prerequisite ('Requires confirm: true') and explains batch behavior ('single message per client'), implying when to use (bulk operations). Lacks explicit 'when not to use' or named alternatives, but the batching logic provides clear contextual guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lifecycle_get_state (A)
Get the current lifecycle state of a session, including available transitions and state history. Returns current_state, available_transitions, verification_deadline (when state=delivered), and recent transition history with from/to/at/by/method fields. Requires X-Org-Api-Key.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
| session_id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing authentication requirements ('Requires X-Org-Api-Key') and detailed return structure. However, it doesn't mention potential errors, rate limits, or whether this is a read-only operation (though 'Get' implies it).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently cover the tool's purpose, return values, and authentication requirement. The first sentence could be slightly more front-loaded, but overall there's minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read operation with no output schema, the description provides good detail about return values (current_state, available_transitions, etc.) and authentication needs. However, without annotations and with undocumented parameters, there are gaps in behavioral context and parameter understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description doesn't explain any of the three parameters (apiKey, orgSlug, session_id), leaving their purpose and format unspecified. Baseline 3 is appropriate as the description adds no parameter information beyond what the bare schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('current lifecycle state of a session'), specifying what information is retrieved. It distinguishes itself from sibling tools like 'lifecycle_transition' by focusing on state retrieval rather than state changes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing session state information, but provides no explicit guidance on when to use this tool versus alternatives like 'booking_get' or 'session-related tools'. No exclusions or prerequisites beyond the API key requirement are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lifecycle_transition (A)
Execute a state transition on a session. Accepts either to_state (target state name per Servicialo spec: confirmed, in_progress, completed, delivered, verified, documented, cancelled, no_show) or action (semantic verb: confirm, start, complete, deliver, verify, document, cancel, no_show). When to_state=delivered, delivery_type is required. When to_state=no_show, no_show_type is required. Returns transition record with from, to, at, by, method fields. Requires X-Org-Api-Key.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | | |
| action | No | | |
| apiKey | No | | |
| orgSlug | Yes | | |
| evidence | No | | |
| to_state | No | | |
| session_id | Yes | | |
| no_show_type | No | | |
| delivery_type | No | | |
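The conditional requirements in the description (delivery_type when to_state=delivered, no_show_type when to_state=no_show, and a choice between to_state or action) can be checked client-side before issuing the call. A minimal Python sketch; the helper name and payload shape are illustrative, not part of the server spec, and treating to_state/action as mutually exclusive is an assumption based on "accepts either":

```python
# Sketch: build lifecycle_transition arguments, enforcing the conditional
# requirements stated in the tool description before the call is sent.
# Helper and payload shape are illustrative, not part of the server spec.

CONDITIONAL = {"delivered": "delivery_type", "no_show": "no_show_type"}

def build_transition_args(org_slug, session_id, to_state=None, action=None, **extra):
    # "Accepts either to_state ... or action" -- assume exactly one is given.
    if (to_state is None) == (action is None):
        raise ValueError("provide exactly one of to_state or action")
    if to_state is not None:
        required = CONDITIONAL.get(to_state)
        if required and required not in extra:
            raise ValueError(f"{required} is required when to_state={to_state}")
    args = {"orgSlug": org_slug, "session_id": session_id, **extra}
    args["to_state" if to_state else "action"] = to_state or action
    return args
```

Failing fast on a missing delivery_type or no_show_type spares the agent a round trip that the server would reject anyway.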
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it specifies the return format ('Returns transition record with from, to, at, by, method fields'), authentication requirements ('Requires X-Org-Api-Key'), and parameter dependencies. However, it doesn't mention potential side effects, error conditions, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three parts: the first states the core purpose, the second explains parameter logic and dependencies, the third covers returns and auth. Every sentence earns its place with critical information, no wasted words, and front-loaded essential details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex mutation tool with 9 parameters, 0% schema coverage, no annotations, and no output schema, the description does remarkably well by covering purpose, parameter logic, returns, and auth. However, it doesn't explain what happens during transitions (e.g., side effects, validation rules) or provide examples, leaving some gaps for a tool with this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates excellently by explaining the semantics of key parameters: it clarifies the relationship between to_state and action, lists valid values for both, and specifies required dependencies (delivery_type for delivered, no_show_type for no_show). This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('execute a state transition on a session') and resource ('session'), distinguishing it from sibling tools like lifecycle_get_state (which reads state) and booking_update_status (which updates booking status rather than session lifecycle). It specifies the exact operation with precise terminology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use specific parameters (to_state=delivered requires delivery_type, to_state=no_show requires no_show_type), but doesn't explicitly state when to use this tool versus alternatives like booking_update_status or when not to use it. It implies usage through parameter requirements rather than explicit guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
members_invite (B)
Invite a new member to the organization by email. Sends an invitation email. Requires confirm: true.
| Name | Required | Description | Default |
|---|---|---|---|
| role | Yes | | |
| email | Yes | | |
| apiKey | No | | |
| confirm | Yes | | |
| orgSlug | Yes | | |
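Since the schema documents none of the parameters, a defensive client can enforce the confirm: true requirement and the role enum (admin, provider, assistant, as cited in the evaluation) before sending anything. A minimal Python sketch; the helper is hypothetical:

```python
# Sketch: build members_invite arguments. The confirm flag is a hard
# requirement per the description ("Requires confirm: true"); the role
# enum values come from the evaluation text and are assumed accurate.

VALID_ROLES = {"admin", "provider", "assistant"}

def build_invite_args(org_slug, email, role, confirm=False):
    if not confirm:
        raise ValueError("members_invite requires confirm=True (it sends an email)")
    if role not in VALID_ROLES:
        raise ValueError(f"role must be one of {sorted(VALID_ROLES)}, got {role!r}")
    return {"orgSlug": org_slug, "email": email, "role": role, "confirm": True}
```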
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully notes the side effect ('Sends an invitation email') and the confirmation requirement. However, it omits critical behavioral details such as whether the operation is idempotent, what happens if the email already exists in the system, or what the return value indicates.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three short, front-loaded sentences with zero redundancy. Every sentence conveys essential information: the action, the side effect, and a critical requirement. It is an exemplar of efficient technical writing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (5 parameters including a 3-value enum, 0% schema coverage, no annotations, and no output schema), the description is incomplete. It fails to explain the role taxonomy (admin vs. provider vs. assistant), the purpose of orgSlug, or the expected outcomes/success indicators, leaving significant gaps in the agent's understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must compensate significantly but fails to do so adequately. While it references 'email' and 'confirm', it completely ignores three parameters: 'role' (including the enum values admin/provider/assistant), 'orgSlug', and 'apiKey'. Given that role and orgSlug are required fields with specific domain meanings, this is a significant gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (invite), resource (member), and method (by email). It effectively distinguishes from siblings like client_create or admin_create_organization by specifying 'member' and 'organization' context, though it does not explicitly differentiate from provider_create (which creates a provider profile vs. inviting a member with provider role).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a specific prerequisite ('Requires confirm: true'), which helps prevent invalid invocations. However, it lacks guidance on when to use this versus alternatives (e.g., when to invite a member vs. creating a client or provider directly) and does not mention prerequisites like admin permissions or existing organization requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
members_list (B)
List members of an organization with their roles and status.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context about return data (roles and status) since no output schema exists, but fails to disclose other behavioral traits like pagination behavior, rate limits, or the read-only nature of the operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that efficiently communicates the core purpose. However, given the lack of annotations and output schema, the extreme brevity contributes to informational gaps rather than effective conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with two parameters, the description is minimally adequate. It partially compensates for the missing output schema by describing return fields (roles, status), but the complete absence of parameter documentation and usage context leaves significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description inadequately compensates for the undocumented parameters. While 'organization' implicitly references the required 'orgSlug' parameter, the 'apiKey' parameter is not addressed at all, leaving authentication requirements unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (List) and resource (members of an organization) and mentions specific data returned (roles and status). However, it does not explicitly differentiate from the sibling tool 'members_invite' (e.g., stating this is for viewing existing members vs. adding new ones).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'members_invite', nor does it mention prerequisites such as administrator permissions or when pagination might be needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nps_get_summary (B)
Get NPS summary for the organization: score, trend, promoter/passive/detractor counts. Use to answer questions about customer satisfaction.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | | |
| apiKey | No | | |
| orgSlug | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, yet description fails to disclose behavioral traits like read-only safety, idempotency, or rate limits beyond implying data retrieval via 'Get'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with action and return values; second sentence provides usage context without verbosity, though parameter details would improve structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately compensates for missing output schema by listing return values (score, trend, counts), but incomplete due to lack of parameter semantics and behavioral constraints given moderate complexity (3 parameters).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Critical gap: with 0% schema description coverage, the description omits explanations for all three parameters (orgSlug, days, apiKey), leaving agents unaware of what 'days' represents or valid orgSlug formats.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' with resource 'NPS summary' and clear differentiation from siblings like 'org_summary' via explicit mention of promoter/passive/detractor counts and NPS-specific metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage context ('Use to answer questions about customer satisfaction') but lacks explicit when-not-to-use guidance or comparison to alternatives like 'org_summary' or 'report_dashboard'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
org_summary (A)
Compact organization overview (~500 tokens). Returns services, providers, schedules, active features, key counts, and an onboarding_status checklist showing what is configured vs missing (services, providers, availability, public agenda). Use as first call to orient yourself — cheaper than report_dashboard. If onboarding_status.ready is false, follow the missing steps before booking.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
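The description's guidance ("If onboarding_status.ready is false, follow the missing steps before booking") translates to a simple readiness check on the result. A Python sketch under the assumption that the checklist items come back as boolean fields named after the four areas listed; the real keys may differ:

```python
# Sketch: gate booking flows on org_summary's onboarding_status checklist.
# Field names (ready, services, providers, availability, public_agenda)
# are assumed from the description's checklist, not confirmed by a schema.

CHECKLIST = ("services", "providers", "availability", "public_agenda")

def ready_to_book(summary):
    status = summary.get("onboarding_status", {})
    if status.get("ready"):
        return True, []
    # Report which checklist items are still missing so the agent can
    # follow the setup steps before attempting a booking.
    return False, [item for item in CHECKLIST if not status.get(item)]
```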
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses the ~500 token size constraint and cost efficiency relative to alternatives; no annotations exist to contradict these claims.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Compact at four short sentences; front-loaded with the overview definition and immediately followed by usage guidance with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers return values (compensating for missing output schema) and cost context; only missing parameter explanations which prevents a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fails to compensate by explaining what orgSlug represents or when apiKey is required, offering no parameter guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it provides a 'Compact organization overview' and specifically lists returned data (services, providers, schedules, features, counts), while differentiating from report_dashboard.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly prescribes 'Use as first call to orient yourself' and provides clear cost comparison against sibling tool report_dashboard, guiding when to use this cheaper alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
org_update (A)
Update organization profile fields: name, description, logo URL, or vertical. Only provided fields are updated.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | | |
| apiKey | No | | |
| logoUrl | No | | |
| orgSlug | Yes | | |
| vertical | No | | |
| description | No | | |
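The "Only provided fields are updated" contract is PATCH semantics: the client should send only the fields it intends to change. A minimal sketch (the helper name is illustrative; the field names come from the parameter table):

```python
# Sketch: build a partial org_update payload. Per the description only
# provided fields are updated, so omitted fields are simply left out of
# the call rather than sent as null.

UPDATABLE = {"name", "description", "logoUrl", "vertical"}

def build_org_update_args(org_slug, **fields):
    unknown = set(fields) - UPDATABLE
    if unknown:
        raise ValueError(f"not updatable via org_update: {sorted(unknown)}")
    return {"orgSlug": org_slug, **fields}
```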
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It successfully communicates partial-update behavior ('Only provided fields are updated'), but omits critical mutation details: null handling for nullable fields (logoUrl, vertical, description), return value structure, idempotency guarantees, or error conditions (e.g., invalid orgSlug).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero redundancy. Front-loads the field list and immediately follows with the critical partial-update constraint. Every word earns its place; length is appropriate for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters, 0% schema coverage, and no output schema, the description provides minimum viable guidance for invocation but leaves significant gaps. Missing: identifier semantics (orgSlug), authentication parameter purpose (apiKey), and return value specification. Sufficient for basic usage but incomplete for a production mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, requiring the description to compensate. It documents 4 of 6 parameters (name, description, logo URL, vertical) but fails to explain 'apiKey' (ambiguous purpose) or 'orgSlug' (the required target identifier). Adequate but incomplete compensation for the undocumented schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Update) and target resource (organization profile fields) with enumerated field examples. Distinguishes from sibling 'admin_create_organization' via 'Update' vs 'Create' semantics and from 'client_update'/'provider_update' by specifying 'organization', though could explicitly clarify this modifies existing records rather than creating them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Only provided fields are updated' provides important behavioral guidance (PATCH semantics), implying the tool should be used for partial modifications. However, it lacks explicit when-to-use guidance versus alternatives like 'admin_create_organization' or prerequisites like requiring an existing orgSlug.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
payroll_get_summary (B)
Get payroll summary for a period: total per provider, total cost, pending approvals.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
| periodo | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden; it discloses return structure (behavioral trait) but omits auth context (apiKey purpose), rate limits, or read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action and return values; no fluff or redundant information, appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers return values (compensating for missing output schema) but insufficient given the lack of schema descriptions and annotations—organization context (orgSlug) remains unclear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage; description only implicitly covers 'periodo' via 'for a period', leaving 'orgSlug' and 'apiKey' semantically unexplained despite the high compensation burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (get payroll summary), target resource (period), and return contents (totals per provider, cost, pending approvals), though could better differentiate from sibling 'payroll_list_records'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this aggregate summary tool versus the sibling 'payroll_list_records' detail view, nor any exclusions or alternatives mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
payroll_list_records (C)
List payroll records for an organization. Filter by period, provider, or status.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| apiKey | No | | |
| status | No | | |
| orgSlug | Yes | | |
| periodo | No | | |
| providerId | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Mentions filtering capability but omits pagination behavior (limit param exists), sorting, or return structure despite zero annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Brief and front-loaded, but arguably too terse given the complete lack of schema descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Insufficient for the complexity: 6 undocumented parameters, no output schema, and no explanation of filter values or pagination patterns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Explains 3 of 6 parameters (periodo, providerId, status via 'period, provider, or status'), partially compensating for 0% schema coverage but leaving apiKey, limit, and orgSlug unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it lists payroll records and mentions filtering, but fails to differentiate from sibling 'payroll_get_summary'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus payroll_get_summary or other alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
provider_create (C)
Create a new provider in the organization. Links or creates a Person record by email.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | | |
| email | Yes | | |
| phone | No | | |
| apiKey | No | | |
| orgSlug | Yes | | |
| lastName | Yes | | |
| isInternal | No | | |
| serviceIds | No | | |
| idempotencyKey | No | | |
| comunasCobertura | No | | |
| defaultCommission | No | | |
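The table exposes an idempotencyKey parameter the description never explains. One defensive reading, since the server "links or creates a Person record by email", is to derive the key deterministically from org plus email so retries of the same logical create repeat the same key. This is a client-side convention sketched below, not documented server behavior:

```python
import hashlib

# Sketch: build provider_create arguments with a deterministic
# idempotency key derived from orgSlug + lowercased email. The key
# derivation is an assumed client-side convention; the server's
# idempotency semantics are undocumented.

def build_provider_create_args(org_slug, name, last_name, email, **optional):
    key = hashlib.sha256(f"{org_slug}:{email.lower()}".encode()).hexdigest()
    return {"orgSlug": org_slug, "name": name, "lastName": last_name,
            "email": email, "idempotencyKey": key, **optional}
```

Because the key depends only on org and email, a retried create after a timeout sends an identical idempotencyKey instead of risking a duplicate provider.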
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With zero annotations, the description must carry full behavioral burden. It discloses one important side effect (links/creates Person record), but fails to mention idempotency semantics (despite idempotencyKey param), conflict resolution, return value structure, or authorization requirements for this mutation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero filler. Every word earns its place—first establishes the core operation, second reveals critical side effect. Appropriately front-loaded for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for an 11-parameter creation operation with no output schema and no annotations. Missing explanation of complex parameters (commission structure, geographic coverage, service assignments) and lacks return value documentation. The Person record mention is insufficient compensation for the schema coverage gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, requiring the description to compensate. While it mentions email's role in Person linking and implies name fields, it completely ignores 8 other parameters including critical domain concepts: serviceIds (service linkage), comunasCobertura (coverage areas), defaultCommission (financial terms), and isInternal (provider classification).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States clear verb ('Create') and resource ('provider'), and adds domain context ('in the organization'). Distinguishes from update/get siblings implicitly, though could clarify provider vs service distinction given sibling 'service_create'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance or comparison with alternatives like 'provider_update' or 'service_assign_provider'. The Person record mention hints at email uniqueness requirements but doesn't specify when to use this vs linking existing providers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
provider_get (Grade: B)
Get complete details of a provider including services, schedule, and session stats.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
| providerId | Yes | | |
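The parameter table above maps directly onto an MCP `tools/call` request. A minimal sketch of that envelope, assuming hypothetical placeholder values for `orgSlug` and `providerId` (neither is a real identifier):

```python
import json

# Hypothetical MCP tools/call request for provider_get.
# "acme-clinic" and "prov_123" are placeholders, not real IDs.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "provider_get",
        "arguments": {
            "orgSlug": "acme-clinic",   # required
            "providerId": "prov_123",   # required
            # apiKey is optional here; the admin tools suggest it may
            # alternatively travel in an X-Org-Api-Key header.
        },
    },
}
print(json.dumps(request, indent=2))
```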
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully indicates the read-only nature via 'Get' and previews the return payload contents (services, schedule, stats), but omits error handling behavior, authorization requirements, or rate limiting details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single short sentence, front-loaded with the action verb. No filler words or redundant phrases; every word serves to describe the tool's function or return value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 3-parameter schema and lack of output schema, the description adequately compensates by enumerating the specific data returned (services, schedule, stats). However, the complete absence of parameter documentation prevents a higher score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, yet the description fails to compensate by explaining any of the three parameters (orgSlug, providerId, apiKey). While 'provider' in the description implies the providerId parameter, orgSlug and apiKey remain completely undocumented with no semantic hints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Get complete details') and resource ('provider'), and lists specific data returned ('services, schedule, and session stats'). However, it does not clarify the distinction from sibling tool 'provider_get_stats' despite both mentioning stats, creating potential ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like 'admin_list_providers' (bulk listing) or 'provider_get_stats' (dedicated stats endpoint). No prerequisites, error conditions, or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
provider_get_stats (Grade: B)
Get detailed performance metrics for a provider over a date range: sessions, occupancy, no-show rate, revenue.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| dateTo | No | | |
| orgSlug | Yes | | |
| dateFrom | No | | |
| providerId | Yes | | |
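Because the schema documents no formats, a caller has to make assumptions. A hedged sketch of the arguments, assuming ISO 8601 dates (the description never specifies a format) and guarding the two required fields client-side:

```python
import json

# Hypothetical arguments for provider_get_stats. The date format is an
# assumption (ISO 8601); the schema and description specify none.
arguments = {
    "orgSlug": "acme-clinic",   # required
    "providerId": "prov_123",   # required
    "dateFrom": "2024-01-01",   # optional range start (assumed format)
    "dateTo": "2024-01-31",     # optional range end (assumed format)
}

# Check the two required fields before calling, per the table above.
missing = [k for k in ("orgSlug", "providerId") if k not in arguments]
assert not missing, f"missing required parameters: {missing}"
print(json.dumps(arguments))
```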
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully enumerates the specific metrics returned (sessions, occupancy, no-show rate, revenue), which is valuable behavioral context. However, it lacks safety indicators (read-only status), date format specifications, or rate limiting information that would be essential for a tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with zero redundancy. It places the action verb first ('Get'), follows with the resource and scope, and efficiently lists the four specific metric categories as a colon-delimited set. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters, 0% schema coverage, no annotations, and no output schema, the description is incomplete. While it conceptually describes the return values (the four metrics), it fails to document parameter semantics, date formats, or pagination behavior that would be necessary for correct invocation without additional schema documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description partially compensates by implying semantic meaning for 4 of 5 parameters: 'date range' maps to dateFrom/dateTo, and 'for a provider' implies providerId and orgSlug context. However, it omits critical details like date string formats, the relationship between orgSlug and providerId, and the apiKey parameter, leaving significant gaps given the schema's lack of documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') with clear resource ('performance metrics for a provider') and scope ('over a date range'). It distinguishes from sibling 'provider_get' (which likely retrieves profile data) by specifying 'performance metrics' and enumerating specific metric types (sessions, occupancy, no-show rate, revenue).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'provider_get' (basic profile lookup) or the various 'report_*' siblings (report_revenue, report_occupancy, report_no_shows) that appear to cover similar metrics at different aggregation levels. No prerequisites or contextual triggers are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
provider_update (Grade: C)
Update provider data: status, commission, coverage areas, permissions.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | | |
| phone | No | | |
| apiKey | No | | |
| orgSlug | Yes | | |
| isActive | No | | |
| lastName | No | | |
| providerId | Yes | | |
| comunasCobertura | No | | |
| defaultCommission | No | | |
| canManageOwnServices | No | | |
| canManageOwnAvailability | No | | |
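Since the description does not say whether the server applies partial or full updates, the safe client-side assumption is to send only the fields being changed. A sketch with placeholder values; the commission-as-fraction interpretation is an assumption, not documented behavior:

```python
# Hypothetical partial update: only the fields being changed are sent,
# plus the two required identifiers. Whether the server merges or
# replaces is undocumented, so minimal payloads are the safe choice.
patch = {
    "orgSlug": "acme-clinic",    # required identifier
    "providerId": "prov_123",    # required identifier
    "isActive": False,           # deactivate the provider
    "defaultCommission": 0.15,   # assumed fraction, not a percentage
}

# The optional fields the schema accepts, per the table above.
updatable = {
    "name", "phone", "apiKey", "isActive", "lastName",
    "comunasCobertura", "defaultCommission",
    "canManageOwnServices", "canManageOwnAvailability",
}
unknown = set(patch) - updatable - {"orgSlug", "providerId"}
assert not unknown, f"unexpected fields: {unknown}"
```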
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full responsibility for behavioral disclosure. It maps some parameters to concepts (status, commission, etc.) but fails to disclose critical mutation behaviors: update semantics (partial vs full), idempotency, error handling when providers don't exist, or side effects. This is inadequate for a write operation with 11 parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero redundancy. It front-loads the action ('Update provider data') and follows with a colon-delimited list of modifiable fields. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 11-parameter mutation tool with no output schema and no annotations, the description is insufficient. It lacks required-parameter identification, a return value description, and error scenarios, and omits nearly half the updatable fields present in the schema. The minimalism that aids conciseness harms completeness here.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. It conceptually maps 4 categories to 5 parameters (isActive, defaultCommission, comunasCobertura, canManageOwnServices, canManageOwnAvailability), but omits the other 6: name, phone, apiKey, lastName, and, critically, the required identifiers orgSlug and providerId. Partial compensation achieved.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool updates provider data and lists specific updatable aspects (status, commission, coverage areas, permissions). However, it does not explicitly differentiate from sibling tools like provider_create or provider_get within the description text itself, relying solely on the verb 'Update' to distinguish intent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., provider_create for new providers), no prerequisites (e.g., needing an existing providerId), and no exclusions or error conditions. It is purely a functional statement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
public_availability_get_slots (Grade: A)
Query available time slots for public booking. Does NOT require an API key. Returns slots grouped by service from the organization's public agenda. Provider details are hidden — the system auto-assigns at booking time. Use after public_service_list to find bookable times.
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | | |
| dateTo | No | | |
| orgSlug | Yes | | |
| dateFrom | No | | |
| timezone | No | | |
| serviceId | No | | |
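A hedged sketch of a public slot query. All values are placeholders; the IANA timezone name and ISO 8601 dates are assumptions, since the schema documents neither format:

```python
# Hypothetical public slot query. No apiKey is needed: this is the
# unauthenticated public endpoint. Formats below are assumptions.
query = {
    "orgSlug": "acme-clinic",         # required
    "serviceId": "svc_massage_60",    # optional: filter to one service
    "dateFrom": "2024-03-01",         # optional range start (assumed ISO 8601)
    "dateTo": "2024-03-07",           # optional range end (assumed ISO 8601)
    "timezone": "America/Santiago",   # optional (assumed IANA name)
}
assert "apiKey" not in query  # public endpoint: no credential by design
```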
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Strong disclosure without annotations: reveals 'Does NOT require an API key' (auth), 'Provider details are hidden' (response filtering), and 'system auto-assigns at booking time' (business logic). Deducted one point for missing rate limits or pagination details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences, zero waste. Front-loaded with purpose, followed by auth requirements, behavioral specifics, and workflow guidance. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Minimum viable. Strong on public API context (auth, auto-assignment, workflow) but inadequate for parameter documentation given 6 undocumented params and no output schema. Sufficient to invoke but requires guessing parameter formats.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Critical gap: With 0% schema description coverage across 6 parameters, the description fails to compensate. While 'organization's public agenda' hints at orgSlug and 'grouped by service' hints at serviceId, date formats, timezone standards, and date range logic are completely undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent clarity: 'Query available time slots for public booking' provides specific verb (Query), resource (time slots), and scope (public). Distinguishes from sibling 'availability_get_slots' by emphasizing the public/unauthenticated nature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit workflow guidance: 'Use after public_service_list to find bookable times' provides clear sequencing. 'Does NOT require an API key' distinguishes this from admin/authenticated alternatives like 'availability_get_slots'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
public_booking_cancel (Grade: A)
Cancel a public booking using the bookingToken. Only works for bookings in pending_confirmation, scheduled, or confirmed status. Optionally include a reason. Does NOT require an API key. The booking token scopes access to a single booking.
| Name | Required | Description | Default |
|---|---|---|---|
| reason | No | | |
| orgSlug | Yes | | |
| bookingToken | Yes | | |
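A sketch of a cancel request, with the token-as-credential model the description relies on. The token value is a placeholder, and the status set is taken directly from the description:

```python
# Hypothetical cancel request. The bookingToken was returned by
# public_booking_create and scopes access to exactly one booking.
cancel_args = {
    "orgSlug": "acme-clinic",              # required
    "bookingToken": "bk_tok_PLACEHOLDER",  # required credential
    "reason": "Client requested a different day",  # optional
}

# Per the description, cancellation only succeeds in these states:
cancellable = {"pending_confirmation", "scheduled", "confirmed"}
```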
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, yet description discloses critical behavioral constraints: status preconditions, token scoping ('scopes access to a single booking'), and auth model. Could clarify side effects like notifications or permanence.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences with zero waste: action plus token, status constraint, optional reason, auth model, security scope. Front-loaded with the core verb and resource. Each sentence provides distinct operational guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Robust coverage for a parameter-light mutation tool without annotations or output schema. Addresses public vs admin distinction (critical given sibling 'booking_cancel'), status workflow, and token-based security model.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Compensates effectively for 0% schema coverage by explaining bookingToken (the primary access mechanism) and reason (optional context), and by implying org context via 'public booking'. The orgSlug parameter is not explicitly named but is implicit in the public endpoint pattern.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb (Cancel) + resource (public booking) + key mechanism (bookingToken). Distinguishes from sibling 'booking_cancel' by explicitly referencing public booking flow and token-based access.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit status constraints (pending_confirmation, scheduled, confirmed only) and clear authentication differentiation ('Does NOT require an API key' vs admin tools). Names the specific alternative access pattern.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
public_booking_confirm (Grade: A)
Confirm a pending public booking using the confirmationToken returned by public_booking_create. Advances the booking from pending_confirmation to scheduled. The token expires after 30 minutes. Does NOT require an API key. Rate-limited.
| Name | Required | Description | Default |
|---|---|---|---|
| orgSlug | Yes | | |
| confirmationToken | Yes | | |
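The 30-minute token expiry is the one hard constraint the description states, so a client can check it locally before issuing a doomed request. A sketch with a placeholder issuance timestamp:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical confirm call. The confirmationToken comes from
# public_booking_create and expires 30 minutes after issuance.
issued_at = datetime.now(timezone.utc)   # placeholder: real issuance time
token_ttl = timedelta(minutes=30)        # from the tool description

confirm_args = {
    "orgSlug": "acme-clinic",                     # required
    "confirmationToken": "conf_tok_PLACEHOLDER",  # required
}

# Skip the call entirely if the token has already expired.
still_valid = datetime.now(timezone.utc) - issued_at < token_ttl
```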
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully documents: state transition (pending_confirmation → scheduled), temporal constraint (30-minute token expiration), authentication model (no API key required), and rate limiting. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured with zero waste. Front-loaded with the core action, followed by prerequisite, state change, expiration, auth requirements, and rate limit. Each sentence earns its place. Minor deduction for the slightly abrupt final fragment ('Rate-limited.').
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero annotations and zero schema description coverage, the description provides comprehensive behavioral context including auth, expiration, rate limits, and state transitions. Missing explicit orgSlug documentation and return value details (no output schema exists), but adequately complete for a confirmation operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. It successfully explains the confirmationToken parameter (its origin from public_booking_create and its 30-minute expiration), but provides no semantic information for orgSlug. Partial compensation achieved.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (confirm), resource (pending public booking), and workflow context (advances from pending_confirmation to scheduled). It explicitly references the prerequisite tool public_booking_create, distinguishing it from other booking operations like create, cancel, or reschedule.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (when you have a confirmationToken from public_booking_create) and the prerequisite workflow. It distinguishes this as a public endpoint (no API key required) versus admin booking tools, providing clear guidance on authentication requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
public_booking_create (Grade: A)
Create a public booking request. Does NOT require an API key, but DOES require: (1) requester identity — fullName plus at least email or phone, (2) submission context — channel and whether an agent assisted, (3) authorization.humanIntentConfirmed must be true. The booking is created as pending_confirmation — use public_booking_confirm with the returned confirmationToken to confirm. A bookingToken is also returned for future lifecycle management (cancel, reschedule). Rate-limited per IP+org. All requests are audited with semantic decision codes. Use public_service_list → public_availability_get_slots → public_booking_create → public_booking_confirm as the complete public booking flow.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | | |
| orgSlug | Yes | | |
| startAt | Yes | | |
| requester | Yes | | |
| serviceId | Yes | | |
| submission | Yes | | |
| authorization | Yes | | |
| idempotencyKey | No | | |
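The three nested objects are the hardest part of this call, and the schema documents none of their fields. A hedged sketch: the inner field names (`email`, `channel`, `agentAssisted`) and the `startAt` format are inferred from the description's wording, not from a published schema, except `fullName` and `humanIntentConfirmed`, which the description names explicitly:

```python
# Hypothetical create request. Nested field names are inferred from the
# description; only fullName and humanIntentConfirmed are named by it.
booking_args = {
    "orgSlug": "acme-clinic",
    "serviceId": "svc_massage_60",
    "startAt": "2024-03-05T15:00:00-03:00",  # assumed ISO 8601
    "requester": {
        "fullName": "Jane Doe",              # required by the description
        "email": "jane@example.com",         # at least email or phone
    },
    "submission": {
        "channel": "web",                    # assumed field name and value
        "agentAssisted": True,               # assumed field name
    },
    "authorization": {
        "humanIntentConfirmed": True,        # must be true, per description
    },
    "idempotencyKey": "retry-safe-key-001",  # optional: makes retries safe
}

# Mirror the description's stated requirements client-side.
assert booking_args["authorization"]["humanIntentConfirmed"] is True
requester = booking_args["requester"]
assert "fullName" in requester
assert "email" in requester or "phone" in requester
```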
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully discloses: authentication model (no API key, IP+org rate limiting), lifecycle state (created as pending_confirmation), return values (confirmationToken, bookingToken), audit behavior (semantic decision codes), and safety guardrails (humanIntentConfirmed requirement). Exceptional coverage of behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense and logically structured from purpose → requirements → lifecycle → limits → workflow. Uses parenthetical numbering for clarity. Every sentence conveys essential constraints or workflow information appropriate for a complex 8-parameter tool with zero schema documentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage given complexity: documents the multi-step public booking flow, explains output tokens despite no output schema, and covers all validation requirements given the lack of schema descriptions. Complete for an unauthenticated, stateful booking creation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description compensates by documenting complex validation rules for three critical nested objects: requester (fullName + email/phone), submission (channel + agent assistance), and authorization (humanIntentConfirmed). However, it omits semantics for top-level parameters like serviceId, startAt, notes, and idempotencyKey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb and resource 'Create a public booking request' and immediately distinguishes it from siblings via 'Does NOT require an API key' (contrasting with booking_create) and positions it within the public booking workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance (no API key needed vs alternatives), prerequisites (requester identity, submission context), and the complete workflow chain: public_service_list → public_availability_get_slots → public_booking_create → public_booking_confirm. Also explicitly names public_booking_confirm as the next step.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
public_booking_get (Grade: A)
Get details of a public booking using the bookingToken returned by public_booking_create. Returns status, scheduled time, service, and requester info. Does NOT require an API key — the booking token is the credential. Only returns public-safe data.
| Name | Required | Description | Default |
|---|---|---|---|
| orgSlug | Yes | | |
| bookingToken | Yes | | |
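A minimal sketch of the lookup. Since no output schema is published, the expected response fields below are taken loosely from the description's wording and are assumptions, not documented names:

```python
# Hypothetical lookup: the bookingToken is the sole credential,
# so possession of the token grants access to that one booking.
get_args = {
    "orgSlug": "acme-clinic",              # required
    "bookingToken": "bk_tok_PLACEHOLDER",  # required, acts as the credential
}

# Assumed response contents per the description ("status, scheduled
# time, service, and requester info"); exact field names are unknown.
expected_fields = {"status", "scheduledTime", "service", "requester"}
```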
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden and succeeds well: discloses credential mechanism ('booking token is the credential'), data scope ('Only returns public-safe data'), and return payload preview ('status, scheduled time, service, and requester info').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four precise sentences with zero waste: purpose + prerequisite, return payload, auth requirements, and data safety scope. Front-loaded with action and perfectly sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, description adequately covers behavioral gaps (auth, data sensitivity) and explains return content. Minor gap remains regarding 'orgSlug' semantics and explicit sibling differentiation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, description must compensate. It excellently contextualizes 'bookingToken' (source, purpose, credential role) but provides zero semantic information for 'orgSlug', leaving one parameter completely undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get details' with clear resource 'public booking'. Explicitly references sibling tool 'public_booking_create' as the token source, distinguishing this from the standard 'booking_get' which presumably requires API authentication.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Clearly indicates prerequisite (bookingToken from public_booking_create) and authentication model (no API key needed). Implicitly distinguishes from admin/internal 'booking_get' via emphasis on public credentials, though doesn't explicitly state when to prefer the admin alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
public_booking_reschedule (Grade: A)
Reschedule a public booking using the bookingToken. Cancels the original and creates a new pending_confirmation booking at the new time. Returns new confirmationToken and bookingToken. Only works for bookings in pending_confirmation, scheduled, or confirmed status. Does NOT require an API key.
| Name | Required | Description | Default |
|---|---|---|---|
| orgSlug | Yes | | |
| newStartAt | Yes | | |
| bookingToken | Yes | | |
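The cancel-and-recreate semantics have a practical consequence the description implies but a client must act on: the old tokens die with the original booking. A sketch with placeholder values; the response shape is assumed from the description, and the `newStartAt` format is an ISO 8601 assumption:

```python
# Hypothetical reschedule. Because the server cancels the original and
# creates a new pending_confirmation booking, the client must replace
# its stored tokens with the ones returned.
reschedule_args = {
    "orgSlug": "acme-clinic",
    "bookingToken": "bk_tok_OLD",               # booking being moved
    "newStartAt": "2024-03-06T10:00:00-03:00",  # assumed ISO 8601
}

# Placeholder response shape based on the description's wording:
response = {"confirmationToken": "conf_tok_NEW", "bookingToken": "bk_tok_NEW"}
current_token = response["bookingToken"]  # discard bk_tok_OLD
```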
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Excellently discloses: side effects (cancels original, creates new pending booking), return values (new confirmationToken and bookingToken), and operational constraint (no API key required). No contradictions with missing annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences, each earning its place: purpose, cancel-and-recreate mechanism, return values, status constraints, auth. Front-loaded with the core action, zero repetition, appropriate density for a mutation tool with behavioral complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent coverage given no output schema and no annotations: explains return tokens, workflow, status constraints, and auth. Minor gap on parameter formats (especially the datetime format for newStartAt and the purpose of orgSlug), but adequate for tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (no parameter descriptions). Description mentions 'bookingToken' explicitly and 'new time' implies newStartAt, but 'orgSlug' is completely unmentioned. No format guidance for newStartAt (ISO8601?). Partially compensates for schema deficiency but doesn't fully document all 3 parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb (reschedule) + resource (public booking) + key mechanism (cancels original, creates new pending_confirmation). The 'public' prefix and 'bookingToken' usage clearly distinguish from admin sibling 'booking_reschedule' which likely uses internal IDs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states status prerequisites ('Only works for bookings in pending_confirmation, scheduled, or confirmed status') and auth context ('Does NOT require an API key'), implicitly guiding toward use in public/unauthenticated contexts vs admin alternatives. Could explicitly name the admin alternative 'booking_reschedule' for clarity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
public_service_list (A)
List publicly bookable services for an organization. Does NOT require an API key. Returns only active, discoverable services with assigned providers. Use this as the first step in the public booking flow to show available services to end users or agents.
| Name | Required | Description | Default |
|---|---|---|---|
| orgSlug | Yes | | |
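Since the schema documents neither parameter format nor output shape, the following is a minimal sketch of what a call to this tool looks like using MCP's standard JSON-RPC `tools/call` framing. The `orgSlug` value is invented for illustration; real slugs are assigned when the organization is created.

```python
import json

# Hypothetical MCP tools/call request for public_service_list.
# "acme-wellness" is an invented orgSlug used only for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "public_service_list",
        "arguments": {
            "orgSlug": "acme-wellness",  # the only parameter; no apiKey needed
        },
    },
}

print(json.dumps(request, indent=2))
```

Per the description above, the response would list only active, discoverable services with assigned providers.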
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full behavioral burden. Effectively discloses authentication (no API key), filtering logic ('only active, discoverable services with assigned providers'), and return scope. Lacks detail on error conditions or rate limits, but covers primary behavioral traits.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: purpose statement, auth disclosure, return filtering, and usage guidance. Front-loaded with verb, logically ordered, every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The absence of an output schema is compensated by the description of the return values ('active, discoverable services with assigned providers'). This is a single-parameter tool with clear public-booking context; the only minor gap is the lack of parameter format details, given 0% schema coverage.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage for the single 'orgSlug' parameter. Description mentions 'for an organization' which loosely maps the parameter semantics, but fails to compensate fully by describing format, example values, or constraints for this required parameter.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('List'), resource ('publicly bookable services'), and scope ('for an organization'). Distinguishes from sibling 'service_list' via 'publicly' and explicit 'no API key' requirement, positioning it distinctly within the admin vs public tool ecosystem.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this as the first step in the public booking flow' providing clear workflow positioning. Critically notes 'Does NOT require an API key' which distinguishes usage from authenticated admin alternatives like service_list or admin_create_service.
reminders_get_config (A)
Get the full reminder/notification configuration for an organization. Returns detailed settings for each reminder type: bookingReminder (post-booking follow-up), sessionReminder24h (24h before), sessionReminder1h (1h before), paymentReminder (payment due), paymentOverdue (overdue payment), notificationFollowup (post-session NPS/follow-up), pendingConfirmation (auto-cancel unconfirmed). Each has enabled, timing, and frequency settings. More granular than settings_get reminders chapter.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and succeeds by enumerating all seven reminder types with clear parenthetical explanations of what each does, plus detailing the structure of returned settings (enabled, timing, frequency). Lacks only information on rate limits or caching behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with zero waste: first establishes purpose, second details the comprehensive return payload including all reminder types, third provides sibling differentiation. Every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description excellently compensates by detailing the seven reminder types and their settings structures. However, with no annotations and undocumented parameters, it slightly misses completeness on the input side.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fails to compensate adequately. While 'for an organization' implicitly suggests orgSlug, neither parameter is explicitly documented, and the optional apiKey parameter and its authentication role go entirely unmentioned.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Get') and resources ('full reminder/notification configuration') and explicitly differentiates from sibling tool settings_get by stating it is 'More granular than settings_get reminders chapter', clearly establishing its scope.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use this tool versus settings_get by noting it is more granular for reminders specifically. However, it does not mention the relationship to reminders_update_config or explicitly state when NOT to use this read-only getter.
reminders_update_config (A)
Update reminder/notification configuration for an organization. Partial update — only provided sections are changed. Sections: bookingReminder {enabled, daysAfter, maxReminders, interval}, sessionReminder24h {enabled, hoursBefore, sendTime}, sessionReminder1h {enabled, hoursBefore}, paymentReminder {enabled, daysAfter, maxReminders, interval}, paymentOverdue {enabled, daysOverdue, maxReminders, interval}, notificationFollowup {enabled, daysAfter, maxFollowups, interval}, pendingConfirmation {enabled, timeoutHours, autoConfirm}. Returns the full configuration after update.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
| paymentOverdue | No | | |
| bookingReminder | No | | |
| paymentReminder | No | | |
| sessionReminder1h | No | | |
| sessionReminder24h | No | | |
| pendingConfirmation | No | | |
| notificationFollowup | No | | |
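To make the partial-update semantics concrete, here is a hedged sketch of an arguments payload that touches only two sections. The section and field names come from the tool description; the value types (booleans, integer hours, an HH:MM send time) are assumptions, since the schema describes none of them, and the orgSlug is invented.

```python
import json

# Sketch of a partial reminders_update_config call. Only the sections
# present in the payload are changed; everything omitted is left as-is.
arguments = {
    "orgSlug": "acme-wellness",      # hypothetical slug
    "sessionReminder24h": {
        "enabled": True,
        "hoursBefore": 24,           # assumed integer
        "sendTime": "09:00",         # assumed HH:MM format
    },
    "pendingConfirmation": {
        "enabled": True,
        "timeoutHours": 48,          # assumed integer
        "autoConfirm": False,
    },
    # bookingReminder, paymentReminder, etc. are omitted and thus untouched.
}

request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "reminders_update_config", "arguments": arguments},
}
print(json.dumps(request, indent=2))
```

The description states the tool returns the full configuration after the update, so a client can verify the merge result from the response alone.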
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the partial update mutation style and states that it 'Returns the full configuration after update.' However, it omits important behavioral traits such as idempotency guarantees, error conditions for invalid orgSlug values, or whether changes take effect immediately versus requiring propagation time.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the primary purpose. The section enumeration is necessarily verbose given the schema's lack of descriptions, but remains scannable through consistent bracket notation. No sentences are wasted, though the density approaches the limits of readability.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex configuration tool with 9 parameters (7 being nested objects) and no output schema, the description adequately documents the input structure and discloses the return value behavior. It appropriately compensates for the rich schema complexity where the schema itself fails to provide descriptions, though it could benefit from noting error scenarios or validation constraints (e.g., maximum values for interval fields).
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage across 9 complex parameters with deeply nested objects, the description provides essential compensation by exhaustively documenting every section and its sub-fields (e.g., 'bookingReminder {enabled, daysAfter, maxReminders, interval}'). Without this inline documentation, the agent would have no semantic understanding of the nested configuration objects.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific action 'Update reminder/notification configuration for an organization,' clearly identifying the verb (update), resource (reminder/notification configuration), and scope (organization-level). It distinguishes from siblings like reminders_get_config, admin_toggle_discoverable, and settings_update by focusing specifically on notification timing and reminder logic.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides critical behavioral guidance with 'Partial update — only provided sections are changed,' clarifying the PATCH-like semantics. However, it lacks explicit guidance on when to use this versus alternatives like reminders_get_config (the likely read counterpart) or settings_update, and omits prerequisites such as required permissions or authentication context.
report_dashboard (B)
Executive summary of the organization: today's sessions, monthly metrics, revenue, pending charges, and alerts.
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | | |
| apiKey | No | | |
| orgSlug | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It partially succeeds by enumerating the data categories returned (sessions, metrics, revenue, etc.), which compensates for the missing output schema. However, it lacks operational details such as whether data is real-time or cached, error conditions, or required permissions.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core concept ('Executive summary') and follows with specific examples of included metrics. Every word earns its place; there is no fluff or redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description adequately compensates by detailing what data the report contains. However, the complete absence of parameter documentation (0% schema coverage) leaves significant gaps, preventing a higher score for a tool with three parameters.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage for its three parameters (date, apiKey, orgSlug). The description mentions 'of the organization' which loosely implies the orgSlug requirement, but provides no information about the date parameter (format, default behavior) or the apiKey parameter, failing to compensate for the undocumented schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as generating an executive summary and lists specific data points covered (today's sessions, monthly metrics, revenue, pending charges, alerts). This distinguishes it from narrower sibling tools like report_revenue or report_occupancy, though it doesn't explicitly differentiate from org_summary.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as 'org_summary' or the specific 'report_' siblings. It also fails to mention prerequisites like the required orgSlug or intended audience (executives vs. operational staff).
report_deuda_real (B)
Real-time report of clients with genuine outstanding debt. Excludes temporal payment mismatches (prepaid clients whose global balance is covered). Shows: client name, debt amount, periods with debt, last payment date, and collection status (active/inactive/never_paid). Use to answer "who actually owes money" questions.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'Real-time report' and the data fields shown, but does not disclose critical behavioral traits such as authentication needs (apiKey parameter), rate limits, pagination, or whether it's a read-only operation. For a reporting tool with no annotation coverage, this leaves significant gaps in understanding how the tool behaves.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: it starts with the core purpose, adds exclusions and details, and ends with usage guidance. Every sentence adds value without redundancy, making it efficient and well-structured for quick comprehension.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (reporting tool with 2 parameters, no output schema, and no annotations), the description is incomplete. It explains the report's content and purpose well, but omits parameter explanations, authentication details, and output format. Without annotations or output schema, the description should do more to cover these aspects for effective tool use.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning the schema provides no descriptions for the two parameters (apiKey and orgSlug). The tool description does not mention or explain these parameters at all, failing to compensate for the lack of schema documentation. This leaves the agent guessing about parameter purposes and formats.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Real-time report of clients with genuine outstanding debt' with specific exclusions ('Excludes temporal payment mismatches') and details what it shows (client name, debt amount, etc.). It distinguishes itself from potential siblings by focusing on 'genuine' debt and answering 'who actually owes money' questions, making it highly specific and differentiated.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'Use to answer "who actually owes money" questions.' It implies usage for genuine debt reporting, but does not explicitly state when not to use it or name alternatives among the many sibling tools (e.g., finance_aging, finance_client_balance). This gives good guidance but lacks explicit exclusions or comparisons.
report_no_shows (C)
Report no-show statistics for a period. Group by client, provider, service, or day.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| apiKey | No | | |
| dateTo | No | | |
| groupBy | No | | |
| orgSlug | Yes | | |
| dateFrom | No | | |
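Since the server uses the Streamable HTTP transport, a call to this tool is an HTTP POST carrying a JSON-RPC request. The sketch below assumes the standard MCP framing; the endpoint URL, orgSlug, date format (ISO 8601), and groupBy value are all placeholders or assumptions, as none are documented here.

```python
import json
import urllib.request

SERVER_URL = "https://example.com/mcp"  # placeholder, not the real endpoint

# Hypothetical report_no_shows call grouped by client, limited to 10 rows.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "report_no_shows",
        "arguments": {
            "orgSlug": "acme-wellness",  # hypothetical slug
            "dateFrom": "2024-01-01",    # assumed ISO 8601
            "dateTo": "2024-01-31",
            "groupBy": "client",         # per description: client/provider/service/day
            "limit": 10,
        },
    },
}

body = json.dumps(request).encode()
req = urllib.request.Request(
    SERVER_URL,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    },
)
# urllib.request.urlopen(req) would send it; left unsent so the sketch
# stays runnable without network access.
print(body.decode()[:80])
```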
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden but offers minimal behavioral context. It mentions grouping behavior and implies date-range filtering ('for a period'), but fails to disclose output format, pagination behavior (limit parameter exists), authentication requirements, or whether the operation is read-only.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with no redundancy. The first states the core action and resource; the second lists the grouping options. Information is front-loaded and every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters with 0% schema coverage, no annotations, and no output schema, the description is insufficient. It fails to document the required orgSlug identifier, date format expectations, or what data structure is returned, leaving critical gaps in the agent's understanding.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. While it implies date range parameters ('for a period') and lists groupBy options, it completely omits the required 'orgSlug' parameter and provides no context for 'apiKey' or 'limit'. This is a significant gap given the lack of schema documentation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool reports 'no-show statistics' and specifies the grouping dimensions (client, provider, service, day). It distinguishes from sibling report tools (report_revenue, report_occupancy) by specifying the 'no-show' domain, though it could clarify what constitutes a no-show.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists grouping options but provides no guidance on when to use this tool versus alternative report tools (report_dashboard, report_occupancy, report_revenue). No prerequisites, exclusions, or selection criteria are mentioned.
report_occupancy (C)
Calculate provider occupancy rates for a period. Group by provider, day, or week.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| dateTo | No | | |
| groupBy | No | | |
| orgSlug | Yes | | |
| dateFrom | No | | |
| providerId | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but discloses minimal behavioral traits. While 'Calculate' implies a read-only operation, it fails to explain what 'occupancy rate' means (percentage, decimal, utilization metric), output format, time range limits, or whether results include all providers or only active ones.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two-sentence structure is front-loaded and contains no wasted words. However, given the complete lack of schema descriptions and annotations, the brevity contributes to under-specification rather than efficient communication.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a 6-parameter reporting tool with 0% schema coverage and no output schema. The description omits the return value structure (crucial without output_schema), required vs optional parameter distinctions, and date format expectations. It compensates for none of the metadata gaps.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions grouping dimensions (provider/day/week) which adds semantic value for the 'groupBy' parameter. However, with 0% schema description coverage across 6 parameters, it fails to explain the required 'orgSlug', date range parameters ('dateFrom', 'dateTo'), optional 'providerId' filter, or 'apiKey' authentication needs.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool calculates 'provider occupancy rates for a period' with specific grouping options (provider/day/week). However, it does not differentiate from similar sibling tools like 'provider_get_stats' or other report_* functions that might overlap in functionality.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'report_revenue', 'provider_get_stats', or 'report_dashboard'. No mention of prerequisites (e.g., valid orgSlug) or typical use cases.
report_revenue (C)
Calculate revenue for a period grouped by day, week, month, service, or provider.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| dateTo | No | | |
| groupBy | No | | |
| orgSlug | Yes | | |
| dateFrom | No | | |
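For illustration, here is a hedged sketch of a quarterly revenue query grouped by month. The grouping options come from the tool description; the date format (ISO 8601) and the orgSlug value are assumptions, since the schema documents neither.

```python
import json

# Hypothetical report_revenue call: Q1 revenue, one row per month.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "report_revenue",
        "arguments": {
            "orgSlug": "acme-wellness",  # hypothetical slug
            "dateFrom": "2024-01-01",    # assumed ISO 8601
            "dateTo": "2024-03-31",
            "groupBy": "month",          # day/week/month/service/provider
        },
    },
}
print(json.dumps(request))
```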
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to deliver. It doesn't indicate whether this is a safe read-only operation, what date formats are accepted, how large date ranges are handled, or what structure the returned revenue data takes.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence with no redundant words. The core action ('Calculate revenue') and key capability ('grouped by...') are front-loaded efficiently.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a 5-parameter reporting tool with zero schema documentation and no output schema. The description omits return value structure, authentication requirements, time zone handling, and pagination behavior—critical gaps that force the agent to invoke the tool blindly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description effectively explains the groupBy parameter's enum values by listing the grouping options (day, week, month, service, provider), providing necessary context given 0% schema coverage. However, it completely omits semantics for dateFrom/dateTo formats, orgSlug identification, and the optional apiKey parameter.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool calculates revenue with specific grouping dimensions (day, week, month, service, provider), distinguishing it from sibling reporting tools like report_occupancy or report_no_shows. However, it doesn't explicitly position it relative to finance_list_payments or clarify whether this returns aggregated totals or time-series data.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like finance_list_ventas or report_dashboard. Missing critical prerequisites such as how to obtain the required orgSlug parameter or date format expectations for the period filtering.
report_sc_summary (C)
Breakdown of Servicio Coordinado (SC) events by month and resolver path (backfill, cac-native, live, compensalo). Use to validate SC coverage and monitor live SC resolution growth. Key metric: sc_live shows SCs resolved in production (not backfill).
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | | |
| orgSlug | Yes | | |
| periodo | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool provides a 'breakdown' and identifies a 'key metric' (sc_live), but doesn't describe the output format, whether it's read-only or mutative, any rate limits, authentication requirements beyond the apiKey parameter, or error conditions. For a reporting tool with zero annotation coverage, this leaves significant behavioral gaps.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with three sentences that each add value: the first defines the breakdown, the second states the use case, and the third highlights a key metric. It's front-loaded with the core purpose and wastes no words. However, the third sentence could be integrated more smoothly into the flow.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a reporting tool with 3 parameters, 0% schema coverage, no annotations, and no output schema, the description is incomplete. It explains what the tool does at a high level but fails to document parameters, output format, behavioral constraints, or how it differs from other reporting siblings. The description doesn't compensate for the lack of structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description doesn't mention any parameters at all, leaving all three parameters (apiKey, orgSlug, periodo) completely undocumented. While the tool name and description imply organizational and temporal context, no specific guidance is given for parameter usage, format, or meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Breakdown of Servicio Coordinado (SC) events by month and resolver path' and identifies the specific metrics involved (backfill, cac-native, live, compensalo). It distinguishes this from other reporting tools by focusing on SC coverage validation and live resolution growth monitoring. However, it doesn't explicitly differentiate from all sibling tools beyond the SC-specific focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context: 'Use to validate SC coverage and monitor live SC resolution growth.' This gives a general purpose but doesn't specify when to use this tool versus alternatives like 'report_dashboard' or 'report_occupancy,' nor does it mention prerequisites or exclusions. The guidance is functional but lacks explicit comparison with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scheduling_bookAInspect
Book a session (Servicialo spec). Returns confirmation_credential (opaque token, valid 30 min) and booking_id. Use scheduling_confirm with the credential to finalize. Does NOT require an API key — uses requester identity (fullName + email or phone). Accepts optional submission context for audit trail.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | ||
| orgSlug | Yes | ||
| datetime | Yes | ||
| requester | Yes | ||
| service_id | Yes | ||
| submission | No | ||
| provider_id | No | ||
| idempotencyKey | No |
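The two-step flow (book, then confirm with the 30-minute credential) can be sketched as a client-side argument builder. This is a hypothetical helper, not part of the server: the parameter names come from the table above, and the requester shape (fullName plus email or phone) is inferred from the description.

```python
# Hypothetical helper for scheduling_book; not part of the server.
def build_book_args(org_slug, service_id, datetime_iso, full_name,
                    email=None, phone=None, idempotency_key=None):
    """Assemble scheduling_book arguments. Per the description, the
    requester identity is fullName plus email or phone -- no API key."""
    if email is None and phone is None:
        raise ValueError("requester needs fullName plus email or phone")
    requester = {"fullName": full_name}
    if email is not None:
        requester["email"] = email
    if phone is not None:
        requester["phone"] = phone
    args = {
        "orgSlug": org_slug,
        "service_id": service_id,
        "datetime": datetime_iso,  # format undocumented; ISO 8601 assumed
        "requester": requester,
    }
    if idempotency_key is not None:
        args["idempotencyKey"] = idempotency_key
    return args
```

The confirmation_credential in the response would then be passed to scheduling_confirm before the 30-minute window closes.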
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing: it returns a confirmation_credential (opaque token, valid 30 min) and booking_id, requires a follow-up step with scheduling_confirm, uses requester identity instead of API key, and accepts optional submission context. It doesn't mention error conditions or rate limits, but covers key behavioral aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste. First sentence states purpose and returns. Second explains the confirmation flow. Third covers authentication and audit trail. Every sentence adds critical information, and it's appropriately front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 8 parameters, nested objects, no annotations, and no output schema, the description does well by explaining the two-step booking flow, authentication approach, and key return values. It misses details about error handling, rate limits, and some parameter purposes, but provides substantial context given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning for several parameters: 'requester' takes 'fullName + email or phone', and 'submission' provides optional context for the audit trail, while idempotencyKey is only implied by the booking flow. However, it leaves the remaining parameters (orgSlug, service_id, datetime, notes, provider_id) undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Book') and resource ('a session'), specifies it's for 'Servicialo spec', and distinguishes it from sibling tools by mentioning the need to use 'scheduling_confirm' to finalize. It's specific and differentiates from alternatives like 'booking_create' or 'public_booking_create'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool ('Book a session') and when to use an alternative ('Use scheduling_confirm with the credential to finalize'). It also provides prerequisites about authentication ('Does NOT require an API key — uses requester identity') and context for audit trail.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scheduling_cancelBInspect
Cancel a session (Servicialo spec). Applies cancellation policy based on time remaining before scheduled time. Requires confirm: true and X-Org-Api-Key.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| reason | No | ||
| confirm | Yes | ||
| orgSlug | Yes | ||
| session_id | Yes |
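The confirm-plus-API-key requirement can be captured in a small hypothetical helper (not part of the server); the argument names follow the table above, and sending confirm as the literal True mirrors the 'Requires confirm: true' rule.

```python
# Hypothetical helper for scheduling_cancel; not part of the server.
def build_cancel_args(org_slug, session_id, api_key, reason=None):
    """confirm is always sent as literal True; the server applies its own
    time-based cancellation policy, so no policy logic lives client-side."""
    args = {
        "orgSlug": org_slug,
        "session_id": session_id,
        "apiKey": api_key,  # maps to the X-Org-Api-Key requirement
        "confirm": True,
    }
    if reason is not None:
        args["reason"] = reason
    return args
```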
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it applies a cancellation policy based on time, requires a confirmation parameter, and needs an API key. However, it doesn't mention side effects like notifications or refunds.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the main action, followed by important behavioral details. Every sentence adds value, though it could be slightly more structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description covers the core action and some behavioral aspects but lacks details on parameters, return values, and error handling. It's minimally adequate given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It only mentions 'confirm: true' and implies 'X-Org-Api-Key' (likely mapping to 'apiKey'), but doesn't explain 'orgSlug', 'session_id', or 'reason'. This leaves most parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Cancel a session') and specifies it's for 'Servicialo spec', which provides some context. It distinguishes from sibling tools like 'booking_cancel' by specifying the system context, though not explicitly contrasting them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'Servicialo spec' and states prerequisites ('Requires confirm: true and X-Org-Api-Key'), but doesn't explicitly say when to use this versus alternatives like 'booking_cancel' or 'public_booking_cancel' among the siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scheduling_confirmAInspect
Confirm a booking (Servicialo spec). Dual-mode: (1) with credential — uses the confirmation token from scheduling_book, no API key needed; (2) with booking_id — uses API key to confirm an existing session. Returns confirmed status with timestamp.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| orgSlug | Yes | ||
| booking_id | No | ||
| credential | No |
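The dual-mode rule (credential with no API key, or booking_id with an API key) can be sketched as a hypothetical argument builder that enforces exactly one mode; the parameter names come from the table above.

```python
# Hypothetical helper for scheduling_confirm; not part of the server.
def build_confirm_args(org_slug, credential=None, booking_id=None, api_key=None):
    """Mode 1: credential from scheduling_book, no API key.
    Mode 2: booking_id plus apiKey to confirm an existing session."""
    if (credential is None) == (booking_id is None):
        raise ValueError("pass exactly one of credential or booking_id")
    args = {"orgSlug": org_slug}
    if credential is not None:
        args["credential"] = credential
    else:
        if api_key is None:
            raise ValueError("booking_id mode requires apiKey")
        args["booking_id"] = booking_id
        args["apiKey"] = api_key
    return args
```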
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the dual-mode operation, authentication requirements (credential vs. API key), and what the tool returns (confirmed status with timestamp). It doesn't mention error conditions, rate limits, or side effects, but covers the essential operation well given the lack of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: purpose statement, dual-mode explanation, and return value. Every sentence adds critical information with zero waste, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (dual-mode operation with 4 parameters, no annotations, no output schema), the description does a good job of completeness. It explains the operation modes, authentication, and return values. It could improve by detailing error cases or the exact format of the return value, but it provides sufficient context for basic usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description adds meaningful context for parameters: it explains that 'credential' comes from 'scheduling_book' and has a specific use case, and that 'booking_id' requires an API key. However, it doesn't explain 'orgSlug' (which is required) or 'apiKey' beyond the dual-mode mention, leaving some parameters inadequately documented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Confirm a booking (Servicialo spec).' It specifies the action (confirm) and resource (booking), and mentions it's for a specific system (Servicialo). However, it doesn't explicitly distinguish this from sibling tools like 'public_booking_confirm' or 'booking_update_status', which might have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: it describes two distinct modes (with credential vs. with booking_id), specifies prerequisites (confirmation token from scheduling_book or API key), and mentions when API key is needed vs. not. This clearly guides when to use each mode and what's required.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scheduling_rescheduleBInspect
Reschedule a session to a new time (Servicialo spec). Cancels the original session and creates a new one at the specified datetime. Requires confirm: true and X-Org-Api-Key.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| confirm | Yes | ||
| orgSlug | Yes | ||
| session_id | Yes | ||
| new_datetime | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively explains key behaviors: it cancels the original session and creates a new one, and specifies authentication requirements ('X-Org-Api-Key') and a confirmation constraint ('confirm: true'). This covers mutation effects and prerequisites well, though it lacks details on error handling or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, with two sentences that directly state the action and requirements. There's no unnecessary verbiage and each sentence adds value, though the parenthetical '(Servicialo spec)' is slightly extraneous.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a mutation tool with no annotations and no output schema, the description is moderately complete. It covers the core behavior and authentication needs but lacks details on error responses, return values, or specific formatting for parameters like 'new_datetime'. This leaves some operational gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'confirm: true' and 'X-Org-Api-Key' (implied as 'apiKey'), adding meaning for two of the five parameters. However, it doesn't explain 'orgSlug', 'session_id', or 'new_datetime', leaving significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Reschedule a session to a new time') and specifies the resource ('session'), making the purpose evident. However, it doesn't explicitly differentiate from sibling tools like 'booking_reschedule' or 'public_booking_reschedule', which appear to serve similar functions, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating 'Requires confirm: true and X-Org-Api-Key', which provides some context for prerequisites. However, it doesn't offer explicit guidance on when to use this tool versus alternatives like 'booking_reschedule' or under what specific conditions it should be preferred, leaving room for ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
service_assign_providerCInspect
Assign or unassign a provider to/from a service. Controls which providers can deliver which services.
| Name | Required | Description | Default |
|---|---|---|---|
| price | No | ||
| action | Yes | ||
| apiKey | No | ||
| confirm | No | ||
| orgSlug | Yes | ||
| serviceId | Yes | ||
| commission | No | ||
| providerId | Yes |
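A hypothetical client-side sketch of building this tool's arguments follows; note that the action values "assign" and "unassign" are assumptions inferred from the description, since the schema leaves the enum undocumented.

```python
# Hypothetical helper for service_assign_provider; not part of the server.
VALID_ACTIONS = ("assign", "unassign")  # assumed values; schema is silent

def build_assign_args(org_slug, service_id, provider_id, action,
                      api_key=None, price=None, commission=None, confirm=None):
    if action not in VALID_ACTIONS:
        raise ValueError(f"action must be one of {VALID_ACTIONS}")
    args = {
        "orgSlug": org_slug,
        "serviceId": service_id,
        "providerId": provider_id,
        "action": action,
    }
    # Optional business fields; their semantics are undocumented in the schema.
    for key, value in (("apiKey", api_key), ("price", price),
                       ("commission", commission), ("confirm", confirm)):
        if value is not None:
            args[key] = value
    return args
```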
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but only implies mutation through 'Assign or unassign'. It fails to disclose idempotency, whether unassigning is destructive, side effects on existing bookings, or error conditions (e.g., assigning an already assigned provider).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero redundancy. The first states the core operation and the second adds business context (delivery control), making it appropriately front-loaded and sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters including financial settings and confirmation flags), zero schema coverage, and lack of output schema or annotations, the description is insufficient. It fails to prepare the agent for optional pricing/commission parameters or explain what the operation returns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. While 'Assign or unassign' maps to the action parameter, the description completely omits the other 7 parameters including critical business logic fields like price, commission, and confirm, leaving their semantics entirely undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Assign', 'unassign', 'Controls') and identifies the resources (provider, service). It conceptually distinguishes this relationship-management tool from sibling CRUD tools like service_update or provider_create by emphasizing the assignment linkage and delivery control.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool does but provides no guidance on when to use it versus alternatives (e.g., when to unassign vs. disabling a provider), nor does it mention prerequisites like needing existing providers/services first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
service_createAInspect
Create a new bookable service in an existing organization. Use this for day-to-day service management (requires X-Org-Api-Key). For initial org setup, prefer admin_create_service instead. After creating, use service_assign_provider to link providers. A service without providers cannot accept bookings.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | ||
| price | Yes | ||
| apiKey | No | ||
| orgSlug | Yes | ||
| category | No | ||
| currency | No | ||
| duration | Yes | ||
| modalidad | No | ||
| providerId | No | ||
| description | No | ||
| isDiscoverable | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It adds valuable behavioral context by noting the authentication requirement (requires X-Org-Api-Key) and the business logic constraint regarding providers. However, it omits mutation details like idempotency, error states, and return structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste. Front-loaded with purpose, followed by usage context with auth requirement, explicit alternative tool comparison, and workflow next-steps. Every sentence earns its place with high information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given high complexity (11 parameters, 4 required), zero schema coverage, no output schema, and no annotations, the description provides adequate operational context but has clear gaps. It successfully covers purpose and workflow, but the lack of parameter documentation makes it insufficient for standalone usage without schema inspection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% across 11 parameters, placing heavy burden on the description. While it vaguely references 'existing organization' (orgSlug) and 'X-Org-Api-Key' (apiKey), it fails to document the remaining required parameters (name, price, duration) or explain the enum values for 'modalidad', leaving significant semantic gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the specific verb (Create), resource (bookable service), and context (in an existing organization). It effectively distinguishes from the sibling tool admin_create_service by specifying this is for 'day-to-day service management' versus 'initial org setup'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('day-to-day service management') and when-not-to-use with a named alternative ('prefer admin_create_service instead' for initial setup). It also includes critical workflow guidance about the next required step (use service_assign_provider) and a constraint (service without providers cannot accept bookings).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
service_listCInspect
List services of an organization. Can filter by active status, discoverability, or category.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| orgSlug | Yes | ||
| category | No | ||
| activeOnly | No | ||
| discoverableOnly | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a read-only operation, if there are rate limits, what the return structure looks like (no output schema exists), or error handling behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first sentence establishes the core purpose immediately, and the second adds filtering context. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero schema annotations, no output schema, and five parameters, the description is insufficient. It omits return value details (critical without output schema), ignores the required 'orgSlug' parameter, and lacks behavioral context expected for a complete API tool description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains the semantics of three filter parameters (active status, discoverability, category) but omits the required 'orgSlug' and the 'apiKey' parameter, leaving critical inputs undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'List services of an organization' with a specific verb and resource. However, it does not explicitly differentiate from similar list operations like 'admin_list_providers' or explain when to choose this over other service-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions filtering capabilities ('Can filter by...') but provides no explicit guidance on when to use this tool versus alternatives, prerequisites like authentication, or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
service_updateBInspect
Update an existing service (price, duration, status, etc.). Creates a price history entry if price changes.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| price | No | ||
| apiKey | No | ||
| orgSlug | Yes | ||
| category | No | ||
| currency | No | ||
| duration | No | ||
| isActive | No | ||
| modalidad | No | ||
| serviceId | Yes | ||
| description | No | ||
| isDiscoverable | No | ||
| publicDescription | No |
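Since only orgSlug and serviceId are required, the tool reads as a partial update: send just the fields being changed. A hypothetical sketch of that convention follows; the allowed field set is copied from the table above, and per the description the server records a price-history entry whenever price is included.

```python
# Hypothetical helper for service_update; not part of the server.
ALLOWED_FIELDS = {"name", "price", "apiKey", "category", "currency",
                  "duration", "isActive", "modalidad", "description",
                  "isDiscoverable", "publicDescription"}

def build_update_args(org_slug, service_id, **changes):
    """Partial-update sketch: include only the fields being changed.
    Including 'price' triggers a price-history entry server-side."""
    unknown = set(changes) - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return {"orgSlug": org_slug, "serviceId": service_id, **changes}
```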
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the side effect of creating a price history entry when price changes. However, it lacks details on whether this is a partial or full update, idempotency, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero redundancy. The first states the core operation; the second adds critical side-effect information. Well-structured and appropriately brief.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 13 parameters (2 required), zero schema coverage, no annotations, and no output schema, the description is insufficient. It omits documentation for most parameters and provides no hint about the return value or success/failure indicators.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, so the description must compensate. It mentions price and duration but fails to document the required identifiers (orgSlug, serviceId), the apiKey parameter, or the 'modalidad' enum values. It also references 'status', which maps ambiguously to the isActive/isDiscoverable booleans.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool updates an existing service and lists specific updatable fields (price, duration, status). It implicitly distinguishes from 'service_create' by specifying 'existing,' though 'etc.' leaves some ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus siblings like 'service_create' or 'service_assign_provider'. No mention of prerequisites (e.g., requiring serviceId from a previous list call) or when not to use the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
settings_getAInspect
Get organization settings by chapter or specific keys. Chapters: basics (name, description, vertical, timezone, currency), availability (weekday hours, saturday, assisted assignment), communication (channels, phone required), finances (provider payment type, client payment timing, max balance), policies (no-show strikes, blocking duration, no-show charge, auto-apply), reminders (session 24h, booking, payment, confirmation timeout), client_data (required fields: lastName, rut, email, phone, direccion). Use chapter param for a group, or keys param for specific settings (comma-separated, e.g. "policies.noShowMaxStrikes,finances.clientPaymentTiming").
| Name | Required | Description | Default |
|---|---|---|---|
| keys | No | ||
| apiKey | No | ||
| chapter | No | ||
| orgSlug | Yes |
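Based on the description above, a call supplies either the chapter param or the keys param. The following argument payloads are a sketch assuming a plain JSON-style arguments object; "demo-org" is an illustrative slug, and the example keys are the ones quoted in the description.

```python
# Fetch a whole chapter of settings:
by_chapter = {
    "orgSlug": "demo-org",   # required organization identifier
    "chapter": "policies",   # one of: basics, availability, communication,
                             # finances, policies, reminders, client_data
}

# Fetch specific settings by key (comma-separated, per the description):
by_keys = {
    "orgSlug": "demo-org",
    "keys": "policies.noShowMaxStrikes,finances.clientPaymentTiming",
}
print(by_chapter["chapter"])
```

The description implies chapter and keys are alternatives ("or"), so a payload would normally include one or the other, not both.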
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It excellently documents the data structure (mapping each chapter enum to its contained fields) but lacks operational details such as error behavior, return format, or authentication requirements for the apiKey parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense but necessary given zero schema documentation. Front-loaded with purpose, followed by chapter mapping and usage syntax. The enumerated chapter contents, while lengthy, are essential for correct agent usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 0% schema coverage and no annotations, the description provides substantial value by detailing all enum options and key patterns. Minor gap: no mention of return structure (though no output schema exists to supplement this).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Compensates effectively for 0% schema coverage by exhaustively documenting the 'chapter' enum values (basics, availability, etc.) and their subfields, and explaining 'keys' format with examples. Only 'apiKey' parameter remains undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get') + resource ('organization settings') + access method ('by chapter or specific keys'). The 'Get' operation clearly distinguishes this from sibling 'settings_update'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly guides parameter selection: 'Use chapter param for a group, or keys param for specific settings' with comma-separated format example. Implicit mutual exclusivity via 'or', though lacks explicit 'when not to use' warnings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
settings_update
Update organization settings. Partial update — only provided keys are changed. Pass a settings object with key-value pairs (e.g. {"policies.noShowMaxStrikes": 3, "finances.clientPaymentTiming": "BEFORE"}). All values are validated before writing — if any key fails validation, no changes are applied. Returns the full updated settings for the affected chapters. See settings_get for available keys and valid values.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | ||
| orgSlug | Yes | ||
| settings | Yes |
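The inline JSON example from the description can be expanded into a full argument payload. This is a sketch: the nested settings object uses the exact keys quoted in the description, while "demo-org" is an assumed slug.

```python
# Partial update: only the keys listed under "settings" are changed.
payload = {
    "orgSlug": "demo-org",
    "settings": {
        "policies.noShowMaxStrikes": 3,
        "finances.clientPaymentTiming": "BEFORE",
    },
}
# Per the description, validation is atomic: if any key failed
# validation, no changes would be applied server-side.
print(len(payload["settings"]))
```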
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It successfully explains the atomic validation behavior ('if any key fails validation, no changes are applied'), partial update mechanics, and return value format ('full updated settings for the affected chapters'). Lacks only permission/destructive warnings.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Six sentences with zero waste: purpose, partial-update semantics, input example, validation behavior, return value, and sibling reference. Complex information (nested object structure) is efficiently conveyed through inline JSON example rather than lengthy prose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description appropriately explains return values ('full updated settings'). It adequately covers behavioral complexity (partial updates, validation) and sibling relationships. Minor gap in parameter documentation (orgSlug/apiKey) prevents a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. The description compensates effectively for the complex 'settings' parameter by providing concrete JSON examples and explaining key-value pair structure. However, it fails to document 'orgSlug' or 'apiKey' parameters, leaving two of three required inputs undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb 'Update' and resource 'organization settings'. It clearly distinguishes from sibling 'settings_get' by referencing it as the tool to 'see available keys and valid values', establishing the read vs. write relationship.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly directs users to 'settings_get for available keys and valid values', providing clear when-to-use guidance. Also explains critical usage semantics: partial updates (only provided keys changed) and atomic validation (all-or-nothing application).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.