Bitrix24
Server Details
MCP server enabling AI agents to manage Bitrix24 features via standardized protocol
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
43 tools

add_accomplices (Grade: C)
Adds accomplices to the task by their user IDs.
| Name | Required | Description | Default |
|---|---|---|---|
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
| accompliceIds | Yes | Array of user identifiers to add as accomplices. Must not be an empty array. | |
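As a rough illustration of how an agent would invoke this tool, the sketch below assembles a hypothetical MCP `tools/call` request body; only the `arguments` object comes from the tool's documented schema, and the client-side checks simply mirror the constraints in the table above.

```python
# Hypothetical helper: builds the MCP "tools/call" request body for
# add_accomplices. The framing follows the standard MCP JSON-RPC shape;
# only the "arguments" object is defined by this tool's input schema.
def build_add_accomplices_call(task_id: int, accomplice_ids: list[int]) -> dict:
    # Mirror the documented constraints before sending anything.
    if task_id <= 0:
        raise ValueError("taskId must be a positive integer")
    if not accomplice_ids:
        raise ValueError("accompliceIds must not be an empty array")
    return {
        "method": "tools/call",
        "params": {
            "name": "add_accomplices",
            "arguments": {"taskId": task_id, "accompliceIds": accomplice_ids},
        },
    }
```

Validating locally avoids a round trip for the two failure modes the schema already rules out; the server's behavior for other errors (non-existent users, duplicate accomplices) is undocumented, as the review notes.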
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fails to disclose critical behavioral traits: it doesn't mention error conditions (invalid taskId, non-existent users), idempotency behavior (whether adding duplicate IDs errors or silently succeeds), or what the tool returns upon success. For a mutation tool, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that is front-loaded with the action. It contains no redundant or wasted words. However, given the lack of annotations and output schema, it borders on underspecified rather than appropriately concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a two-parameter tool with complete schema documentation, the description adequately covers the input semantics. However, lacking an output schema, the description should have explained the return value or success indicators, which it omits. It meets minimum viability but leaves operational gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting both taskId and accompliceIds as positive integers. The description adds minimal semantic value beyond the schema, merely confirming that accompliceIds are 'user IDs' which the schema already states. With high schema coverage, this meets the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Adds), target resource (accomplices), and mechanism (by their user IDs). It distinguishes from sibling tools like 'add_auditors' and 'delete_accomplices' through the specific resource name and verb, though it doesn't clarify the functional difference between an accomplice and auditor.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus similar sibling tools like 'add_auditors', or when to use 'delete_accomplices' instead. It lacks prerequisites (e.g., whether the task must exist) or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
add_auditors (Grade: C)
Adds auditors to the task by their user IDs.
| Name | Required | Description | Default |
|---|---|---|---|
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
| auditorIds | Yes | Array of user identifiers to add as auditors. Must not be an empty array. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry full disclosure burden. It fails to mention whether the operation is idempotent (if adding existing auditors errors or succeeds silently), what permissions are required, or what constitutes success versus failure for a mutation operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence with no redundancy. However, given the complete lack of annotations and behavioral transparency, additional content would be warranted rather than brevity for its own sake.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-parameter mutation tool, the description identifies the core action but leaves gaps in behavioral context (no output schema explanation, no error handling, no idempotency guarantees) that should be covered given zero annotation coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents both parameters (taskId and auditorIds). The description essentially restates this information without adding syntactic details (e.g., integer constraints) or usage patterns beyond the schema, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Adds') and resource ('auditors'), and mentions the mechanism ('by their user IDs'). It implicitly distinguishes from add_current_user_as_auditor by referencing arbitrary user IDs, but does not explicitly contrast with add_accomplices or delete_auditors siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states what the tool does but provides no guidance on when to use it versus alternatives like add_current_user_as_auditor (for the current user only) or how to handle cases where auditors might already exist on the task.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
add_current_user_as_auditor (Grade: B)
Adds current user as an auditor to the specified task.
| Name | Required | Description | Default |
|---|---|---|---|
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. While 'current user' implies authentication context, the description lacks critical behavioral details: reversibility (can the user remove themselves?), idempotency (what if already an auditor?), error conditions, or required permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently conveys the core operation without redundancy or wasted words. The key differentiator ('current user') is front-loaded and immediately clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter mutation tool without output schema, the description adequately states the primary operation but lacks completeness regarding side effects, return values, or failure modes. Sufficient for simple tools but minimal.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with taskId fully documented. The description references 'specified task', which aligns with the parameter, but adds no semantic value regarding the identifier format or constraints beyond what the schema already provides. A baseline score of 3 is appropriate given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Adds), the subject (current user), the role (auditor), and the target (specified task). It effectively distinguishes from sibling tool 'add_auditors' by specifying 'current user' versus adding other users.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the sibling 'add_auditors' tool, or under what conditions (e.g., requiring existing task access). No mention of prerequisites or error conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
add_task_reminder (Grade: B)
Adds or updates a reminder for a task, specifying whom and when to remind.
| Name | Required | Description | Default |
|---|---|---|---|
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
| remindAt | Yes | When to send the reminder, in 'Y/m/d H:i' format (e.g., '2024/08/15 10:00'). | |
| remindBy | No | The principle for reminding. | date |
| recipient | Yes | Task participant to send a reminder to. | |
| remindVia | No | How to send the reminder. | notification |
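The `remindAt` format (`'Y/m/d H:i'`) is PHP-style date notation; a minimal sketch of producing it from a Python `datetime` is shown below. The helper name is hypothetical, and the type of `recipient` is an assumption since the schema only calls it a task participant.

```python
from datetime import datetime

def build_reminder_args(task_id: int, remind_at: datetime, recipient) -> dict:
    # PHP-style 'Y/m/d H:i' corresponds to '%Y/%m/%d %H:%M' in strftime.
    return {
        "taskId": task_id,
        "remindAt": remind_at.strftime("%Y/%m/%d %H:%M"),
        # recipient's exact type/enum is not documented by the schema.
        "recipient": recipient,
        # remindBy ('date') and remindVia ('notification') are optional and
        # fall back to the server-side defaults when omitted.
    }
```

Formatting client-side avoids the most likely first-attempt failure for this tool: sending an ISO-8601 timestamp where the PHP-style format is expected.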
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It reveals important upsert semantics ('Adds or updates'), indicating idempotent-like behavior, but fails to clarify whether multiple reminders per task are allowed, what constitutes an 'update' versus a new entry, or side effects on existing notifications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no redundant words. It is front-loaded with the core action (adds/updates) and follows with the key differentiators (whom/when), demonstrating excellent information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and absence of an output schema, the description provides minimally sufficient context. However, for a tool with optional parameters and upsert behavior, it could better clarify the update semantics and relationship to the optional parameters (remindBy, remindVia).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description maps 'whom' to recipient and 'when' to remindAt conceptually, but adds no syntax details, format clarifications, or usage guidance beyond what the schema already provides for the five parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Adds or updates a reminder for a task' with specific verbs and resource identification. It distinguishes from siblings like create_task or update_task by focusing specifically on reminder functionality rather than task metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives, nor does it mention prerequisites (like task existence) or exclusions. The phrase 'specifying whom and when' implies usage context but lacks explicit recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
add_task_result (Grade: A)
Adds a formal work summary or result to the task. This is typically used to document the final outcome of a task.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The result text. Must not be an empty string. | |
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
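A short, hypothetical argument-builder for this tool, enforcing the two documented constraints before the call leaves the client:

```python
def build_task_result_args(task_id: int, text: str) -> dict:
    # Both checks restate constraints from the parameter table; whether a
    # second call appends or overwrites the result is undocumented.
    if task_id <= 0:
        raise ValueError("taskId must be a positive integer")
    if not text:
        raise ValueError("text must not be an empty string")
    return {"taskId": task_id, "text": text}
```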
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, placing full burden on the description. While 'formal' hints at significance, the description discloses no behavioral traits: idempotency (can it be called twice?), side effects (does it close the task?), permanence (can it be edited/deleted?), or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero redundancy. The first sentence defines the action; the second provides usage context. Information is front-loaded and proportionate to the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter mutation tool with no annotations, the description covers the business purpose adequately but omits critical operational details: whether results append or overwrite, whether the task status changes automatically, and how the tool interacts with task closure workflows.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds valuable semantic context: 'formal work summary' and 'final outcome' guide the agent on appropriate content for the 'text' parameter beyond the schema's generic 'result text' description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Adds a formal work summary or result to the task' with specific verbs and resource. It implies a distinct concept (formal results) separate from general task updates, though it could explicitly differentiate from sibling 'update_task'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence provides clear context: 'typically used to document the final outcome of a task,' indicating when to invoke it (upon completion). However, it lacks explicit 'when not to use' guidance or named alternatives for mid-task updates.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
clear_task_deadline (Grade: C)
Clears the deadline for a task.
| Name | Required | Description | Default |
|---|---|---|---|
| taskId | No | Identifier of the task to update. Must be a positive integer. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It fails to clarify that this is a destructive operation (removes date data), whether the change is immediate, or if the task status changes when the deadline is removed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence with no wasted words. However, it borders on underspecification given the lack of annotations and output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool with full schema coverage, but could benefit from explaining what 'clear' means (null value? removal?) and confirming the task remains otherwise intact given the destructive nature of the operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameter is fully documented in the schema itself. The description adds no additional parameter semantics, but the high schema coverage means the baseline 3 score is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (clears) and resource (deadline for a task). It identifies the exact function well, though it doesn't explicitly differentiate from the generic 'update_task' sibling that could presumably also modify deadlines.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this specialized tool versus the general 'update_task' alternative. Does not mention prerequisites (e.g., task must exist) or side effects.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_check_list (Grade: C)
Creates a new checklist inside a task.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Title of the checklist. Must not be an empty string. | |
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
| checkListItems | No | List of items to be added to the checklist. | |
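A hedged sketch of building the arguments, making explicit the optional pre-population behavior the review below flags as implicit (the helper name and the assumption that omitting `checkListItems` yields an empty checklist are not confirmed by the source):

```python
def build_checklist_args(title: str, task_id: int, items=None) -> dict:
    if not title:
        raise ValueError("title must not be an empty string")
    if task_id <= 0:
        raise ValueError("taskId must be a positive integer")
    args = {"title": title, "taskId": task_id}
    # checkListItems is optional; presumably, omitting it creates an
    # empty checklist that items can be added to later.
    if items:
        args["checkListItems"] = items
    return args
```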
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the mutation ('Creates') but fails to disclose that items are optional (can be added later), what happens on duplicate titles, or what the return value contains. Minimal disclosure for a mutation operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficiently structured with the verb front-loaded ('Creates...'). Every word earns its place, though the extreme brevity is slightly problematic given the lack of annotations and output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations, no output schema, and optional pre-population behavior (checkListItems), the description is incomplete. It should clarify that the checklist starts empty unless items are provided, or describe the relationship to item creation workflows.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters. The description adds minimal semantic value beyond the schema, though 'inside a task' reinforces the relationship between 'taskId' and the checklist being created. The baseline score of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Creates') and clear resource ('checklist') with contextual scope ('inside a task'). However, it does not distinguish from the sibling tool 'create_check_list_item', leaving ambiguity about whether this creates the container, the items, or both.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'create_check_list_item' or whether to populate 'checkListItems' now versus later. There are no prerequisites, conditions, or exclusion criteria mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_check_list_item (Grade: B)
Adds a new item to a checklist.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Title of the checklist item. Must not be an empty string. | |
| checkListId | Yes | Identifier of the checklist. Must not be null. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Adds' implies a mutation, the description lacks crucial details: whether the operation is idempotent, what happens if the checkListId is invalid, required permissions, or what the tool returns upon success.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single, efficient sentence with zero wasted words. It front-loads the action and object, delivering the essential information immediately without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (two primitive parameters, no output schema), the description is minimally viable. However, it could be improved by noting prerequisite conditions (checklist existence) or basic error behaviors, especially given the complete absence of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents both parameters (title requirements and checkListId constraints). The description adds no additional semantic context beyond the schema, but this is acceptable given the comprehensive schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Adds' with the resource 'item to a checklist', clearly indicating it appends entries to existing checklists. This implicitly distinguishes it from the sibling tool 'create_check_list' (which creates the container itself), though it does not explicitly differentiate from 'update_check_list_item'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites (e.g., that the checklist identified by checkListId must already exist). There are no explicit exclusions or conditions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_crm_custom_field (Grade: C)
Creates a custom field for a CRM deal. Use this function when a new CRM field needs to be created.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Type of the custom field being created. | |
| label | Yes | Name of the custom field to be created. | |
| categoryId | Yes | Identifier of the CRM deal funnel, also referred to as 'category'. | |
| isMultiple | Yes | Indicates if the custom field accepts multiple values. Always false for boolean fields. For enumeration fields, set to true if multiple options can be chosen (similar to a multi-select list); otherwise, set to false (standard single-select enumeration). | |
| enumerationList | No | List of selectable options for a custom enumeration-type field. Required if the field type is set to 'enumeration'. | |
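The conditional coupling between `type` and `enumerationList` (and the `isMultiple` rule for boolean fields) is easy to get wrong on a first attempt; the hypothetical validator below encodes both rules from the table before the call is made:

```python
def build_custom_field_args(field_type: str, label: str, category_id: int,
                            is_multiple: bool, enumeration_list=None) -> dict:
    # enumerationList is required only when the field type is 'enumeration'.
    if field_type == "enumeration" and not enumeration_list:
        raise ValueError("enumerationList is required for enumeration fields")
    # isMultiple must always be false for boolean fields.
    if field_type == "boolean" and is_multiple:
        raise ValueError("isMultiple must be false for boolean fields")
    args = {
        "type": field_type,
        "label": label,
        "categoryId": category_id,
        "isMultiple": is_multiple,
    }
    if enumeration_list is not None:
        args["enumerationList"] = enumeration_list
    return args
```

Surfacing these cross-parameter rules in code is exactly the guidance the review says the description itself should have carried.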
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Creates' implies a write operation, the description omits critical details: whether the operation is reversible (can fields be deleted?), if it affects existing deals, validation constraints, or required permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two short sentences, but the second sentence is tautological, restating the first without adding value. While brief, the redundancy prevents a higher score as it wastes the opportunity to provide behavioral or parametric guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters and conditional requirements (enumerationList required when type='enumeration'), the description is incomplete. It fails to surface the conditional parameter logic, explain error scenarios (e.g., duplicate field names), or describe the return value structure despite the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The tool description adds no parameter-specific guidance beyond the schema, failing to highlight the critical conditional relationship between 'type' and 'enumerationList' (that options are required for enumeration types) or the 'isMultiple' constraints for boolean fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Creates a custom field for a CRM deal,' providing a specific verb and resource. It distinguishes from siblings like 'create_deal' (which creates records) by specifying 'custom field,' though it could more explicitly clarify that this creates schema/metadata rather than data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence ('Use this function when a new CRM field needs to be created') merely restates the purpose without providing actionable selection criteria. It fails to mention when NOT to use it (e.g., when updating existing fields) or how to choose between this and other creation tools like 'create_deal'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_deal (Grade: B)
Creates a deal with the title specified by `title` in the funnel identified by `categoryId`. Use this function when explicitly instructed to create a new deal in a particular CRM funnel.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Title of the deal. Must not be an empty string. | |
| categoryId | Yes | Identifier of the funnel in which the deal will be created. | |
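A minimal, hypothetical argument-builder for this tool; valid `categoryId` values would presumably come from a funnel-listing call such as the `deal_category_list` sibling the review mentions, which is an assumption not confirmed here:

```python
def build_deal_args(title: str, category_id: int) -> dict:
    # Enforce the only documented constraint on title before sending.
    if not title:
        raise ValueError("title must not be an empty string")
    return {"title": title, "categoryId": category_id}
```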
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full behavioral disclosure burden. It only states the creation action without explaining side effects, idempotency behavior, error scenarios (e.g., duplicate titles), or what the function returns (deal ID? success boolean?).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two-sentence structure is efficient and front-loaded. The first sentence defines the operation and parameters; the second provides usage context. No redundant or filler text is present, though the parameter backticks could be considered slightly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter schema with 100% coverage and no output schema, the description is minimally adequate. However, for a creation tool with no annotations, it omits important context like success indicators, relationship to the sibling 'deal_category_list' (for valid categoryIds), or whether the created deal receives default stage assignments.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (title description: 'Title of the deal...', categoryId: 'Identifier of the funnel...'), establishing a baseline of 3. The description references both parameters but adds minimal semantic meaning beyond the schema, essentially restating that categoryId identifies a funnel and title specifies the name.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Creates), resource (deal), and container context (in the funnel identified by categoryId). It implicitly distinguishes from sibling 'create' tools like create_task and create_funnel_with_custom_stages by specifying the CRM funnel context, though it doesn't explicitly contrast with these alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a 'when' clause ('when explicitly instructed to create a new deal'), providing basic usage guidance. However, it lacks 'when-not' guidance, prerequisites (e.g., funnel must exist), or references to alternative tools like move_deals_between_funnels for existing deals.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_default_funnel (Grade: A)
Creates a new funnel with the specified name and sets the default stages. Use this function to add a standard funnel to the CRM without customizing the stages.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Name of the funnel to be created | |
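For this single-parameter tool, a minimal argument sketch is enough; the function name and the blank-name check are illustrative assumptions, while the `name` argument comes from the table:

```python
# Hypothetical validator for create_default_funnel arguments.
def build_default_funnel_args(name: str) -> dict:
    if not name.strip():
        raise ValueError("name is required and must not be blank")
    return {"name": name}

funnel_args = build_default_funnel_args("Partner sales")
```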
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses that the tool 'sets the default stages' (key behavioral trait), but lacks details on return values, error conditions, idempotency, or what those default stages actually contain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence states the action, second provides usage context. Information is front-loaded and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter creation tool with no output schema. Covers the creation action, default stage behavior, and usage context. Minor gap: does not specify what default stages are or what identifier is returned upon creation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the 'name' parameter well-documented in the schema itself. Description references the parameter in backticks but adds no additional semantic context beyond the schema definition, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Creates' with resource 'funnel' and clearly distinguishes from sibling create_funnel_with_custom_stages by emphasizing 'default stages' and 'without customizing the stages', establishing clear scope boundaries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context for when to use ('add a standard funnel... without customizing the stages'), implicitly contrasting with create_funnel_with_custom_stages. Lacks explicit naming of the alternative tool, preventing a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_funnel_with_custom_stages (Grade: A)
Creates a new funnel with the specified name and custom 'In Progress' stages, while keeping the default stages unchanged. Use this to add a customized funnel to the CRM.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Name of the funnel to be created | |
| stages | Yes | Array of custom stages (cannot be empty) | |
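The non-empty-array constraint on `stages` can be sketched as a guard before the call; the helper and the example stage names are hypothetical:

```python
# Hypothetical validator: custom 'In Progress' stages must be a non-empty array.
def build_custom_funnel_args(name: str, stages: list[str]) -> dict:
    if not name:
        raise ValueError("name is required")
    if not stages:
        raise ValueError("stages cannot be an empty array")
    return {"name": name, "stages": stages}

custom_args = build_custom_funnel_args(
    "Enterprise", ["Qualified", "Demo scheduled", "Contract sent"]
)
```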
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries the burden. It successfully discloses the preservation of default stages alongside custom ones, but lacks mutation safety details (reversibility), return value structure, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences. Front-loaded with specific behavior, followed by usage guidance. No redundancy with schema descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers the complex stage-creation behavior well, but incomplete regarding output (no output schema present) and missing error handling or validation details (e.g., duplicate name behavior). Adequate but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds valuable semantic context that 'stages' specifically refers to 'In Progress' stages (intermediate phases), distinguishing them from immutable final stages like Won/Lost.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Creates' + resource 'funnel' + clear customization scope 'custom In Progress stages, while keeping the default stages unchanged.' Effectively distinguishes from sibling create_default_funnel by emphasizing partial customization.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context ('Use this to add a customized funnel') and implies when to use via the customization detail. Could be improved by explicitly contrasting with create_default_funnel for cases where standard stages suffice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_new_funnel_stage (Grade: C)
Creates a new stage with the specified fields for the entity with entity type identifier entityTypeId in the funnel identified by categoryId. Use this function when explicitly instructed to add a new custom funnel stage
| Name | Required | Description | Default |
|---|---|---|---|
| fields | Yes | Fields for the new CRM funnel stage | |
| categoryId | Yes | Identifier of the CRM entity funnel. Must be null if entityTypeId is 1 or 7. In all other cases, categoryId must not be null | |
| entityTypeId | Yes | CRM entity type identifier. Possible values: 1 (Lead), 2 (Deal), 7 (Proposal), 31 (Invoice), or an identifier of a smart-process type | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden, yet discloses almost no behavioral traits beyond the creation action. Fails to mention side effects (e.g., impact on existing deals), idempotency, validation behavior, or return values. Complex business constraints (P/S/F sorting semantics, entityTypeId/categoryId null rules) are relegated to schema only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero redundancy. First sentence efficiently maps parameters to their roles; second provides usage trigger. Front-loaded with essential information. Slightly dense parameter list in first sentence prevents perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with complex business logic (sorting constraints, entity relationships) and no annotations/output schema, description is minimally adequate. Schema compensates for parameter documentation, but description lacks critical context about success outcomes, error modes, and automatic sorting behaviors that an agent needs to invoke safely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed nested descriptions (semantics enum, null constraints, sorting rules). Description merely names parameters without adding semantic context beyond schema. Baseline 3 appropriate given schema completeness, but no value added for interpreting complex nested fields object.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Creates) and resource (stage/custom funnel stage) with clear parameter references. Mentions the funnel context via categoryId, implying operation on existing funnel structure. However, lacks explicit differentiation from sibling create_funnel_with_custom_stages which creates both funnel and stages, versus this tool which adds to existing funnel.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides minimal 'when to use' guidance ('when explicitly instructed to add a new custom funnel stage') but offers no autonomous decision criteria, prerequisites, or explicit comparison to alternatives like create_funnel_with_custom_stages or update_funnel_stages. Does not clarify when NOT to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_task (Grade: C)
Creates a new task with the provided title and other details.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Task title. Must be a non-empty string | |
| status | No | Task status | |
| groupId | No | Identifier of the group for the task. Must be a positive integer. | |
| priority | No | Task priority. 'high' marks it as important, 'average' unmarks it. | |
| creatorId | No | Identifier of the creator. Defaults to the current user. | |
| auditorIds | No | List of user IDs to add as auditors | |
| description | No | Task description | |
| deadlineDate | No | Task deadline in 'Y/m/d H:i' format | |
| parentTaskId | No | Identifier of the parent task to create a subtask. Must be a positive integer | |
| accompliceIds | No | List of user IDs to add as accomplices | |
| responsibleId | No | Identifier of the responsible user. Defaults to the current user. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure but fails to explain side effects, permission requirements for assigning other users (auditorIds/accompliceIds), or the significance of the 'supposedly_completed' status. It mentions 'Creates' implying mutation but lacks details on idempotency or default values.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single eleven-word sentence that is technically efficient, but for an 11-parameter tool with complex domain logic, this brevity constitutes under-specification rather than effective conciseness. The phrase 'and other details' consumes space without conveying information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 11 parameters including entity relationships (subtasks via parentTaskId, group membership, multi-user assignments), no output schema, and no annotations, the description is inadequate. It fails to explain the task lifecycle (status values), user roles (auditor vs accomplice vs responsible), or the implications of group assignment.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'title' specifically but reduces the other 10 parameters (including complex concepts like parentTaskId, auditorIds, accompliceIds) to the vague phrase 'other details', adding no semantic value about relationships (e.g., that parentTaskId creates a subtask) or domain-specific terms.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the basic action ('Creates a new task') and mentions the required 'title' parameter, but 'other details' is too vague to fully convey scope. It does not distinguish from sibling tools like update_task or clarify what constitutes a 'task' in this system versus other creatable entities like deals or checklists.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like update_task, or prerequisites such as requiring valid groupId/user IDs from other tools. There is no mention of when subtask creation (via parentTaskId) is appropriate versus standalone tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deal_category_list (Grade: A)
Searches for CRM deals categories (funnels). Use this function when you need to find all categories (funnels) for deals or find the funnel identifier by funnel name.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | CRM deal category (funnel) name (minimum 2 characters). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses this is a search/read operation, but fails to mention critical behavioral details: that the 'name' parameter is optional (omitting it returns all funnels), or what the return structure contains (e.g., IDs and names).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely tight two-sentence structure: first sentence defines purpose, second defines usage context. No redundant or filler text. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple search tool with one optional parameter and no output schema, the description covers basic usage but has gaps. It should explicitly state that omitting the 'name' parameter returns all categories, and ideally hint at the return value structure (list of funnels with IDs) since no output schema exists to document this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage describing the 'name' parameter, the description adds valuable semantic context that this parameter is used to 'find the funnel identifier by funnel name,' clarifying the lookup intent beyond just the string validation rules in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Searches for CRM deals categories (funnels)' with a specific verb and resource. It effectively clarifies that 'categories' and 'funnels' are synonymous, helping distinguish this from the sibling tool 'deal_stage_list' which handles stages rather than categories/funnels.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance: 'when you need to find all categories (funnels) for deals or find the funnel identifier by funnel name.' This clearly indicates both the bulk listing and lookup-by-name use cases. Lacks explicit 'when not to use' or alternative suggestions (e.g., distinguishing from 'deal_stage_list').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deal_stage_list (Grade: A)
Searches for CRM deal stages for deal in the specified category (funnel). Use this function when you need to find all stages for deal in the specified category (funnel) or find the stage identifier by stage name in the specified category (funnel).
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | CRM deal stage name (minimum 2 characters). | |
| categoryId | Yes | Deal category (funnel) identifier. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It identifies the operation as a 'search' (implying read-only), but does not explicitly confirm safety, mention error handling for invalid category IDs, or describe the return structure (e.g., array of stages, identifiers only). It compensates partially by describing the optional name filtering behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with clear front-loading (action first). There is minor redundancy with the phrase 'in the specified category (funnel)' repeated twice in the second sentence, but overall it avoids waste and maintains focus on functional purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, the description adequately covers the input parameters but leaves gaps regarding the return value structure. It mentions finding 'stage identifier[s]' which hints at the output, but for a search tool with no output schema, additional context about what data is returned (full stage objects vs. IDs only) would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Although schema description coverage is 100%, the description adds valuable semantic context beyond the schema. It clarifies that the 'name' parameter is used specifically to 'find the stage identifier by stage name,' explaining the lookup intent, and reinforces that 'categoryId' refers to the funnel context. This adds meaning to the raw parameter definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Searches for CRM deal stages' within a specified category/funnel, using specific verbs and resources. It distinguishes itself from sibling tools like 'deal_category_list' (which lists funnels) and 'lead_stage_list' (which targets leads rather than deals) by explicitly mentioning 'CRM deal stages' and the category context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use guidance: 'Use this function when you need to find all stages... or find the stage identifier by stage name.' This covers both list-all and lookup-by-name scenarios. However, it does not explicitly mention alternatives like 'deal_category_list' for when users need funnel metadata rather than stage lists.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_accomplices (Grade: A)
Removes accomplices from the task. This action is irreversible.
| Name | Required | Description | Default |
|---|---|---|---|
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
| accompliceIds | Yes | Array of user identifiers to remove from accomplices. Must not be an empty array. | |
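Because the action is irreversible, validating arguments before the call is cheap insurance. A sketch under the table's constraints, with hypothetical names and IDs:

```python
# Hypothetical guard for the irreversible delete_accomplices call.
def build_delete_accomplices_args(task_id: int, accomplice_ids: list[int]) -> dict:
    if task_id <= 0:
        raise ValueError("taskId must be a positive integer")
    if not accomplice_ids:
        raise ValueError("accompliceIds must not be an empty array")
    return {"taskId": task_id, "accompliceIds": accomplice_ids}

removal_args = build_delete_accomplices_args(42, [7, 9])
```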
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the destructive/irreversible nature of the operation ('This action is irreversible'), but omits details about required permissions, error handling, or side effects on the task.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. It is appropriately front-loaded with the primary action, followed by the critical safety warning about irreversibility.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 primitive parameters, no output schema), the description adequately covers the essential behavioral trait (irreversibility). It could be improved by mentioning return values or error cases (e.g., non-existent task), but it is sufficiently complete for an agent to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting both 'taskId' and 'accompliceIds' comprehensively. The description adds no parameter-specific details, which is acceptable given the high schema coverage, meeting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the specific action ('Removes') and resource ('accomplices') clearly. It effectively distinguishes from sibling 'add_accomplices' by stating the opposite operation, and differs from 'delete_auditors' and 'delete_task' through the specific resource mentioned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides an irreversibility warning which guides cautious usage, but does not explicitly state when to use this tool versus alternatives like 'add_accomplices' or 'delete_auditors', nor does it mention prerequisites or permissions required.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_auditors (Grade: A)
Removes auditors from the task. This action is irreversible.
| Name | Required | Description | Default |
|---|---|---|---|
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
| auditorIds | Yes | Array of user identifiers to remove from auditors. Must not be an empty array. | |
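The argument shape mirrors delete_accomplices, differing only in the role being removed. A hypothetical sketch of the same guard for auditors:

```python
# Hypothetical guard for the irreversible delete_auditors call.
def build_delete_auditors_args(task_id: int, auditor_ids: list[int]) -> dict:
    if task_id <= 0:
        raise ValueError("taskId must be a positive integer")
    if not auditor_ids:
        raise ValueError("auditorIds must not be an empty array")
    return {"taskId": task_id, "auditorIds": auditor_ids}

auditor_removal_args = build_delete_auditors_args(42, [15])
```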
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the destructive nature (irreversibility) but omits other behavioral details like permission requirements, what happens if auditorIds are invalid, or return value structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: one stating the action, one the irreversible consequence. Zero waste. The critical warning is appropriately included without bloating the description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a straightforward 2-parameter deletion tool without output schema, the description is nearly complete. The irreversibility disclosure is essential. Minor gap: no mention of error behavior (e.g., partial success if some IDs invalid).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both 'taskId' and 'auditorIds' fully documented in the input schema. The description adds no parameter-specific semantics, which is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Removes') and resource ('auditors from the task') that clearly distinguishes this tool from siblings like 'delete_accomplices' or 'delete_task'. The scope is unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The irreversibility warning ('This action is irreversible') provides implied usage guidance—suggesting not to use this if restoration might be needed—but lacks explicit when-to-use comparisons or named alternatives like 'add_auditors'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_check_list (Grade: B)
Deletes an entire checklist or checklist item from a task. This action is irreversible.
| Name | Required | Description | Default |
|---|---|---|---|
| checkListId | Yes | Identifier of the checklist to delete. Must be a positive integer. | |
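For this single-parameter destructive tool, the positive-integer rule is the only client-side check available. A minimal hypothetical sketch:

```python
# Hypothetical guard for the irreversible delete_check_list call.
def build_delete_checklist_args(check_list_id: int) -> dict:
    if check_list_id <= 0:
        raise ValueError("checkListId must be a positive integer")
    return {"checkListId": check_list_id}

checklist_delete_args = build_delete_checklist_args(17)
```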
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry full behavioral disclosure. It successfully communicates the destructive/irreversible nature but lacks details about side effects (cascade behavior on nested items), prerequisites, or return values.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no filler content; the irreversibility warning is appropriately placed. However, the conflation of 'checklist or checklist item' in the first sentence creates slight structural ambiguity given the sibling tool context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter deletion tool without output schema, the description covers the essential operation and safety context. It lacks return value description or prerequisite conditions, but the irreversibility warning provides the critical extra context needed for a destructive operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter, documenting checkListId as a positive integer identifier. The description adds no parameter-specific semantics beyond the schema (examples, format constraints), warranting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the deletion action clearly but ambiguously claims it handles both 'entire checklist or checklist item' when the sibling tool 'delete_check_list_item' exists. Given the schema only accepts checkListId, this creates uncertainty about actual scope versus the sibling tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides the critical safety warning that the action is 'irreversible,' which guides cautious usage. However, it fails to clarify when to use this tool versus the sibling 'delete_check_list_item' despite implying overlapping functionality.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
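The overlap criticized in the last two points could be resolved in the descriptions themselves by naming the sibling tool explicitly. A hypothetical rewrite (the wording is ours, not the server's):

```python
# Hypothetical description rewrites disambiguating the two sibling tools;
# only the "This action is irreversible." sentence comes from the listing.
improved = {
    "delete_check_list": (
        "Deletes an entire checklist, including all of its items, from a task. "
        "To remove a single item, use delete_check_list_item instead. "
        "This action is irreversible."
    ),
    "delete_check_list_item": (
        "Deletes one item from a task checklist, leaving the rest of the "
        "checklist intact. To remove the whole checklist, use "
        "delete_check_list. This action is irreversible."
    ),
}
```

Cross-referencing siblings this way gives an agent the "use X instead of Y when Z" guidance the rubric asks for at near-zero token cost.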
delete_check_list_item (A)
Deletes an entire checklist item from a task. This action is irreversible.
| Name | Required | Description | Default |
|---|---|---|---|
| itemId | No | Identifier of the checklist item to delete. Must be a positive integer. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden of behavioral disclosure. The explicit 'irreversible' warning is critical safety information for a destructive operation. Missing: permission requirements, task impact notifications, or success/failure semantics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first states the action, second states the critical risk. Front-loaded with the verb, no filler words. Perfect efficiency for the complexity level.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately complete for a single-parameter destructive operation with 100% schema coverage. The irreversibility warning covers primary risk. Lacks output description, but acceptable for a simple delete tool with no output schema provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (itemId fully documented), establishing baseline 3. Description adds context that the item belongs 'to a task' (not in schema), but doesn't elaborate on parameter semantics beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Deletes' + resource 'checklist item' + scope 'from a task' clearly defines the operation. The phrase 'entire checklist item' effectively distinguishes this from sibling delete_check_list (which removes the whole list) and update_check_list_item (which modifies).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The irreversibility warning implies when NOT to use (avoid if preservation needed), but lacks explicit guidance on choosing between this and update_check_list_item for modifications, or when to prefer this over delete_check_list for cleanup.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_funnel (C)
Deletes the funnel identified by categoryId for the entity with the entity type identifier entityTypeId
| Name | Required | Description | Default |
|---|---|---|---|
| categoryId | Yes | Identifier of the CRM funnel to be deleted | |
| entityTypeId | Yes | CRM entity type identifier. Possible values: 2 (Deal) or the identifier of a smart-process type | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Deletes' implies a destructive operation, the description fails to disclose critical behavioral traits: whether deletion cascades to stages/deals, if the operation is reversible, permission requirements, or side effects. This is a significant gap for a destructive CRM operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is appropriately front-loaded with the action verb. It is syntactically slightly awkward ('for the entity with the entity type identifier...') but contains no redundant words or unnecessary fluff. Every word serves to connect the operation to its parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter structure with complete schema coverage and no output schema, the description covers the basic invocation contract. However, for a destructive CRM funnel deletion, it lacks important operational context (e.g., deal/stage lifecycle implications) that would be necessary for safe agent usage. Adequate but with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents both parameters adequately. The description references the parameter names (`categoryId`, `entityTypeId`) but adds minimal semantic value beyond the schema itself (e.g., it does not clarify why a funnel is identified by 'categoryId' rather than 'funnelId', nor expand on the smart-process entity types). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb 'Deletes' and resource 'funnel', and references the exact parameter names (`categoryId`, `entityTypeId`) used in the schema. However, it does not explicitly differentiate from the sibling tool `delete_funnel_stage` (which deletes stages within funnels), though the tool name itself provides some distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, prerequisites (e.g., emptying the funnel first), or warnings about data loss. Given siblings include `move_deals_between_funnels` and `delete_funnel_stage`, the absence of guidance on deal migration or stage vs funnel deletion is notable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_funnel_stage (C)
Deletes the stage identified by stageId for the entity with the entity type identifier entityTypeId in the funnel identified by categoryId.
| Name | Required | Description | Default |
|---|---|---|---|
| stageId | Yes | Identifier of the funnel stage to be deleted. Must not be an empty string. | |
| categoryId | Yes | Identifier of the CRM entity funnel. Must be null if entityTypeId is 1 or 7. In all other cases, categoryId must not be null | |
| entityTypeId | Yes | CRM entity type identifier. Possible values: 1 (Lead), 2 (Deal), 7 (Proposal), 31 (Invoice), or an identifier of a smart-process type | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure, yet it only describes the mechanical action without explaining consequences. It omits critical details such as whether deletion is permanent, what happens to deals currently in the stage, or required permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that conveys the core action, but it is unnecessarily wordy with repetitive phrases ('identified by' used twice, 'entity with the entity type identifier'). It could be more front-loaded by starting with the action rather than burying it in identifiers.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive operation with three required parameters and no output schema or annotations, the description is insufficient. It lacks explanation of side effects, return values, or error conditions that would help an agent invoke the tool safely and handle outcomes appropriately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameters are fully documented in the schema itself. The description adds minimal relational context by mapping parameters to their domain roles (stage 'for' entity 'in' funnel), but does not add syntax details or usage constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool deletes a funnel stage and identifies the specific resources involved (stage, entity type, funnel). It distinguishes itself from sibling 'delete_funnel' by specifying 'stage' and referencing stageId versus the funnel as a whole.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'rename_funnel_stages' or 'update_funnel_stages'. It fails to mention prerequisites such as whether deals must be moved out of the stage before deletion or if the operation is reversible.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
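The categoryId/entityTypeId interaction documented in the schema ("Must be null if entityTypeId is 1 or 7") is exactly the kind of cross-parameter rule an agent is likely to get wrong on a first attempt. Here is a sketch of that rule as a plain validation function; the function name and error wording are ours, and a declarative JSON Schema if/then clause could express the same constraint:

```python
# Cross-parameter constraint from the delete_funnel_stage input schema,
# expressed as a plain validation helper (illustrative, not the server's code).
def validate_stage_args(args: dict) -> list[str]:
    """Return a list of constraint violations for delete_funnel_stage inputs."""
    errors = []
    etype = args.get("entityTypeId")
    cat = args.get("categoryId")
    if etype in (1, 7):  # Lead or Proposal: the funnel is implicit
        if cat is not None:
            errors.append("categoryId must be null when entityTypeId is 1 or 7")
    else:                # Deal, Invoice, or smart-process: funnel is required
        if cat is None:
            errors.append("categoryId must not be null for this entityTypeId")
    if not args.get("stageId"):
        errors.append("stageId must be a non-empty string")
    return errors
```

Encoding the rule in the schema (or restating it in the description) means an agent can construct a valid call without a trial-and-error round trip.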
delete_task (A)
Deletes the specified task. This action is irreversible.
| Name | Required | Description | Default |
|---|---|---|---|
| taskId | Yes | Identifier of the task to delete. Must be a positive integer. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the destructive, permanent nature of the operation. However, it omits critical details about side effects—specifically whether deleting a task cascades to delete related checklists, reminders, or accomplices (given the existence of separate 'delete_check_list' and 'delete_accomplices' tools).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences total: the first states the action, the second delivers a critical safety warning. Every word earns its place; no filler or redundant information. Perfectly front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (single integer parameter) and lack of output schema, the description covers the basic minimum. However, given the rich ecosystem of sibling tools managing task relationships (checklists, reminders, auditors), the description is incomplete regarding cascading behavior or cleanup of related entities.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the 'taskId' parameter fully documented as 'Identifier of the task to delete'. The description references 'the specified task', which aligns with the schema but adds no additional semantic context or usage guidance beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Deletes') with a specific resource ('task'), clearly distinguishing this tool from siblings like 'create_task', 'update_task', and 'search_tasks'. The scope is immediately obvious.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The irreversibility warning ('This action is irreversible') provides implicit behavioral guidance, cautioning against accidental use. However, it lacks explicit guidance on when to use this versus alternatives like 'update_task' or 'detach_task_from_group', or prerequisites like ownership checks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
detach_task_from_group (C)
Detaches a task from a group.
| Name | Required | Description | Default |
|---|---|---|---|
| taskId | No | Identifier of the task to update. Must be a positive integer. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to explain what happens to the task after detachment (e.g., whether it becomes ungrouped, moves to a default group, or if the operation is reversible), nor does it indicate if this is a destructive action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single five-word sentence with no extraneous information or redundancy, making it extremely concise. However, this extreme brevity comes at the cost of utility, as critical behavioral context is omitted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the simple single-parameter schema, the description is insufficient for a mutation tool with no output schema or annotations. It omits critical behavioral context such as error conditions, side effects, or the resulting state of the detached task after the operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema provides 100% description coverage for the single `taskId` parameter, documenting it as the 'Identifier of the task to update' with type constraints. Since the schema fully documents the parameter semantics, the description does not need to compensate beyond the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Detaches a task from a group' merely converts the snake_case tool name into sentence form, adding no new information about the scope or nature of the operation. It fails to distinguish this tool from generic update operations or explain what constitutes a 'group' in this context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like `update_task` or `delete_task`, nor does it mention prerequisites such as task existence or user permissions required to modify task associations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_task_by_id (C)
Retrieves full task data by its identifier.
| Name | Required | Description | Default |
|---|---|---|---|
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. While 'full task data' hints at response completeness, description omits error behavior (what happens if ID doesn't exist?), safety confirmation (read-only nature), and rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, six words. No wasted text, though extreme brevity limits information density. Front-loaded with verb and object.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Minimal viable description for a single-parameter lookup tool. Mentions 'full' data which compensates slightly for missing output schema, but lacks error handling documentation expected for a retrieval operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage ('Identifier of the task. Must be a positive integer.'). Description adds no parameter details, but with complete schema coverage, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Retrieves') and resource ('full task data'), but lacks explicit differentiation from sibling tool 'search_tasks'. The phrase 'by its identifier' implies direct lookup versus search, but explicit comparison would strengthen selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'search_tasks'. No mention of prerequisites (e.g., knowing the task ID) or failure modes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lead_stage_list (B)
Searches for CRM leads stages. Use this function when you need to find all stages for leads or find the stage identifier by stage name.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | CRM lead stage name (minimum 2 characters). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. While 'Searches' implies a read-only operation, the description lacks details about return format, pagination behavior, search semantics (exact vs partial matching), or what occurs when no matches are found.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first establishes purpose, the second establishes usage context. It is appropriately sized for a single-parameter lookup tool, though slightly more detail on the optional nature of the parameter could be front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one optional parameter with full schema coverage, the description adequately covers primary usage scenarios. However, with no output schema provided and no annotations, the description should ideally disclose the return structure or at least confirm it returns stage identifiers and names, rather than just implying the output through usage instructions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the schema already documents the 'name' parameter's constraints (minLength, maxLength). The description adds contextual value by clarifying the parameter is used to 'find the stage identifier', but does not explain that the parameter is optional (0 required parameters) or how the search behaves when the parameter is omitted or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Searches) and resource (CRM leads stages). It implicitly distinguishes from the sibling tool 'deal_stage_list' by specifying 'leads stages' rather than 'deal stages', though it doesn't explicitly contrast with creation-oriented siblings like 'create_new_funnel_stage'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use guidance ('when you need to find all stages for leads or find the stage identifier by stage name'), covering both the list-all and search-by-name use cases. However, it lacks guidance on when NOT to use this (e.g., for deal stages) or error handling guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
move_deals_between_funnels (B)
Moves all deals (maximum 100) from one funnel identified by `from` to another funnel identified by `to`. Use this function when explicitly instructed to transfer deals between different funnels
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Identifier of the funnel to which the deals will be transferred | |
| from | Yes | Identifier of the funnel from which the deals will be transferred | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full behavioral burden. It discloses the critical 100 deal limit, but omits details on failure modes (truncate vs error if >100), whether the operation is atomic, side effects on source funnel data, or the return value structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence front-loads the action and constraints; the second provides usage context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a bulk mutation tool with no output schema, the description covers the core operation and primary constraint (100 limit) but lacks critical safety context: it does not describe the outcome/result object, warn about irreversibility, or explain error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. The description references parameters `from` and `to` in backticks mapping them to source/destination funnels, but adds no semantic depth beyond the schema's existing 'Identifier...' descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (moves), resource (deals), and scope (between funnels, max 100). It implicitly distinguishes from the sibling `move_deals_between_stages` by consistently using 'funnels' terminology, though explicit contrast would strengthen this.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The guidance 'when explicitly instructed to transfer deals' is overly restrictive and passive, failing to proactively guide the agent. It names no alternatives (e.g., `move_deals_between_stages`) and provides no 'when-not-to-use' or prerequisite warnings (e.g., regarding the 100 deal limit).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
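Because the 100-deal cap is disclosed but its overflow behavior is not, a cautious caller has to assume repeated invocation. A sketch of such a caller-side loop follows; `call_tool`, the `moved` count field, and the stop condition are all assumptions, since the tool publishes no output schema:

```python
def move_all_deals(call_tool, src_funnel: int, dst_funnel: int,
                   max_rounds: int = 50) -> int:
    """Repeatedly invoke move_deals_between_funnels until the source is empty.

    `call_tool` stands in for an MCP client call; we assume each call moves
    at most 100 deals and returns {"moved": <int>} -- both assumptions,
    since the tool defines no output schema.
    """
    total = 0
    for _ in range(max_rounds):  # bound the loop in case "moved" never reaches 0
        result = call_tool("move_deals_between_funnels",
                           {"from": src_funnel, "to": dst_funnel})
        moved = result.get("moved", 0)
        total += moved
        if moved < 100:  # fewer than the cap means the source funnel is drained
            break
    return total
```

A single sentence in the description stating what happens above 100 deals (truncate, error, or paginate) would make this workaround unnecessary.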
move_deals_between_stages (A)
Moves all deals (maximum 100) from the stage identified by from to the stage identified by to within the funnel specified by categoryId. Use this function when explicitly instructed to transfer deals between different stages of the same funnel
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Identifier of the stage to which the deals should be transferred | |
| from | Yes | Identifier of the stage from which the deals should be transferred | |
| categoryId | Yes | Identifier of the funnel within which deals are moved between stages | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so the description carries full disclosure burden. It adds the critical 'maximum 100' limit not present in schema, but omits what happens when exceeding 100 deals, whether the operation is atomic, or what the function returns. Adds some value but leaves significant behavioral gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences with zero redundancy. The first sentence front-loads the core action with parameter mapping; the second provides usage guidance. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a bulk mutation operation with no annotations or output schema, the description should ideally address error handling, return values, or partial failure behavior. It adequately covers the 100-deal constraint but leaves the agent uncertain about success/failure feedback.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema fully documents each parameter. The description references parameters contextually ('stage identified by from', etc.), adding relational semantics between the three IDs, but doesn't add format examples or validation rules beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Moves all deals') and resources, clearly identifying the operation scope. It distinguishes from sibling 'move_deals_between_funnels' by emphasizing 'within the funnel' and 'same funnel', clarifying this is for intra-funnel moves only.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this function when explicitly instructed to transfer deals between different stages of the same funnel', providing clear context. However, it doesn't explicitly name the sibling alternative for cross-funnel moves or state the precondition that from/to must be different stages.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rename_funnel_stagesBInspect
Renames the stage names for the entity with an entity type identifier entityTypeId in the funnel identified by categoryId. Use this function when explicitly instructed to rename one or more stages
| Name | Required | Description | Default |
|---|---|---|---|
| names | Yes | Object containing information about the funnel stage and its new name | |
| categoryId | Yes | Identifier of the CRM entity funnel. Must be null if entityTypeId is 1 or 7. In all other cases, categoryId must not be null | |
| entityTypeId | Yes | CRM entity type identifier. Possible values: 1 (Lead), 2 (Deal), 7 (Proposal), 31 (Invoice), or an identifier of a smart-process type |
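The conditional rule on `categoryId` is the easiest thing to get wrong here, so it may be worth checking client-side before calling the tool. The helper below is a sketch of the rule exactly as the schema states it; the function name is made up, not part of the server:

```python
def category_id_is_valid(entity_type_id: int, category_id) -> bool:
    """Schema rule: categoryId must be null for Leads (1) and
    Proposals (7), and must be non-null for every other entity type."""
    if entity_type_id in (1, 7):
        return category_id is None
    return category_id is not None

# Deals (2) live in an explicit funnel; Leads (1) do not.
assert category_id_is_valid(2, 5)       # Deal in funnel 5: OK
assert category_id_is_valid(1, None)    # Lead without a funnel: OK
assert not category_id_is_valid(7, 3)   # Proposal with a funnel: invalid
```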
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full disclosure burden. While 'renames' implies mutation, the description fails to disclose whether this operation is reversible, what happens if stageIds don't exist, whether it triggers side effects in the CRM, or any permission requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Exactly two sentences with zero redundancy. The first sentence front-loads the action and resource; the second provides usage context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 100% schema coverage, the parameter constraints are documented in the structured fields, compensating for the lack of detail in the prose description. However, given the complex nested array structure of 'names' and important business logic constraints, the description could be more complete regarding error conditions or return behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description references the parameters (entityTypeId, categoryId) but adds no semantic clarification beyond the schema, such as explaining the complex conditional logic that categoryId must be null for entity types 1 and 7.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (renames) and target resource (stage names for entity/funnel) using the exact parameter names. It implicitly distinguishes from sibling 'rename_funnel_title' by focusing on 'stage names' rather than the funnel title itself, though it doesn't explicitly mention this distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence provides explicit guidance ('when explicitly instructed to rename'), but lacks contrapositive guidance (when NOT to use) or differentiation from the sibling 'update_funnel_stages' which may handle broader stage modifications beyond just names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rename_funnel_titleAInspect
Renames the funnel identified by categoryId to title for an entity with the entity type identifier entityTypeId. Use this function to update the existing CRM funnel name
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | New funnel title. Must not be an empty string | |
| categoryId | Yes | Identifier of the CRM entity sales funnel, also referred to as 'category' | |
| entityTypeId | Yes | CRM entity type identifier. Can be either 2 (Deal) or a smart-process type identifier |
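A minimal argument payload, with the non-empty-title constraint enforced on the client side. The IDs and title are placeholders:

```python
# Hypothetical arguments for rename_funnel_title.
args = {
    "entityTypeId": 2,           # 2 = Deal (or a smart-process type ID)
    "categoryId": 5,             # funnel ("category") being renamed
    "title": "Enterprise Sales", # must not be an empty string
}
assert args["title"].strip() != ""
```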
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It identifies the operation as an update/rename but lacks disclosure about error behavior (e.g., what if categoryId doesn't exist?), whether the operation is atomic, or any side effects on deals within the funnel. 'Update' implies mutation but lacks safety context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. The first sentence front-loads the core operation with parameter placeholders, while the second provides the usage context. No redundancy or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 100% schema coverage and only 3 required parameters, the param semantics are well handled. However, lacking annotations and an output schema, the description omits return value documentation and error conditions which would help an agent handle the response appropriately. Adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description maps parameter references into a coherent sentence (categoryId identifies the funnel, title is the new name) but doesn't add semantic details beyond what the schema already documents (e.g., no format constraints, business rules, or examples beyond schema descriptions).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Renames', 'update') and identifies the resource (CRM funnel) clearly. It references the key parameters (categoryId, title, entityTypeId) distinguishing this as a funnel-title operation versus sibling 'rename_funnel_stages', though it doesn't explicitly contrast with stage management tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence provides explicit guidance: 'Use this function to update the existing CRM funnel name'. This clearly defines the use case (updating existing vs creating new). However, it lacks explicit exclusions or named alternatives like 'create_default_funnel' for when this tool should not be used.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_tasksAInspect
Searches for tasks based on various criteria. Returns their identifiers, names and deadline. If you want to search for tasks by 'RESPONSIBLE_ID', 'CREATED_BY' or 'GROUP_ID', the value must be a numeric identifier, not a user's or group's name. If you don't know identifiers, use another tools for find it firstly.
| Name | Required | Description | Default |
|---|---|---|---|
| tag | No | Tag name to search for or null if not needed. | |
| title | No | Keyword to search in the task title or null if not needed. | |
| status | No | Task status to search for. By default, tasks in progress are searched. | |
| groupId | No | Identifier of the group. Must be a positive integer or null if not needed. | |
| memberId | No | Identifier of a task member. Leave null for the current user. | |
| auditorId | No | Identifier of the auditor. Must be a positive integer or null if not needed. | |
| creatorId | No | Identifier of the user who created the task. Must be a positive integer or null if not needed. | |
| deadlineTo | No | The end of the deadline range in 'Y/m/d H:i' format or null if not needed. | |
| description | No | Keyword to search in the task description or null if not needed. | |
| accompliceId | No | Identifier of the accomplice. Must be a positive integer or null if not needed. | |
| deadlineFrom | No | The start of the deadline range in 'Y/m/d H:i' format or null if not needed. | |
| responsibleId | No | Identifier of the responsible user. Must be a positive integer or null if not needed. |
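The `'Y/m/d H:i'` deadline format is PHP date notation, which maps to Python's `%Y/%m/%d %H:%M`, and every ID filter takes a number rather than a name. A sketch of a filter payload under those assumptions (IDs and dates are made up):

```python
from datetime import datetime

FMT = "%Y/%m/%d %H:%M"  # Bitrix24's 'Y/m/d H:i'

args = {
    "responsibleId": 17,   # numeric user ID, never a name
    "groupId": 42,         # numeric group ID
    "title": "invoice",    # keyword matched against task titles
    "deadlineFrom": datetime(2024, 6, 1, 9, 0).strftime(FMT),
    "deadlineTo": datetime(2024, 6, 30, 18, 0).strftime(FMT),
}
assert args["deadlineFrom"] == "2024/06/01 09:00"
```

If only a user's name is known, the IDs would first be resolved with a lookup tool such as `search_users`.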
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return structure (identifiers, names, deadline) which compensates for missing output schema. Omits pagination limits, result ordering, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences covering purpose, returns, ID constraints, and prerequisites. Logical flow with front-loaded purpose. Minor grammatical awkwardness ('another tools', 'find it firstly') doesn't impede clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Good coverage for 12-parameter tool with 100% schema coverage. Description supplements schema with return value documentation and ID lookup prerequisites. Adequate without output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds critical semantic constraint: IDs must be numeric identifiers, not names—clarifying the integer type semantics beyond schema validation. Maps param concepts (RESPONSIBLE_ID, CREATED_BY) to schema fields despite minor mismatch.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Searches' and resource 'tasks'. Specifies return values (identifiers, names, deadline). Could distinguish from sibling get_task_by_id (search vs specific lookup) but generally clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides prerequisite guidance: 'If you don't know identifiers, use another tools for find it firstly' (referencing search_users). However, lacks explicit when-to-use vs get_task_by_id or when to prefer filtering vs retrieving all.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_usersAInspect
Searches for a SINGLE user to get their ID by trying a list of possible name variations. Use this to find one user's ID, which can then be used in other tools.
| Name | Required | Description | Default |
|---|---|---|---|
| searchQueries | Yes | A list of name variations for the user you are trying to find. Provide different spellings, diminutive forms, or transliterations. |
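The name-variation strategy looks like this in practice. A hypothetical query list for resolving one user, mixing a formal spelling, a transliteration, and a diminutive:

```python
# Hypothetical searchQueries payload: several renderings of the same
# person, since the tool resolves a SINGLE user from the variations.
args = {
    "searchQueries": ["Aleksandr Petrov", "Alexander Petrov", "Sasha Petrov"],
}
assert len(args["searchQueries"]) >= 1
```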
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It adds valuable behavioral constraints by emphasizing 'SINGLE' user (managing expectations about result cardinality) and explaining the name variation search strategy. However, it omits critical behavioral details: error handling when no matches exist, how multiple matches are resolved, return value structure, and whether the operation is read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence front-loads the core purpose and unique constraint (SINGLE user). The second sentence provides usage context. Every word earns its place; capitalization of 'SINGLE' efficiently communicates a critical behavioral limitation without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's singular purpose (ID resolution) and simple input schema (one array parameter), the description adequately covers intent and usage patterns. It compensates for the missing output schema by stating the tool retrieves an 'ID'. Minor gap: does not specify the ID's data type or format (string vs integer vs object), which would be helpful given the 'can then be used in other tools' dependency chain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description references 'list of possible name variations' which aligns with the searchQueries parameter semantics, but primarily restates information already present in the schema description ('different spellings, diminutive forms'). No additional syntax guidance or format constraints are added beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states the tool 'Searches for a SINGLE user to get their ID by trying a list of possible name variations.' This provides specific verb (searches), resource (user), scope (single), output (ID), and method (name variations). It effectively distinguishes from sibling tools, as this is the only user lookup tool among task/deal management utilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence 'Use this to find one user's ID, which can then be used in other tools' provides explicit guidance on when to invoke the tool (as a prerequisite step for user-dependent operations) and establishes the workflow context. However, it lacks explicit 'when not to use' guidance or named alternatives, though none appear to exist among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
send_chat_messageAInspect
Sends a free-form message in the task's chat on behalf of the current user. Also can be used to leave a comment on a task.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Text to send. Must not be an empty string. | |
| taskId | Yes | Identifier of the task whose chat to post into. Must be a positive integer. |
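A minimal payload, checking both schema constraints before the call. The task ID and text are placeholders:

```python
# Hypothetical arguments for send_chat_message.
args = {
    "taskId": 123,                               # positive integer
    "text": "Deployed the fix, please verify.",  # must not be empty
}
assert args["taskId"] > 0 and args["text"] != ""
```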
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully clarifies the message is sent 'on behalf of the current user' (important authorization context) and targets the 'task's chat.' However, it omits other critical behavioral traits: whether notifications are triggered, if the operation is idempotent, error handling for invalid task IDs, and what success/failure looks like.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of exactly two efficient sentences with zero wasted words. It is front-loaded with the primary action ('Sends a free-form message') and immediately qualifies the actor ('on behalf of the current user') and secondary use case, earning its place with every phrase.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 parameters with 100% schema coverage) and absence of an output schema, the description adequately covers the essential context: the action, the delegate (current user), and the dual nature of the operation (chat vs. comment). It could be improved by mentioning success indicators or side effects like notifications, but it is sufficient for tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds modest semantic value by contextualizing the text parameter as a 'free-form' message or 'comment' and the taskId as referring to the 'task's chat,' but does not elaborate on format constraints, examples, or validation behavior beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool sends a 'free-form message in the task's chat' and can 'leave a comment,' providing specific verbs and resources. However, it does not explicitly differentiate from the sibling tool `add_task_result`, which could cause confusion about when to use chat messages versus task results.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies dual usage patterns (chat message vs. comment) with 'Also can be used,' suggesting flexibility. However, it lacks explicit guidance on when to choose this tool over alternatives like `add_task_result` or `update_task`, and provides no prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_daily_task_recurrenceBInspect
Sets up a daily recurrence for a task. Allows specifying an interval and can restrict the recurrence to workdays only.
| Name | Required | Description | Default |
|---|---|---|---|
| time | No | Time of day for the recurrence in 'HH:MM:SS' format. Default is 05:00. | |
| times | No | If repeatTill is "times", it must be the number of repeats. | |
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
| endDate | No | End date for recurrence if repeatTill is 'date', in 'Y/m/d H:i' format. | |
| everyDay | No | The number of days between repetitions. | |
| startDate | No | Recurrence start date, in 'Y/m/d H:i' format. Defaults to the save date. | |
| repeatTill | No | Condition for stopping recurrence. | |
| workdayOnly | No | If true, the task will only recur on workdays. Default is false. | |
| dailyMonthInterval | No | The interval in months between repetitions of the task. Default is 0. |
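The `repeatTill` value determines which companion field matters, a relationship the description never spells out. The builder below sketches one plausible reading of that conditional logic; the function name and the assumed rule are not documented server behavior:

```python
def daily_recurrence_args(task_id, every_day=1, repeat_till=None,
                          times=None, end_date=None):
    """Builds arguments for set_daily_task_recurrence. Assumed rule:
    'times' applies only when repeatTill == 'times', and endDate only
    when repeatTill == 'date'."""
    args = {"taskId": task_id, "everyDay": every_day}
    if repeat_till == "times":
        args["repeatTill"] = "times"
        args["times"] = times        # number of repetitions
    elif repeat_till == "date":
        args["repeatTill"] = "date"
        args["endDate"] = end_date   # 'Y/m/d H:i' string
    return args

# Repeat every 2 days, 10 times in total.
assert daily_recurrence_args(7, every_day=2, repeat_till="times", times=10) == {
    "taskId": 7, "everyDay": 2, "repeatTill": "times", "times": 10,
}
```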
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, yet description fails to disclose mutation behavior (overwrites existing recurrence?), return values, or conditional parameter logic (repeatTill determines validity of times/endDate). Does not clarify confusing dailyMonthInterval parameter semantics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose. No redundancy or filler. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 9 parameters, complex conditional logic, no output schema, and no annotations, the description is incomplete. Fails to explain return structure, side effects, or the interaction between repeatTill, times, and endDate parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description mentions 'interval' and 'workdays' but doesn't clarify which parameter is which, nor the conditional relationships between repeatTill and dependent parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Sets up a daily recurrence for a task' with specific verb and resource. Explicitly 'daily' distinguishes it from sibling tools (set_weekly_task_recurrence, set_monthly_*, set_yearly_*).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this vs. weekly/monthly/yearly alternatives, nor prerequisites (e.g., task must exist). Relies entirely on tool name for differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_monthly_by_month_days_task_recurrenceBInspect
Sets up a monthly recurrence for a task based on a specific day of the month. Use this to make a task repeat on the same date every month or every few months.
| Name | Required | Description | Default |
|---|---|---|---|
| time | No | Time of day for the recurrence in 'HH:MM:SS' format. Default is 05:00. | |
| times | No | If repeatTill is "times", it must be the number of repeats. | |
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
| endDate | No | End date for recurrence if repeatTill is 'date', in 'Y/m/d H:i' format. | |
| startDate | No | Recurrence start date, in 'Y/m/d H:i' format. Defaults to the save date. | |
| repeatTill | No | Condition for stopping recurrence. | |
| workdayOnly | No | If true, the task will only recur on workdays. Default is false. | |
| monthlyDayNum | No | The day of the month on which the task repetition will be created. Starts with 1. | |
| monthlyMonthNum1 | No | The month on which the task repetition will be created. Starts with 0. | |
| monthlyMonthNum2 | No | The month in which the task repetition will be created in the week is specified here. Starts with 0. |
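The mixed indexing is a likely trip-up: `monthlyDayNum` is 1-based while the month fields are 0-based. A hypothetical payload for "the 15th of every month"; the month-field semantics are an assumption, since the schema wording is ambiguous:

```python
# "Repeat on the 15th of every month" — values are illustrative only.
args = {
    "taskId": 42,
    "monthlyDayNum": 15,    # 1-based day of month
    "monthlyMonthNum1": 0,  # 0-based month field (semantics unclear in schema)
    "time": "09:00:00",     # 'HH:MM:SS'
}
assert 1 <= args["monthlyDayNum"] <= 31
```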
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Sets up' implies a write operation, the description lacks critical context: whether calling this overwrites existing recurrence settings, whether it's idempotent, what happens if the specified day doesn't exist in a given month (e.g., the 31st), or any side effects on the task.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two well-structured sentences with zero waste: the first establishes the core purpose upfront, and the second provides usage context. Every word earns its place, and the length is appropriate for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (10 parameters including ambiguously named interval controls) and lack of output schema or annotations, the description is insufficient. It fails to explain the relationship between `monthlyMonthNum1` and `monthlyMonthNum2`, doesn't clarify the recurrence interval logic beyond 'every few months', and omits error handling or edge case behavior for this scheduling mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds minimal semantic value by referencing 'same date every month or every few months', which hints at the `monthlyDayNum` and interval parameters. However, it fails to clarify the confusing `monthlyMonthNum1` and `monthlyMonthNum2` parameters, which have cryptic schema descriptions mentioning 'starts with 0' and one erroneously referencing 'in the week'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Sets up' and clearly identifies the resource as 'monthly recurrence for a task based on a specific day of the month'. It effectively distinguishes from daily/weekly/yearly siblings by specifying 'monthly' and 'day of the month', though it doesn't explicitly contrast with the sibling tool for monthly-by-weekday recurrence.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context with 'Use this to make a task repeat on the same date every month', indicating the pattern for when to use it. However, it lacks explicit guidance on when NOT to use it versus the close sibling `set_monthly_by_week_days_task_recurrence` (e.g., for 'first Monday' patterns), leaving the agent to infer from the parameter names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_monthly_by_week_days_task_recurrenceAInspect
Sets up a monthly recurrence for a task based on the day of the week and its order within the month. Use for complex patterns like "the second Friday of every month" or "the last Monday every 3 months".
| Name | Required | Description | Default |
|---|---|---|---|
| time | No | Time of day for the recurrence in 'HH:MM:SS' format. Default is 05:00. | |
| times | No | If repeatTill is "times", it must be the number of repeats. | |
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
| endDate | No | End date for recurrence if repeatTill is 'date', in 'Y/m/d H:i' format. | |
| startDate | No | Recurrence start date, in 'Y/m/d H:i' format. Defaults to the save date. | |
| repeatTill | No | Condition for stopping recurrence. | |
| workdayOnly | No | If true, the task will only recur on workdays. Default is false. | |
| monthlyWeekDay | No | The day of the week on which the task will be repeated. Starts with 0. | |
| monthlyMonthNum2 | No | The month in which the task repetition will be created in the week is specified here. Starts with 0. | |
| monthlyWeekDayNum | No | The week of the month in which the task will be repeated. Starts with 0. | |
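A hypothetical payload for the "second Friday of every month" pattern from the description. Both fields are 0-based per the schema, but which day counts as 0 is not documented; Friday = 5 below assumes a Sunday-first week:

```python
# "Second Friday of every month" — weekday numbering is an assumption.
args = {
    "taskId": 42,
    "monthlyWeekDay": 5,     # Friday, assuming 0 = Sunday
    "monthlyWeekDayNum": 1,  # 0-based: 1 means the second such weekday
    "time": "10:00:00",      # 'HH:MM:SS'
}
assert args["monthlyWeekDayNum"] == 1
```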
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full burden of behavioral disclosure. While the examples transparently illustrate the recurrence pattern logic, the description omits mutation details such as whether this overwrites existing recurrence settings, creates future task instances immediately, or requires specific permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero redundancy. The first sentence front-loads the core action and scope, while the second provides immediately useful pattern examples. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 10 parameters with 100% schema coverage and complex recurrence logic, the description provides sufficient conceptual context through examples. No output schema exists, and the description appropriately does not attempt to document return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description maps conceptual examples ('second Friday') to the parameter intent but does not add syntax details, validation rules, or explicit parameter mappings beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Sets up') and clearly identifies the resource ('monthly recurrence for a task'). It effectively distinguishes from the sibling tool `set_monthly_by_month_days_task_recurrence` by specifying 'based on the day of the week and its order within the month' versus date-based recurrence.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides concrete usage patterns ('second Friday of every month', 'last Monday every 3 months') that clarify when to use this tool. However, it lacks explicit guidance on when to prefer the sibling `set_monthly_by_month_days_task_recurrence` (for specific dates like the 15th) versus this week-day based approach.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_weekly_task_recurrence (A)
Sets up a weekly recurrence for a task. Allows selecting specific days of the week (e.g., Monday, Wednesday) and setting a weekly interval (e.g., every 2 weeks).
| Name | Required | Description | Default |
|---|---|---|---|
| time | No | Time of day for the recurrence in 'HH:MM:SS' format. Default is 05:00. | |
| times | No | Number of repeats; required when repeatTill is 'times'. | |
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
| endDate | No | End date for recurrence if repeatTill is 'date', in 'Y/m/d H:i' format. | |
| weekDays | No | Numbers of the days of the week on which the task repeats. | |
| everyWeek | No | The number of weeks between repetitions. | |
| startDate | No | Recurrence start date, in Y/m/d H:i format. Defaults to the date the recurrence is saved. | |
| repeatTill | No | Condition for stopping recurrence. | |
| workdayOnly | No | If true, the task will only recur on workdays. Default is false. |
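To make the parameters above concrete, here is a minimal sketch of an argument payload for "every 2 weeks on Monday and Wednesday, until the end of 2025". The task ID is hypothetical, and the mapping of 0 to Monday is an assumption the schema does not confirm.

```python
# Hypothetical arguments for set_weekly_task_recurrence:
# every 2 weeks on Monday and Wednesday, until 2025/12/31.
args = {
    "taskId": 42,                    # assumed existing task ID
    "weekDays": [0, 2],              # Mon and Wed, ASSUMING 0 = Monday
    "everyWeek": 2,                  # repeat every second week
    "time": "10:00:00",              # 'HH:MM:SS' format
    "repeatTill": "date",
    "endDate": "2025/12/31 18:00",   # 'Y/m/d H:i', used since repeatTill is "date"
}
```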
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the weekly recurrence model (day selection and intervals) with helpful examples, but fails to disclose mutation semantics, idempotency, error conditions, or what happens to existing recurrence configurations when this is called.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is optimally structured with two focused sentences. The first establishes the core purpose, while the second details the key capabilities with examples. Every word earns its place; no redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (9 parameters controlling termination conditions, workday restrictions, and timing), the description is minimally viable. It covers the weekly-specific logic but omits context about termination modes (repeatTill options), workdayOnly behavior, or time settings, which are critical for proper invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage (baseline 3), the description adds valuable semantic context with concrete examples ('Monday, Wednesday', 'every 2 weeks') that help an LLM understand how to populate the weekDays and everyWeek parameters effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (sets up weekly recurrence) and resource (task). It effectively distinguishes from siblings like set_daily_task_recurrence or set_monthly_by_week_days_task_recurrence by emphasizing week-specific features: 'specific days of the week' and 'weekly interval'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the other recurrence variants (daily, monthly, yearly). It omits prerequisites (e.g., that the task must exist first) and doesn't indicate whether this overwrites existing recurrence settings or how to stop recurrence.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_yearly_by_month_days_task_recurrence (A)
Sets up a yearly (annual) recurrence for a task on a specific calendar date. Use this to make a task repeat on the same day and month each year, such as "every April 25th".
| Name | Required | Description | Default |
|---|---|---|---|
| time | No | Time of day for the recurrence in 'HH:MM:SS' format. Default is 05:00. | |
| times | No | Number of repeats; required when repeatTill is 'times'. | |
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
| endDate | No | End date for recurrence if repeatTill is 'date', in 'Y/m/d H:i' format. | |
| startDate | No | Recurrence start date, in Y/m/d H:i format. Defaults to the date the recurrence is saved. | |
| repeatTill | No | Condition for stopping recurrence. | |
| workdayOnly | No | If true, the task will only recur on workdays. Default is false. | |
| yearlyDayNum | No | The day of the month on which the task will recur. Starts with 1. | |
| yearlyMonth1 | No | The month in which the task will recur. Starts with 0. | |
| yearlyMonth2 | No | The month in which the task repetition will be created in the week is specified here. Starts with 0. |
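Following the "every April 25th" example from the description, a minimal sketch of the argument payload might look like this. The task ID is hypothetical; note the mixed indexing the schema documents (days are 1-based, months 0-based).

```python
# Hypothetical arguments for set_yearly_by_month_days_task_recurrence:
# repeat every April 25th at the default time.
args = {
    "taskId": 42,        # assumed existing task ID
    "yearlyDayNum": 25,  # day of the month, 1-based per the schema
    "yearlyMonth1": 3,   # April, since months start with 0
}
```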
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to mention that this is a mutating operation, whether it overwrites existing recurrence patterns, or what happens on success/failure. The description only covers the recurrence pattern logic, not the operational side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences. The first front-loads the core action, and the second provides a concrete usage example. There is no redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 10 parameters and complex recurrence logic, the description is minimally adequate but has gaps. It does not address mutation side effects, return values (though no output schema exists), or the confusing yearlyMonth2 parameter. The 100% schema coverage compensates for some gaps, but behavioral context is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds semantic context through the 'April 25th' example, clarifying the date parameters (yearlyDayNum, yearlyMonth1). However, it does not resolve the ambiguity between yearlyMonth1 and yearlyMonth2 (whose schema description confusingly mentions 'in the week'), nor explain the interaction between repeatTill, times, and endDate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool sets up 'yearly (annual) recurrence for a task on a specific calendar date' and provides a concrete example ('every April 25th'). The phrase 'specific calendar date' effectively distinguishes this from the sibling tool set_yearly_by_week_days_task_recurrence.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool ('Use this to make a task repeat on the same day and month each year'). However, it does not explicitly mention when NOT to use it (e.g., for recurring on specific weekdays like 'first Monday of April') or name the alternative sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_yearly_by_week_days_task_recurrence (A)
Sets up a yearly recurrence for a task based on the day of the week and its order within a specific month. Use for complex annual patterns like "the fourth Thursday of November" every year.
| Name | Required | Description | Default |
|---|---|---|---|
| time | No | Time of day for the recurrence in 'HH:MM:SS' format. Default is 05:00. | |
| times | No | Number of repeats; required when repeatTill is 'times'. | |
| taskId | Yes | Identifier of the task. Must be a positive integer. | |
| endDate | No | End date for recurrence if repeatTill is 'date', in 'Y/m/d H:i' format. | |
| startDate | No | Recurrence start date, in Y/m/d H:i format. Defaults to the date the recurrence is saved. | |
| repeatTill | No | Condition for stopping recurrence. | |
| workdayOnly | No | If true, the task will only recur on workdays. Default is false. | |
| yearlyMonth2 | No | The month in which the task will recur. Starts with 0. | |
| yearlyWeekDay | No | The day of the week on which the task repetition will be created. Starts with 0. | |
| yearlyWeekDayNum | No | The week of the month in which the task will recur. Starts with 0. |
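Using the "fourth Thursday of November" example from the description, a minimal sketch of the argument payload. The task ID is hypothetical, and the weekday numbering (which day 0 maps to) is an assumption the schema leaves unspecified.

```python
# Hypothetical arguments for set_yearly_by_week_days_task_recurrence:
# "the fourth Thursday of November" every year (US Thanksgiving).
args = {
    "taskId": 42,           # assumed existing task ID
    "yearlyMonth2": 10,     # November, since months start with 0
    "yearlyWeekDayNum": 3,  # fourth week of the month (0-based)
    "yearlyWeekDay": 3,     # Thursday, ASSUMING 0 = Monday (unverified)
}
```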
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry full behavioral disclosure. It fails to mention whether this overwrites existing recurrence patterns, what the return value indicates, required permissions, or idempotency behavior. The agent cannot determine if this is destructive or safe to retry.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences total: first establishes purpose, second provides concrete usage example. Every sentence earns its place with zero redundancy or filler. Front-loaded with the specific operation type immediately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers the core recurrence pattern logic, but with 10 parameters, no output schema, and complex conditional logic (repeatTill determining which end-params are active), the description should explain parameter interdependencies or at least flag the 0-indexing convention for month/weekday values to prevent off-by-one errors.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, baseline is 3. The description conceptually maps 'day of the week and its order' to parameters (yearlyWeekDay/yearlyWeekDayNum), adding semantic context. However, it omits critical usage details present in the schema like 0-indexing ('Starts with 0') and the conditional relationship between repeatTill, times, and endDate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it 'Sets up a yearly recurrence for a task based on the day of the week and its order within a specific month' - specific verb, resource, and mechanism. The 'fourth Thursday of November' example perfectly distinguishes this from sibling tools like set_yearly_by_month_days_task_recurrence or set_monthly_by_week_days_task_recurrence.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('Use for complex annual patterns like...'), which helps the agent select this over simpler recurrence tools. However, lacks explicit when-not-to-use guidance or named alternatives (e.g., 'use set_yearly_by_month_days_task_recurrence for fixed dates like January 15th').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_check_list (C)
Updates the name or order of an existing checklist or checklist item.
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | New title. Must be a non-empty string or null to leave it unchanged. | |
| sortIndex | No | New sort index for reordering the checklists. Must be a non-negative integer or null to leave it unchanged. | |
| checkListId | Yes | Identifier of the checklist. Must be a positive integer. |
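A minimal sketch of an argument payload for this tool, renaming a checklist and moving it to the top in one call. The checklist ID and title are hypothetical values for illustration only.

```python
# Hypothetical arguments for update_check_list: rename the checklist
# and move it to the first position in a single call.
args = {
    "checkListId": 7,            # assumed existing checklist ID
    "title": "Release blockers", # non-empty string, or None to keep current
    "sortIndex": 0,              # non-negative integer; 0 = first position
}
```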
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It only states 'Updates' without clarifying side effects, error handling if null values are provided for both optional fields, or whether the operation is idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is appropriately concise and front-loaded with the verb 'Updates', but wastes precision by erroneously mentioning 'checklist item' which likely contradicts the intended scope given the parameter schema and sibling tools.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Identifies the updatable fields (name/order) for a checklist mutation, but fails to resolve the ambiguity with checklist item management or provide behavioral details expected for a mutation tool lacking annotations and output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with detailed definitions for title, sortIndex, and checkListId. The description maps 'name' to title and 'order' to sortIndex but adds no syntax guidance, validation details, or usage examples beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States 'Updates the name or order' providing a specific verb and resource, but ambiguously includes 'checklist item' when the schema only accepts checkListId and the sibling tool update_check_list_item exists separately. This creates confusion about whether the tool operates on items or the checklist itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus the sibling update_check_list_item, nor any prerequisites (e.g., obtaining checkListId). No alternatives, exclusions, or contextual conditions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_check_list_item (A)
Updates the name or order of an existing checklist item.
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | New title. Must be a non-empty string or null to leave it unchanged. | |
| itemId | No | Identifier of the checklist item. Must be a positive integer. | |
| sortIndex | No | New sort index for reordering the checklist items. Must be a non-negative integer or null to leave it unchanged. |
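A minimal sketch of a partial update with this tool, illustrating the null-means-unchanged convention from the schema. The item ID and title are hypothetical.

```python
# Hypothetical arguments for update_check_list_item: rename the item
# while leaving its position untouched (null / None = keep as-is).
args = {
    "itemId": 15,                     # assumed existing checklist item ID
    "title": "Verify backup restore", # new title
    "sortIndex": None,                # None/null = leave the order unchanged
}
```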
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full burden. It discloses this is a mutation operation supporting partial updates ('name or order'), but lacks disclosure on error conditions (e.g., item not found), idempotency, side effects, or return values.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. Front-loaded with the verb and subject. Every word earns its place without repetition of schema details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and the tool's narrow scope, the description is sufficient. One point deducted because with no annotations and no output schema, it could briefly mention error conditions or the requirement that at least one field must be provided for a meaningful update.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds valuable semantic mapping by referring to 'name' (clarifying 'title') and 'order' (clarifying 'sortIndex'), helping agents understand domain terminology beyond raw field names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Updates') with a clear resource ('checklist item') and scope ('name or order'). The word 'existing' distinguishes this from create_check_list_item, and 'item' distinguishes it from update_check_list (the parent list).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The word 'existing' implies the item must already exist, distinguishing from create_check_list_item. However, there is no explicit guidance on when to update vs delete, nor prerequisites like required identifiers (though schema implies optional, description doesn't clarify this).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_funnel_stages (B)
Updates the 'In Progress' stages for the funnel identified by categoryId. The default stages remain unchanged. Use this function to modify the custom stages of an existing CRM funnel.
| Name | Required | Description | Default |
|---|---|---|---|
| stages | Yes | Array of stage definitions (cannot be empty) | |
| categoryId | Yes | Identifier of the target funnel |
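A rough sketch of an argument payload for this tool. The table does not document the shape of a stage definition, so the minimal `{"name": ...}` objects below are an assumption and may not match the real schema; the category ID is likewise hypothetical.

```python
# Hypothetical arguments for update_funnel_stages. The stage-object
# shape is ASSUMED (the schema only says "array of stage definitions").
args = {
    "categoryId": 3,  # assumed existing funnel ID
    "stages": [       # must be a non-empty array, per the schema
        {"name": "Qualification"},
        {"name": "Proposal sent"},
        {"name": "Negotiation"},
    ],
}
```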
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. While it clarifies that default stages remain unchanged, it fails to describe critical mutation behaviors: whether this replaces the entire custom stages array or appends to it, what happens to deals currently in stages that are removed/modified, or whether the operation is reversible. For an update tool with array parameters, this omits essential safety context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with the primary action. There is minor redundancy between sentence 1 ('Updates...stages') and sentence 3 ('Use this function to modify...stages'), but overall it is efficient and every sentence contributes either scope clarification or usage guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool updates a nested array structure (stages) via replacement semantics implied by the array type, the description adequately covers the 'what' but lacks the 'how' regarding data integrity and side effects. Without an output schema or annotations, and being a mutation operation, it should disclose more about behavioral implications to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description explicitly references `categoryId` (tying it to 'funnel identified by'), adding slight semantic value. However, it does not add significant domain context beyond the schema's 'stage definitions' and 'In Progress' terminology already implied by the tool name.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the verb (Updates), resource ('In Progress' stages), and scope (custom stages of an existing funnel identified by categoryId). It distinguishes default from custom stages, helping differentiate from creation tools. However, it does not explicitly differentiate from the sibling tool `rename_funnel_stages`.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating 'Use this function to modify the custom stages of an existing CRM funnel' and notes that default stages remain unchanged. However, it lacks explicit guidance on when to use this versus `create_funnel_with_custom_stages` or `rename_funnel_stages`, and provides no information about prerequisites (e.g., whether the funnel must already exist).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_task (C)
Updates an existing task with the provided data. Requires task ID.
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | New title. Null to leave unchanged | |
| status | No | New status. Null to leave unchanged. | |
| taskId | Yes | Identifier of the task to update. Must be a positive integer. | |
| groupId | No | New group ID. Must be a positive integer. Null to leave unchanged. To detach from a group, use a dedicated tool. | |
| priority | No | New priority. 'high' marks it as important, 'average' unmarks it. Null to leave unchanged. | |
| creatorId | No | New creator's user ID. Must be a positive integer. Null to leave unchanged. | |
| description | No | New description. Null to leave unchanged. | |
| deadlineDate | No | New deadline in 'Y/m/d H:i' format. Null to leave unchanged. To clear the deadline, use a dedicated tool. | |
| parentTaskId | No | New parent task ID. Must be a positive integer. Null to leave unchanged. | |
| responsibleId | No | New responsible user's ID. Must be a positive integer. Null to leave unchanged. |
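A minimal sketch of a partial update with this tool, changing only the deadline and priority while other fields stay as they are (the schema's null-to-leave-unchanged convention). The task ID and date are hypothetical.

```python
# Hypothetical arguments for update_task: a partial update touching only
# the deadline and priority; fields set to None (or omitted) keep their
# current values.
args = {
    "taskId": 42,                        # required, positive integer
    "deadlineDate": "2025/06/30 18:00",  # 'Y/m/d H:i' format
    "priority": "high",                  # marks the task as important
    "title": None,                       # None/null = leave unchanged
}
```

Note that per the schema, clearing the deadline or detaching the task from a group cannot be done here and requires the dedicated tools instead.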
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, placing full burden on description. It omits critical behavioral details: partial vs full update semantics, that null means 'unchanged' rather than 'cleared', restrictions on clearing deadlines/detaching groups, and what the return value or error conditions are.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at two sentences with the action front-loaded. While no words are wasted, the brevity contributes to under-specification given the tool's complexity and lack of supporting annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a 10-parameter mutation tool with no annotations and no output schema. The description omits essential context about the PATCH-like behavior, the existence of specialized sibling tools for specific operations, and lacks any indication of success/failure responses.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description adds minimal value beyond schema, only redundantly noting that taskId is required (already indicated by schema's 'required' array and parameter description).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Updates') and resource ('task'), with 'existing' implying the target must already be present. However, it fails to clarify the partial-update semantics (null values leave fields unchanged) which is crucial for correct usage given the 9 optional parameters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions the prerequisite 'Requires task ID' but provides no guidance on when to use specialized sibling tools (clear_task_deadline, detach_task_from_group) instead of this tool, despite the schema indicating these operations are restricted/require dedicated tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.